Perovskites power up the solar industry

Tsutomu Miyasaka was on a mission to build a better solar cell. It was the early 2000s, and the Japanese scientist wanted to replace the delicate molecules that he was using to capture sunlight with a sturdier, more effective option.

So when a student told him about an unfamiliar material with unusual properties, Miyasaka had to try it. The material was “very strange,” he says, but he was always keen on testing anything that might respond to light.
Other scientists were running electricity through the material, called a perovskite, to generate light. Miyasaka, at Toin University of Yokohama in Japan, wanted to know if the material could also do the opposite: soak up sunlight and convert it into electricity. To his surprise, the idea worked. When he and his team replaced the light-sensitive components of a solar cell with a very thin layer of the perovskite, the illuminated cell pumped out a little bit of electric current.

The result, reported in 2009 in the Journal of the American Chemical Society, piqued the interest of other scientists, too. The perovskite’s properties made it (and others in the perovskite family) well-suited to efficiently generate energy from sunlight. Perhaps, some scientists thought, this perovskite might someday be able to outperform silicon, the light-absorbing material used in more than 90 percent of solar cells around the world.
Initial excitement quickly translated into promising early results. An important metric for any solar cell is how efficient it is — that is, how much of the sunlight that strikes its surface actually gets converted to electricity. By that standard, perovskite solar cells have shone, increasing in efficiency faster than any previous solar cell material in history. The meager 3.8 percent efficiency reported by Miyasaka’s team in 2009 is up to 22 percent this year. Today, the material is almost on par with silicon, which scientists have been tinkering with for more than 60 years to bring to a similar efficiency level.
“People are very excited because [perovskite’s] efficiency number has climbed so fast. It really feels like this is the thing to be working on right now,” says Jao van de Lagemaat, a chemist at the National Renewable Energy Laboratory in Golden, Colo.

Now, perovskite solar cells are at something of a crossroads. Lab studies have proved their potential: They are cheaper and easier to fabricate than time-tested silicon solar cells. Though perovskites are unlikely to completely replace silicon, the newer materials could piggyback onto existing silicon cells to create extra-effective cells. Perovskites could also harness solar energy in new applications where traditional silicon cells fall flat — as light-absorbing coatings on windows, for instance, or as solar panels that work on cloudy days or even absorb ambient sunlight indoors.

Whether perovskites can make that leap, though, depends on current research efforts to fix some drawbacks. Their tendency to degrade under heat and humidity, for example, is not a great characteristic for a product meant to spend hours in the sun. So scientists are trying to boost stability without killing efficiency.

“There are challenges, but I think we’re well on our way to getting this stuff stable enough,” says Henry Snaith, a physicist at the University of Oxford. Finding a niche for perovskites in an industry so dominated by silicon, however, requires thinking about solar energy in creative ways.

Leaping electrons
Perovskites flew under the radar for years before becoming solar stars. The first known perovskite was a mineral, calcium titanate, or CaTiO3, discovered in the 19th century. In more recent years, the name has expanded to cover a class of compounds with a similar structure and chemical recipe — a 1:1:3 ingredient ratio — that can be tweaked with different elements to make different “flavors.”

But the perovskites being studied for the light-absorbing layer of solar cells are mostly lab creations. Many are lead halide perovskites, which combine a lead ion and three ions of iodine or a related element, such as bromine, with a third type of ion (usually something like methylammonium). Those ingredients link together to form perovskites’ hallmark cagelike pyramid-on-pyramid structure. Swapping out different ingredients (replacing lead with tin, for instance) can yield many kinds of perovskites, all with slightly different chemical properties but the same basic crystal structure.

Perovskites owe their solar skills to the way their electrons interact with light. When sunlight shines on a solar panel, photons — tiny packets of light energy — bombard the panel’s surface like a barrage of bullets and get absorbed. When a photon is absorbed into the solar cell, it can share some of its energy with a negatively charged electron. Electrons are attracted to the positively charged nucleus of an atom. But a photon can give an electron enough energy to escape that pull, much like a video game character getting a power-up to jump a motorbike across a ravine. As the energized electron leaps away, it leaves behind a positively charged hole. A separate layer of the solar cell collects the electrons, ferrying them off as electric current.

The amount of energy needed to kick an electron over the ravine is different for every material. And not all photon power-ups are created equal. Sunlight contains low-energy photons (infrared light) and high-energy photons (sunburn-causing ultraviolet radiation), as well as all of the visible light in between.

Photons with too little energy “will just sail right on through” the light-catching layer and never get absorbed, says Daniel Friedman, a photovoltaic researcher at the National Renewable Energy Lab. Only a photon that comes in with energy higher than the amount needed to power up an electron will get absorbed. But any excess energy a photon carries beyond what’s needed to boost up an electron gets lost as heat. The more heat lost, the more inefficient the cell.
Because the photons in sunlight vary so much in energy, no solar cell will ever be able to capture and optimally use every photon that comes its way. So you pick a material, like silicon, that’s a good compromise — one that catches a decent number of photons but doesn’t waste too much energy as heat, Friedman says.
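To put rough numbers on that energy bookkeeping, here is a minimal worked example (silicon's band gap of about 1.1 electron volts is a standard textbook figure; the particular photon is an illustrative choice, not one from Friedman):

$$E_{\text{photon}} = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV}\cdot\text{nm}}{\lambda}, \qquad E_{\text{heat}} = E_{\text{photon}} - E_{\text{gap}}$$

A blue photon with a wavelength near 450 nanometers carries about 2.8 eV. Absorbed by silicon, with its gap of roughly 1.1 eV, it frees one electron and sheds the leftover 1.7 eV or so as heat; an infrared photon carrying less than 1.1 eV passes through unabsorbed.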

Although it has dominated the solar cell industry, silicon can’t fully use the energy from higher-energy photons; the material’s solar conversion efficiency tops out at around 30 percent in theory and has hit 20-some percent in practice. Perovskites could do better. The electrons inside perovskite crystals require a bit more energy to dislodge. So when higher-energy photons come into the solar cell, they devote more of their energy to dislodging electrons and generating electric current, and waste less as heat. Plus, by changing the ingredients and their ratios in a perovskite, scientists can adjust the photons it catches. Using different types of perovskites across multiple layers could allow solar cells to more effectively absorb a broader range of photons.

Perovskites have a second efficiency perk. When a photon excites an electron inside a material and leaves behind a positively charged hole, there’s a tendency for the electron to slide right back into a hole. This recombination, as it’s known, is inefficient — an electron that could have fed an electric current instead just stays put.

In perovskites, though, excited electrons usually migrate quite far from their holes, Snaith and others have found by testing many varieties of the material. That boosts the chances the electrons will make it out of the perovskite layer without landing back in a hole.

“It’s a very rare property,” Miyasaka says. It makes for an efficient sunlight absorber.

Some properties of perovskites also make them easier than silicon to turn into solar cells. Making a conventional silicon solar cell requires many steps, all done in just the right order at just the right temperature — something like baking a fragile soufflé. The crystals of silicon have to be perfect, because even small defects in the material can hurt its efficiency. The need for such precision makes silicon solar cells more expensive to produce.

Perovskites are more like brownies from a box — simpler, less finicky. “You can make it in an office, basically,” says materials scientist Robert Chang of Northwestern University in Evanston, Ill. He’s exaggerating, but only a little. Perovskites are made by essentially mixing a bunch of ingredients together and depositing them on a surface in a thin, even film. And while making crystalline silicon requires temperatures up to 2000° Celsius, perovskite crystals form at easier-to-reach temperatures — lower than 200°.

Seeking stability
In many ways, perovskites have become even more promising solar cell materials over time, as scientists have uncovered exciting new properties and finessed the materials’ use. But no material is perfect. So now, scientists are searching for ways to overcome perovskites’ real-world limitations. The most pressing issue is their instability, van de Lagemaat says. The high efficiency levels reported from labs often last only days or hours before the materials break down.

Tackling stability is a less flashy problem than chasing efficiency records, van de Lagemaat points out, which is perhaps why it’s only now getting attention. Stability isn’t a single number that you can flaunt, like an efficiency value. It’s also a bit harder to define, especially since how long a solar cell lasts depends on environmental conditions like humidity and precipitation levels, which vary by location.

Encapsulating the cell with water-resistant coatings is one strategy, but some scientists want to bake stability into the material itself. To do that, they’re experimenting with different perovskite designs. For instance, solar cells containing stacks of flat, graphenelike sheets of perovskites seem to hold up better than solar cells with the standard three-dimensional crystal and its interwoven layers.

In these 2-D perovskites, some of the methylammonium ions are replaced by something larger, like butylammonium. Swapping in the bigger ion forces the crystal to form in sheets just nanometers thick, which stack on top of each other like pages in a book, says chemist Aditya Mohite of Los Alamos National Laboratory in New Mexico. The butylammonium ion, which naturally repels water, forms spacer layers between the 2-D sheets and stops water from permeating into the crystal.
Getting the 2-D layers to line up just right has proved tricky, Mohite says. But by precisely controlling the way the layers form, he and colleagues created a solar cell that runs at 12.5 percent efficiency while standing up to light and humidity longer than a similar 3-D model, the team reported in 2016 in Nature. Although it was protected with a layer of glass, the 3-D perovskite solar cell lost performance rapidly, within a few days, while the 2-D perovskite withered only slightly. (After three months, the 2-D version was still working almost as well as it had been at the beginning.)

Despite the seemingly complex structure of the 2-D perovskites, they are no more complicated to make than their 3-D counterparts, says Mercouri Kanatzidis, a chemist at Northwestern and a collaborator on the 2-D perovskite project. With the right ingredients, he says, “they form on their own.”

His goal now is to boost the efficiency of 2-D perovskite cells, which don’t yet match up to their 3-D counterparts. And he’s testing different water-repelling ions to reach an ideal stability without sacrificing efficiency.

Other scientists have mixed 2-D and 3-D perovskites to create an ultra-long-lasting cell — at least by perovskite standards. A solar panel made of these cells ran at only 11 percent efficiency, but held up for 10,000 hours of illumination, or more than a year, according to research published in June in Nature Communications. And, importantly, that efficiency was maintained over an area of about 50 square centimeters, more on par with real-world conditions than the teeny-tiny cells made in most research labs.

A place for perovskites?
With boosts to their stability, perovskite solar cells are getting closer to commercial reality. And scientists are assessing where the light-capturing material might actually make its mark.

Some fans have pitted perovskites head-to-head with silicon, suggesting the newbie could one day replace the time-tested material. But a total takeover probably isn’t a realistic goal, says Sarah Kurtz, codirector of the National Center for Photovoltaics at the National Renewable Energy Lab.

“People have been saying for decades that silicon can’t get lower in cost to meet our needs,” Kurtz says. But, she points out, the price of solar energy from silicon-based panels has dropped far lower than people originally expected. There are a lot of silicon solar panels out there, and a lot of commercial manufacturing plants already set up to deal with silicon. That’s a barrier to a new technology, no matter how great it is. Other silicon alternatives face the same limitation. “Historically, silicon has always been dominant,” Kurtz says.
For Snaith, that’s not a problem. He cofounded Oxford Photovoltaics Limited, one of the first companies trying to commercialize perovskite solar cells. His team is developing a solar cell with a perovskite layer over a standard silicon cell to make a super-efficient double-decker cell. That way, Snaith says, the team can capitalize on the massive amount of machinery already set up to build commercial silicon solar cells.
A perovskite layer on top of silicon would absorb higher-energy photons and turn them into electricity. Lower-energy photons that couldn’t excite the perovskite’s electrons would pass through to the silicon layer, where they could still generate current. By combining multiple materials in this way, it’s possible to catch more photons, making a more efficient cell.
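The same back-of-the-envelope arithmetic shows why the double-decker helps (a sketch assuming a gap of about 1.6 eV, a typical value for the lead halide perovskites described earlier, not a figure reported by Snaith's team):

$$E_{\text{heat}}^{\text{perovskite}} = 2.8 - 1.6 = 1.2\ \text{eV}, \qquad E_{\text{heat}}^{\text{silicon}} = 2.8 - 1.1 = 1.7\ \text{eV}$$

The perovskite top layer turns the same 2.8 eV blue photon into current while wasting about half an electron volt less as heat, and photons carrying between 1.1 and 1.6 eV slip through to the silicon underneath, where they can still dislodge electrons.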

That idea isn’t new, Snaith points out: For years, scientists have been layering various solar cell materials in this way. But these double-decker cells have traditionally been expensive and complicated to make, limiting their applications. Perovskites’ ease of fabrication could change the game. Snaith’s team is seeing some improvement already, bumping the efficiency of a silicon solar cell from 10 to 23.6 percent by adding a perovskite layer, for example. The team reported that result online in February in Nature Energy.

Rather than compete with silicon solar panels for space on sunny rooftops and in open fields, perovskites could also bring solar energy to totally new venues.

“I don’t think it’s smart for perovskites to compete with silicon,” Miyasaka says. Perovskites excel in other areas. “There’s a whole world of applications where silicon can’t be applied.”

Silicon solar cells don’t work as well on rainy or cloudy days, or indoors, where light is less direct, he says. Perovskites shine in these situations. And while traditional silicon solar cells are opaque, very thin films of perovskites could be printed onto glass to make sunlight-capturing windows. That could be a way to bring solar power to new places, turning glassy skyscrapers into serious power sources, for example. Perovskites could even be printed on flexible plastics to make solar-powered coatings that charge cell phones.

That printing process is getting closer to reality: Scientists at the University of Toronto recently reported a way to make all layers of a perovskite solar cell at temperatures below 150° — not just the light-absorbing perovskite layer, but also the background workhorse layers that carry the electrons away and funnel them into current. That could streamline and simplify the production process, making mass newspaper-style printing of perovskite solar cells more doable.

Printing perovskite solar cells on glass is also an area of interest for Oxford Photovoltaics, Snaith says. The company’s ultimate target is to build a perovskite cell that will last 25 years, as long as a traditional silicon cell.

From day one, a frog’s developing brain is calling the shots

Frog brains get busy long before they’re fully formed. Just a day after fertilization, embryonic brains begin sending signals to far-off places in the body, helping oversee the layout of complex patterns of muscles and nerve fibers. And when the brain is missing, bodily chaos ensues, researchers report online September 25 in Nature Communications.

The results, from brainless embryos and tadpoles, broaden scientists’ understanding of the types of signals involved in making sure bodies develop correctly, says developmental biologist Catherine McCusker of the University of Massachusetts Boston. Scientists are familiar with short-range signals among nearby cells that help pattern bodies. But because these newly described missives travel all the way from the brain to the far reaches of the body, they are “the first example of really long-range signals,” she says.
Celia Herrera-Rincon of Tufts University in Medford, Mass., and colleagues came up with a simple approach to tease out the brain’s influence on the growing body. Just one day after fertilization, the scientists lopped off the still-forming brains of African clawed frog embryos. These embryos survive to become tadpoles even without brains, a quirk of biology that allowed the researchers to see whether the brain is required for the body’s development.
The answer was a definite — and surprising — yes, Herrera-Rincon says. Long before the brain is mature, it’s already organizing and guiding organ behavior, she says. Brainless tadpoles had bungled patterns of muscles. Normally, muscle fibers form a stacked chevron pattern. But in tadpoles lacking a brain, this pattern didn’t form correctly. “The borders between segments are all wonky,” says study coauthor Michael Levin, also of Tufts University. “They can’t keep a straight line.”
Nerve fibers that crisscross tadpoles’ bodies also grew in an abnormal pattern. Levin and colleagues noticed extra nerve fibers snaking across the brainless tadpoles in a chaotic pattern, “a nerve network that shouldn’t be there,” he says.

Muscle and nerve abnormalities are the most obvious differences. But brainless tadpoles probably have more subtle defects in other parts of their bodies, such as the heart. The search for those defects is the subject of ongoing experiments, Levin says.
In addition to keeping patterns on point, the young frog brain may protect its body from chemical assaults. A molecule that binds to certain proteins on cells in the body had no effect on normal embryos. But when given to brainless embryos, the same molecule caused their spinal cords and tails to grow crooked. These results suggest that early in development, brains keep embryos safe from agents that would otherwise cause harm.

“The brain is instructing cells that are really a long way away from it,” Levin says. While the precise identities of these long-range signals aren’t known, the researchers have some ideas. When brainless embryos were dosed with a drug that targets cells that typically respond to the chemical messenger acetylcholine, the muscle pattern improved. Similarly, the addition of a protein called HCN2 that can tweak the activity of cells also seemed to improve muscle development. More work is needed before scientists know whether these interventions are actually mimicking messaging from the early brain, and if so, how.

Frog development isn’t the same as mammalian development, but frog development “is pretty applicable to human biology,” McCusker says. In fundamental ways, humans and frogs are built from the same molecular toolbox, she says. So the results hint that a growing human brain might also interact similarly with a growing human body.

Here’s what really happened to Hanny’s Voorwerp

The weird glowing blob of gas known as Hanny’s Voorwerp was a 10-year-old mystery. Now, Lia Sartori of ETH Zurich and colleagues have come to a two-pronged solution.

Hanny van Arkel, then a teacher in the Netherlands, discovered the strange bluish-green voorwerp, Dutch for “object,” in 2008 as she was categorizing pictures of galaxies as part of the Galaxy Zoo citizen science project.

Further observations showed that the voorwerp was a glowing cloud of gas that stretched some 100,000 light-years from the core of a massive nearby galaxy called IC 2497. The glow came from radiation emitted by an actively feeding black hole in the galaxy.
To excite the voorwerp’s glow, the black hole and its surrounding accretion disk, the active galactic nucleus, or AGN, should have had the brightness of about 2.5 trillion suns; its radio emission, however, suggested the AGN emitted the equivalent of a relatively paltry 25,000 suns. Either the AGN was obscured by dust, or the black hole slowed its eating around 100,000 years ago, causing its brightness to plunge.

Sartori and colleagues made the first direct measurement of the AGN’s intrinsic brightness using NASA’s NuSTAR telescope, which observed IC 2497 in high-energy X-rays that cut through the dust.

They found that the AGN is both obscured by dust and dimmer than expected; the black hole’s feeding has slowed way down. The team reported on arXiv.org on November 20 that IC 2497’s heart is as bright as 50 billion to 100 billion suns, meaning it dropped in brightness by a factor of 50 in the past 100,000 years — a less dramatic drop than previously thought.
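That factor of 50 follows directly from the figures above, taking the lower end of the measured range:

$$\frac{L_{\text{needed, then}}}{L_{\text{measured, now}}} \approx \frac{2.5 \times 10^{12}\ L_{\odot}}{5 \times 10^{10}\ L_{\odot}} = 50$$

where the numerator is the brightness the AGN needed roughly 100,000 years ago to excite the voorwerp's glow and the denominator is what NuSTAR measured.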
“Both hypotheses that we thought before are true,” Sartori says.

Sartori plans to analyze NuSTAR observations of other voorwerpjes to see if their galaxies’ black holes are also in the process of shutting down — or even booting up.

“If you look at these clouds, you get information on how the black hole was in the past,” she says. “So we have a way to study how the activity of supermassive black holes varies on superhuman time scales.”

Editor’s note: This story was updated December 5, 2017, to clarify that the brightness measured by the researchers came from the accretion disk around an actively eating black hole, not the black hole itself.

Pollinators are usually safe from a Venus flytrap

Out of the hundreds of species of carnivorous plants found across the planet, none attract quite as much fascination as the Venus flytrap. Native to just a small section of North Carolina and South Carolina, these tiny plants can now be found around the world; they’re a favorite among gardeners, who grow them in homes and greenhouses.

Scientists, too, have long been intrigued by the plants and have extensively studied the famous trap. But far less is known about the flower that blooms on a stalk 15 to 35 centimeters above the traps — including what pollinates that flower.
“The rest of the plant is so incredibly cool that most folks don’t get past looking at the active trap leaves,” says Clyde Sorenson, an entomologist at North Carolina State University in Raleigh. Plus, notes Sorenson’s NCSU colleague Elsa Youngsteadt, an insect ecologist, because flytraps are native to just a small part of North and South Carolina, field studies can be difficult. And most people who raise flytraps cut off the flowers so the plant can put more energy into making traps.

Sorenson and Youngsteadt realized that the mystery of flytrap pollination was sitting almost literally in their backyard. So they and their colleagues set out to solve it. They collected flytrap flower visitors and prey from three sites in Pender County, North Carolina, on four days in May and June 2016, being careful not to damage the plants.

“This is one of the prettiest places where you could work,” Youngsteadt says. Venus flytraps are habitat specialists, found only in certain spots of longleaf pine savannas in the Carolinas. “They need plenty of sunlight but like their feet to be wet,” says Sorenson. In May and June, the spots of savanna where the flytraps grow are “just delightful,” he says. And other carnivorous plants can be found there, too, including pitcher plants and sundews.
The researchers brought their finds back to the lab for identification. They also cataloged what kind of pollen was on flower visitors, and how much.
Nearly 100 species of arthropods visited the flowers, the team reports February 5 in American Naturalist. “The diversity of visitors on those flowers was surprising,” says Youngsteadt. However, only three species — a sweat bee and two beetles — appeared to be the most important, as they were either the most frequent visitors or carriers of the most pollen.
The study also found little overlap between pollinators and prey. Only 13 species were found both in a trap and on a flower, and of the nine potential pollinators in that group, none were found in high numbers.

For a carnivorous plant, “you don’t want to eat your pollinators,” Sorenson says. Flytraps appear to be doing a good job of avoiding just that.

There are three ways that a plant can keep those groups separate, the researchers note. Flowers and traps could exist at different times of the year. However, that’s not the case with Venus flytraps. The plants produce the two structures at separate times, but traps stick around and are active during plant flowering.

Another possibility is the spatial separation of the two structures. Pollinators tended to be fliers, while prey were more often crawling arthropods, such as spiders and ants. This matches up with the high flowers and low traps. But the researchers would like to do some experiments that manipulate the heights of the structures to see just how much that separation matters, Youngsteadt says.

The third option is that different scents or colors produced by flowers and traps might lure different species to each structure. That’s another area for future study, Youngsteadt says. While attraction to scent and color is well documented for traps, little is known about those factors for the flowers.

Venus flytraps are considered vulnerable to extinction, threatened by humans, Sorenson notes. The plant’s habitat is being destroyed as the population of the Carolinas grows. What is left of the habitat is being degraded as fires are suppressed (fires help clear vegetation and keep sunlight shining on the flytraps). And people steal flytraps from the wild by the thousands.

While research into their pollinators won’t help with any of those threats, it could aid in future conservation efforts. “Anything we can do to better understand how this plant reproduces will be of use down the road,” Sorenson says.

But what really excites the scientists is that they discovered something new so close to home. “One of the most thrilling parts of all this,” Sorenson says, “is that this plant has been known to science for [so long], everyone knows it, but there’s still a whole lot of things to discover.”

The Neil Armstrong biopic ‘First Man’ captures early spaceflight’s terror

First Man is not a movie about the moon landing.

The Neil Armstrong biopic, opening October 12, follows about eight years of the life of the first man on the moon, and spends about eight minutes depicting the lunar surface. Instead of the triumphant ticker tape parades that characterize many movies about the space race, First Man focuses on the terror, grief and heartache that led to that one small step.

“It’s a very different movie and storyline than people expect,” says James Hansen, author of the 2005 biography of Armstrong that shares the film’s name and a consultant on the film.
The story opens shortly before Armstrong’s 2-year-old daughter, Karen, died of a brain tumor in January 1962. That loss hangs over the rest of the film, setting the movie’s surprisingly somber emotional tone. The cinematography is darker than most space movies. Colors are muted. Music is ominous or absent — a lot of scenes include only ambient sound, like a pen scratching on paper, a glass breaking or a phone clicking into the receiver.
Karen’s death also seems to motivate the rest of Armstrong’s journey. Getting a fresh start may have been part of the reason why the grieving Armstrong (portrayed by Ryan Gosling) applied to the NASA Gemini astronaut program, although he never explicitly says so. And without giving too much away, a private moment Armstrong takes at the edge of Little West crater on the moon recalls his enduring bond with his daughter.

Hansen’s book also makes the case that Karen’s death motivated Armstrong’s astronaut career. Armstrong’s oldest son, Rick, who was 12 when his father landed on the moon, agrees that it’s plausible. “But it’s not something that he ever really definitively talked about,” Rick Armstrong says.

Armstrong’s reticence about Karen — and almost everything else — is true to life. That’s not all the film got right. Gosling captured Armstrong’s gravitas as well as his humor, and Claire Foy as his wife, Janet Armstrong, “is just amazing,” Rick Armstrong says.

Beyond the performances, the filmmakers, including director Damien Chazelle and screenwriter Josh Singer, went to great lengths to make the technical aspects of spaceflight historically accurate. The Gemini and Apollo cockpits Gosling sits in are replicas of the real spacecraft, and he flipped switches and hit buttons that would have controlled real flight. Much of the dialog during space scenes was taken verbatim from NASA’s control room logs, Hansen says.

The result is a visceral sense of how frightening and risky those early flights were. The spacecraft rattled and creaked like they were about to fall apart. The scene of Armstrong’s flight on the 1966 Gemini 8 mission, which ended early when the spacecraft started spinning out of control and almost killed its passengers, is terrifying. The 1967 fire inside the Apollo 1 spacecraft, which killed astronauts Ed White, Gus Grissom and Roger Chaffee, is gruesome.

“We wanted to treat that one with extreme care and love and get it exactly right,” Hansen says. “What we have in that scene, none of it’s made up.”

Even when the filmmakers took poetic license, they did it in a historical way. For instance, a vomit-inducing gyroscope that Gosling rides during Gemini astronaut training was, in real life, used for the earlier Mercury astronauts, but not for Gemini. Since the Mercury astronauts never experienced the kind of dizzying rotation that the gyroscope mimicked, NASA dismantled it before the next group of astronauts arrived.

“They probably shouldn’t have dismantled it,” Hansen says — it did simulate what ended up happening in the Gemini 8 accident. So the filmmakers used the gyroscope experience as foreshadowing.

Meanwhile, present-day astronauts are not immune to harrowing brushes with death: a Russian Soyuz capsule carrying two astronauts malfunctioned October 11, and the astronauts had to evacuate in an alarming “ballistic descent.” NASA is currently talking about when and how to send astronauts back to the moon from American soil. The first commercial crew astronauts, who will test spacecraft built by Boeing and SpaceX, were announced in August.

First Man is a timely and sobering reminder of the risks involved in taking these giant leaps.

Loneliness is bad for brains

SAN DIEGO — Mice yanked out of their community and held in isolation show signs of brain damage.

After a month of being alone, the mice had smaller nerve cells in certain parts of the brain. Other brain changes followed, scientists reported at a news briefing November 4 at the annual meeting of the Society for Neuroscience.

It’s not known whether similar damage happens in the brains of isolated humans. If so, the results have implications for the health of people who spend much of their time alone, including the estimated tens of thousands of inmates in solitary confinement in the United States and elderly people in institutionalized care facilities.

The new results, along with other recent brain studies, clearly show that for social species, isolation is damaging, says neurobiologist Huda Akil of the University of Michigan in Ann Arbor. “There is no question that this is changing the basic architecture of the brain,” Akil says.
Neurobiologist Richard Smeyne of Thomas Jefferson University in Philadelphia and his colleagues raised communities of multiple generations of mice in large enclosures packed with toys, mazes and things to climb. When some of the animals reached adulthood, they were taken out and put individually into “a typical shoebox cage,” Smeyne said.

This abrupt switch from a complex society to isolation induced changes in the brain, Smeyne and his colleagues later found. The overall size of nerve cells, or neurons, shrank by about 20 percent after a month of isolation. That shrinkage held roughly steady over three months as the mice remained in isolation.
To the researchers’ surprise, after a month of isolation, the mice’s neurons had a higher density of spines — structures for making neural connections — on message-receiving dendrites. An increase in spines is a change that usually signals something positive. “It’s almost as though the brain is trying to save itself,” Smeyne said.

But by three months, the density of dendritic spines had decreased back to baseline levels, perhaps a sign that the brain couldn’t save itself when faced with continued isolation. “It’s tried to recover, it can’t, and we start to see these problems,” Smeyne said.

The researchers uncovered other worrisome signals, too, including reductions in a protein called BDNF, which spurs neural growth. Levels of the stress hormone cortisol changed, too. Compared with mice housed in groups, isolated mice also had more broken DNA in their neurons.

The researchers studied neurons in the sensory cortex, a brain area involved in taking in information, and the motor cortex, which helps control movement. It’s not known whether similar effects happen in other brain areas, Smeyne says.

It’s also not known how the neural changes relate to mice’s behavior. In people, long-term isolation can lead to depression, anxiety and psychosis. Brainpower is affected, too. Isolated people develop problems reasoning, remembering and navigating.

Smeyne is conducting longer-term studies aimed at figuring out the effects of neuron shrinkage on thinking skills and behavior. He and his colleagues also plan to return isolated mice to their groups to see if the brain changes can be reversed. Those types of studies get at an important issue, Akil says. “The question is, ‘When is it too far gone?’”

How locust ecology inspired an opera

Locust: The Opera finds a novel way to doom a soprano: species extinction.

The libretto, written by entomologist Jeff Lockwood of the University of Wyoming in Laramie, features a scientist, a rancher and a dead insect. The scientist tenor agonizes over why the Rocky Mountain locust went extinct at the dawn of the 20th century. He comes up with hypotheses, three of which unravel to music and frustration.

The project hatched in 2014. “Jeff got in his head, ‘Oh, opera is a good way to tell science stories,’ which takes a creative mind to think that,” says Anne Guzzo, who composed the music. Guzzo teaches music theory and composition at the University of Wyoming.
The Rocky Mountain locust brought famine and ruin to farms across the western United States. “This was a devastating pest that caused enormous human suffering,” Lockwood says. Epic swarms would suddenly descend on and eat vast swaths of cropland. “On the other hand, it was an iconic species that defined and shaped the continent.” Lockwood had written about the locust’s mysterious and sudden extinction in the 2004 book Locust, but the topic “begged in my mind for the grandeur of opera.” He spent several years mulling how to create a one-hour opera for three singers about the swarming grasshopper species.
Then the ghost of Hamlet’s father in “Amleto,” an opera based on Shakespeare’s play, inspired a breakthrough. Lockwood imagined a spectral soprano locust, who haunted a scientist until he figured out what killed her kind.

To make one locust soprano represent trillions, Guzzo challenged her music theory class to find ways of evoking the sound of a swarm. They tried snapping fingers, rattling cardstock and crinkling cellophane. But “the simplest answer was the most elegant,” Guzzo says — tasking the audience with shivering sheets of tissue paper in sequence, so that a great wave of rustling swept through the auditorium.

For the libretto, Lockwood took an unusually data-driven approach. After surveying opera lengths and word counts, he paced his work at 25 to 30 words per minute, policing himself sternly. If a scene was long by two words, he’d find two to cut.
He wrote the dialogue not in verse, but as conversation, some of it a bit professorial. Guzzo asked for a few line changes. “I just couldn’t get ‘manic expressions of fecundity’ to fit where I wanted it to,” she says.
Eventually, the scientist solves the mystery, but takes no joy in telling the beautiful locust ghost that humans had unwittingly doomed her kind by destroying vital locust habitat. For tragedy, Lockwood says, “there has to be a loss tinged with a kind of remorse.”

The opera, performed twice in Jackson, Wyo., will next be staged in March in Agadir, Morocco.

A gut-brain link for Parkinson’s gets a closer look

Martha Carlin married the love of her life in 1995. She and John Carlin had dated briefly in college in Kentucky, then lost touch until a chance meeting years later at a Dallas pub. They wed soon after and had two children. John worked as an entrepreneur and stay-at-home dad. In his free time, he ran marathons.

Almost eight years into their marriage, the pinky finger on John’s right hand began to quiver. So did his tongue. Most disturbing for Martha was how he looked at her. For as long as she’d known him, he’d had a joy in his eyes. But then, she says, he had a stony stare, “like he was looking through me.” In November 2002, a doctor diagnosed John with Parkinson’s disease. He was 44 years old.

Carlin made it her mission to understand how her seemingly fit husband had developed such a debilitating disease. “The minute we got home from the neurologist, I was on the internet looking for answers,” she recalls. She began consuming all of the medical literature she could find.

With her training in accounting and corporate consulting, Carlin was used to thinking about how the many parts of large companies came together as a whole. That kind of wide-angle perspective made her skeptical that Parkinson’s, which affects half a million people in the United States, was just a malfunction in the brain.
“I had an initial hunch that food and food quality was part of the issue,” she says. If something in the environment triggered Parkinson’s, as some theories suggest, it made sense to her that the disease would involve the digestive system. Every time we eat and drink, our insides encounter the outside world.

John’s disease progressed slowly and Carlin kept up her research. In 2015, she found a paper titled, “Gut microbiota are related to Parkinson’s disease and clinical phenotype.” The study, by neurologist Filip Scheperjans of the University of Helsinki, asked two simple questions: Are the microorganisms that populate the guts of Parkinson’s patients different than those of healthy people? And if so, does that difference correlate with the stooped posture and difficulty walking that people with the disorder experience? Scheperjans’ answer to both questions was yes.

Carlin had picked up on a thread from one of the newest areas of Parkinson’s research: the relationship between Parkinson’s and the gut. Other than a small fraction of cases that are inherited, the cause of Parkinson’s disease is unknown. What is known is that something kills certain nerve cells, or neurons, in the brain. Abnormally misfolded and clumped proteins are the prime suspect. Some theories suggest a possible role for head trauma or exposure to heavy metals, pesticides or air pollution.
People with Parkinson’s often have digestive issues, such as constipation, long before the disease appears. Since the early 2000s, scientists have been gathering evidence that the malformed proteins in the brains of Parkinson’s patients might actually first appear in the gut or nose (people with Parkinson’s also commonly lose their sense of smell).
From there, the theory goes, these proteins work their way into the nervous system. Scientists don’t know exactly where in the gut the misfolded proteins come from, or why they form, but some early evidence points to the body’s internal microbial ecosystem. In the latest salvo, scientists from Sweden reported in October that people who had their appendix removed had a lower risk of Parkinson’s years later (SN: 11/24/18, p. 7). The job of the appendix, which is attached to the colon, is a bit of a mystery. But the organ may play an important role in intestinal health.

If the gut connection theory proves true — still a big if — it could open up new avenues to one day treat or at least slow the disease.

“It really changes the concept of what we consider Parkinson’s,” Scheperjans says. Maybe Parkinson’s isn’t a brain disease that affects the gut. Perhaps, for many people, it’s a gut disease that affects the brain.

Gut feeling
London physician James Parkinson wrote “An essay on the shaking palsy” in 1817, describing six patients with unexplained tremors. Some also had digestive problems. (“Action of the bowels had been very much retarded,” he reported of one man.) He treated two people with calomel — a toxic, mercury-based laxative of the time — and noted that their tremors subsided.

But the digestive idiosyncrasies of the disease that later bore Parkinson’s name largely faded into the background for the next two centuries, until neuroanatomists Heiko Braak and Kelly Del Tredici, now at the University of Ulm in Germany, proposed that Parkinson’s disease might arise from the intestine. Writing in Neurobiology of Aging in 2003, they and their colleagues based their theory on autopsies of Parkinson’s patients.
The researchers were looking for Lewy bodies, which contain clumps of a protein called alpha-synuclein. The presence of Lewy bodies in the brain is a hallmark of Parkinson’s, though their exact role in the disease is still under investigation.

Lewy bodies form when alpha-synuclein, which is produced by neurons and other cells, starts curdling into unusual strands. The body encapsulates the abnormal alpha-synuclein and other proteins into the round Lewy body bundles. In the brain, Lewy bodies collect in the cells of the substantia nigra, a structure that helps orchestrate movement. By the time symptoms appear, much of the substantia nigra is already damaged.

Substantia nigra cells produce the chemical dopamine, which is important for movement. Levodopa, the main drug prescribed for Parkinson’s, is a synthetic replacement for dopamine. The drug has been around for a half-century, and while it can alleviate symptoms for a while, it does not slow the destruction of brain cells.

In patient autopsies, Braak and his team tested for the presence of Lewy bodies, as well as abnormal alpha-synuclein that had not yet become bundled together. Based on comparisons with people without Parkinson’s, the researchers found signs that Lewy bodies start to form in the nasal passages and intestine before they show up in the brain. Braak’s group proposed that Parkinson’s disease develops in stages, migrating from the gut and nose into the nerves to reach the brain.

Neural highway
Today, the idea that Parkinson’s might arise from the intestine, not the brain, “is one of the most exciting things in Parkinson’s disease,” says Heinz Reichmann, a neurologist at the University of Dresden in Germany. The Braak theory couldn’t explain how the Lewy bodies reach the brain, but Braak speculated that some sort of pathogen, perhaps a virus, might travel along the body’s nervous system, leaving a trail of Lewy bodies.

There is no shortage of passageways: The intestine contains so many nerves that it’s sometimes called the body’s second brain. And the vagus nerve offers a direct connection between those nerves in the gut and the brain (SN: 11/28/15, p. 18).

In mice, alpha-synuclein can indeed migrate from the intestine to the brain, using the vagus nerve like a kind of intercontinental highway, as Caltech researchers demonstrated in 2016 (SN: 12/10/16, p. 12). And Reichmann’s experiments have shown that mice that eat the pesticide rotenone develop symptoms of Parkinson’s. Other teams have shown similar reactions in mice that inhale the chemical. “What you sniff, you swallow,” he says.

To look at this idea another way, researchers have examined what happens to Parkinson’s risk when people have a weak or missing vagus nerve connection. There was a time when doctors thought that an overly eager vagus nerve had something to do with stomach ulcers. Starting around the 1970s, many patients had the nerve clipped as an experimental means of treatment, a procedure called a vagotomy. In one of the latest studies on vagotomy and Parkinson’s, researchers examined more than 9,000 patients with vagotomies, using data from a nationwide patient registry in Sweden. Among people who had the nerve cut down low, just above the stomach, the risk of Parkinson’s began dropping five years after surgery, eventually reaching a difference of about 50 percent compared with people who hadn’t had a vagotomy, the researchers reported in 2017 in Neurology.
The studies are suggestive, but by no means definitive. And the vagus nerve may not be the only possible link the gut and brain share. The body’s immune system might also connect the two, as one study published in January in Science Translational Medicine found. Study leader Inga Peter, a genetic epidemiologist at the Icahn School of Medicine at Mount Sinai in New York City, was looking for genetic contributors to Crohn’s disease, an inflammatory bowel condition that affects close to 1 million people in the United States.

She and a worldwide team studied about 2,000 people from an Ashkenazi Jewish population, which has an elevated risk of Crohn’s, and compared them with people without the disease. The research led Peter and colleagues to suspect the role of a gene called LRRK2. That gene is involved in the immune system — which mistakenly attacks the intestine in people who have Crohn’s. So it made sense for a variant of that gene to be involved in inflammatory disease. The researchers were thrown, however, when they discovered that versions of the gene also appeared to increase the risk for Parkinson’s disease.

“We refused to believe it,” Peter says. The finding, although just a correlation, suggested that whatever the gene was doing to the intestine might have something to do with Parkinson’s. So the team investigated the link further, reporting results in the August JAMA Neurology.

In their analysis of a large database of health insurance claims and prescriptions, the scientists found more evidence of inflammation’s role. People with inflammatory bowel disease were about 30 percent more likely to develop Parkinson’s than people without it. But among those who had filled prescriptions for anti-tumor necrosis factor drugs, anti-inflammatory medications that the researchers used as a marker for reduced inflammation, Parkinson’s risk was 78 percent lower than in people who had not filled those prescriptions.

Belly bacteria
Like Inga Peter, microbiologist Sarkis Mazmanian of Caltech came upon Parkinson’s disease almost by accident. He had long studied how the body’s internal bacteria interact with the immune system. At lunch one day with a colleague who was studying autism using a mouse version of the disease, Mazmanian asked if he could take a look at the animals’ intestines. Because of the high density of nerves in the intestine, he wanted to see if the brain and gut were connected in autism.

Neurons in the gut “are literally one cell layer away from the microbes,” he says. “That made me feel that at least the physical path or conduit was there.” He began to study autism, but wanted to switch to a brain disease with more obvious physical symptoms. When he learned that people with Parkinson’s disease often have a long history of digestive problems, he had his subject.

Mazmanian’s group examined mice that were genetically engineered to overproduce alpha-synuclein. He wanted to know whether the presence or absence of gut bacteria influenced symptoms that developed in the mice.

The results, reported in Cell in 2016, showed that when the mice were raised germ free — meaning their insides had no microorganisms — they showed no signs of Parkinson’s. The animals had no telltale gait or balance problems and no constipation, even though their bodies made alpha-synuclein (SN: 12/24/16 & 1/7/17, p. 10). “All the features of Parkinson’s in the animals were gone when the animals had no microbiome,” he says.

However, when gut microbes from people diagnosed with Parkinson’s were transplanted into the germ-free mice, the mice developed symptoms of the disease — symptoms that were much more severe than those in mice transplanted with microbes from healthy people.

Mazmanian suspects that something in the microbiome triggers the misfolding of alpha-synuclein. But this has not been tested in humans, and he is quick to say that this is just one possible explanation for the disease. “There’s likely no one smoking gun,” he says.

Microbial forces
If the microbiome is involved, what exactly is it doing to promote Parkinson’s? Microbiologist Matthew Chapman of the University of Michigan in Ann Arbor thinks it may have something to do with chemical signals that bacteria send to the body. Chapman studies biofilms, which occur when bacteria form resilient colonies. (Think of the slime on the inside of a drain pipe.)

Part of what makes biofilms so hard to break apart is that fibers called amyloids run through them. Amyloids are tight stacks of proteins, like columns of Legos. Scientists have long suspected that amyloids are involved in degenerative diseases of the brain, including Alzheimer’s. In Parkinson’s, amyloid forms of alpha-synuclein are found in Lewy bodies.

Despite amyloids’ bad reputation, the fibers themselves aren’t always undesirable, Chapman says. Sometimes they may provide a good way of storing proteins for future use, to be snapped off brick by brick as needed. Perhaps it’s only when amyloids form in the wrong place, like the brain, that they contribute to disease. Chapman’s lab group has found that E. coli bacteria, part of the body’s normal microbial population, produce amyloid forms of some proteins when they are under stress.

When gut bacteria produce amyloids, the body’s own cells could also be affected, wrote Chapman in 2017 in PLOS Pathogens with an unlikely partner: neurologist Robert Friedland of the University of Louisville School of Medicine in Kentucky. “This is a difficult field to study because it’s on the border of several fields,” Friedland says. “I’m a neurologist who has little experience in gastroenterology. When I talked about this to my colleagues who are gastroenterologists, they’ve never heard that bacteria make amyloid.”
Friedland and collaborators reported in 2016 in Scientific Reports that when E. coli in the intestines of rats started to produce amyloid, alpha-synuclein in the rats’ brains also congealed into the amyloid form. In their 2017 paper, Chapman and Friedland suggested that the immune system’s reaction to the amyloid in the gut might have something to do with triggering amyloid formation in the brain.

In other words, when gut bacteria get stressed and start to produce their own amyloids, those microbes may be sending cues to nearby neurons in the intestine to follow suit. “The question is, and it’s still an outstanding question, what is it that these bacteria are producing that is, at least in animals, causing alpha-synuclein to form amyloids?” Chapman says.

Head for a cure
There is, in fact, a long list of questions about the microbiome, says Scheperjans, the neurologist whose paper Martha Carlin first spotted. So far, studies of the microbiomes of human patients are largely limited to simple observations like his, and the potential for a microbiome connection has yet to reach deeply into the neurology community. But in October, for the second year in a row, Scheperjans says, the International Congress of Parkinson’s Disease and Movement Disorders held a panel discussing connections to the microbiome.

“I got interested in the gastrointestinal aspects because the patients complained so much about it,” he says. While his study found definite differences in the bacteria of people with Parkinson’s, it’s still too early to know how that might matter. But Scheperjans hopes that one day doctors may be able to test for microbiome changes that put people at higher risk for Parkinson’s, and restore a healthy microbe population through diet or some other means to delay or prevent the disease.
One way to slow the disease might be shutting down the mobility of misfolded alpha-synuclein before it has even reached the brain. In Science in 2016, neuroscientist Valina Dawson and colleagues at Johns Hopkins University School of Medicine and elsewhere described using an antibody to halt the spread of bad alpha-synuclein from cell to cell. The researchers are working now to develop a drug that could do the same thing.

The goal is to one day test for the early development of Parkinson’s and then be able to tell a patient, “Take this drug and we’re going to try to slow and prevent progression of disease,” she says.

For her part, Carlin is doing what she can to speed research into connections between the microbiome and Parkinson’s. She quit her job, sold her house and drained her retirement account to pour money into the cause. She donated to the University of Chicago to study her husband’s microbiome. And she founded a company called the BioCollective to aid in microbiome research, providing free collection kits to people with Parkinson’s. The 15,000 microbiome samples she has collected so far are available to researchers.

Carlin admits that the possibility of a gut connection to Parkinson’s can be a hard sell. “It’s a difficult concept for people to wrap their head around when you are taking a broad view,” she says. As she searches for answers, her husband, John, keeps going. “He drives, he runs biking programs in Denver for people with Parkinson’s,” she says. Anything to keep the wheels turning toward the future.

Why experts recommend ditching racial labels in genetic studies

Race should no longer be used to describe populations in most genetics studies, a panel of experts says.

Using race and ethnicity to describe study participants gives the mistaken impression that humans can be divided into distinct groups. Such labels have been used to stigmatize groups of people, but do not explain biological and genetic diversity, the panel convened by the U.S. National Academies of Sciences, Engineering and Medicine said in a report on March 14.

In particular, the term Caucasian should no longer be used, the committee recommends. The term, coined in the 18th century by German scientist Johann Friedrich Blumenbach to describe what he determined was the most beautiful skull in his collection, carries the false notion of white superiority, the panel says.

Worse, the moniker “has also acquired today the connotation of being an objective scientific term, and that’s what really led the committee to take objection with it,” says Ann Morning, a sociologist at New York University and a member of the committee that wrote the report. “It tends to reinforce this erroneous belief that racial categories are somehow objective and natural characterizations of human biological difference. We felt that it was a term that … should go into the dustbin of history.”

Similarly, the term “black race” shouldn’t be used because it implies that Black people are a distinct group, or race, that can be objectively defined, the panel says.

Racial definitions are problematic “because not only are they stigmatizing, they are historically wrong,” says Ambroise Wonkam, a medical geneticist at Johns Hopkins University and president of the African Society of Human Genetics. Race is often used as a proxy for genetic diversity. But “race cannot be used to capture diversity at all. Race doesn’t exist. There is only one race, the human race,” says Wonkam, who was not involved with the National Academies’ panel.

Race might be used in some studies to determine how genetic and social factors contribute to health disparities (SN: 4/5/22), but beyond that race has no real value in genetic research, Wonkam adds.

Researchers could use other identifiers, including geographical ancestry, to define groups of people in a study, Wonkam says. But those definitions need to be precise.

For instance, some researchers group Africans by language. But a Bantu-speaking person from Tanzania or Nigeria, where malaria is endemic, would have a much higher genetic risk of sickle cell disease than a Bantu-speaking person whose ancestors are from South Africa, where malaria has not existed for at least 1,000 years. (Changes in genes that make hemoglobin can protect against malaria (SN: 5/2/11) but cause life-threatening sickle cell disease.)

Genetic studies also have to account for movements of people and mixing among multiple groups, Wonkam says. And labeling must be consistent for all groups in a study, he says. Current studies sometimes compare continent-wide racial groups, such as Asian, with national groups, such as French or Finnish, and ethnic groups, such as Hispanic.

An argument for keeping race in rare cases
Removing race as a descriptor may be helpful for some groups, such as people of African descent, says Joseph Yracheta, a health disparities researcher and the executive director of the Native BioData Consortium, headquartered on the Cheyenne River Sioux reservation in South Dakota. “I understand why they want to get rid of race science for themselves, because in their case it’s been used to deny them services,” he says.

But Native Americans’ story is different, says Yracheta, who was not part of the panel. Native Americans’ unique evolutionary history has made them a valuable resource for genetics research. A small starting population and many thousands of years of isolation from humans outside the Americas have given Native Americans and Indigenous people in Polynesia and Australia some genetic features that may make it easier for researchers to find variants that contribute to health or disease, he says. “We’re the Rosetta stone for the rest of the planet.”

Native Americans “need to be protected, because not only are our numbers small, but we keep having things taken away from us since 1492. We don’t want this to be another casualty of colonialism.” Removing the label of Indigenous or Native American may erode tribal sovereignty and control over genetic data, he says.

The panel does recommend that genetic researchers clearly state why they used a particular descriptor and that they involve study populations in deciding which labels to use.

That community input is essential, Yracheta says. But the recommendations have no legal or regulatory weight, so he worries that this lack of teeth may allow researchers to ignore the wishes of study participants without fear of penalty.

Still seeking diversity in research participants
Genetics research has suffered from a lack of diversity of participants (SN: 3/4/21). To counteract the disparities, U.S. government regulations require researchers funded by the National Institutes of Health to collect data on the race and ethnicity of study participants. But because those racial categories are too broad and don’t consider the social and environmental conditions that may affect health, the labels are not helpful in most genetic analyses, the panel concluded.

Removing racial labels won’t hamper diversity efforts, as researchers will still seek out people from different backgrounds to participate in studies, says Brendan Lee, who is president of the American Society of Human Genetics. But taking race out of the equation should encourage researchers to think more carefully about the type of data they are collecting and how it might be used to support or refute racism, says Lee, a medical geneticist at Baylor College of Medicine in Houston, who was not part of the panel.

The report offers decision-making tools for determining what descriptors are appropriate for particular types of studies. But “while it is a framework, it is not a recipe where in every study we do A, B and C,” Lee says.

Researchers probably won’t instantly adopt the new practices, Lee says. “It is a process that will take time. I don’t think it is something we can expect in one week or one evening that we’ll all change over to this, but it is a very important first step.”

One Antarctic ice shelf gets half its annual snowfall in just 10 days

Just a few powerful storms in Antarctica can have an outsized effect on how much snow parts of the southernmost continent get. Those ephemeral storms, preserved in ice cores, might give a skewed view of how quickly the continent’s ice sheet has grown or shrunk over time.

Relatively rare extreme precipitation events are responsible for more than 40 percent of the total annual snowfall across most of the continent — and in some places, as much as 60 percent, researchers report March 22 in Geophysical Research Letters.

Climatologist John Turner of the British Antarctic Survey in Cambridge and his colleagues used regional climate simulations to estimate daily precipitation across the continent from 1979 to 2016. Then, the team zoomed in on 10 locations — representing different climates from the dry interior desert to the often snowy coasts and the open ocean — to determine regional differences in snowfall.

While snowfall amounts vary greatly by location, extreme events packed the biggest wallop along Antarctica’s coasts, especially on the floating ice shelves, the researchers found. For instance, the Amery ice shelf in East Antarctica gets roughly half of its annual precipitation — which typically totals about half a meter of snow — in just 10 days, on average. In 1994, the ice shelf got 44 percent of its entire annual precipitation on a single day in September.
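
To see how such a share can be computed, here is a minimal sketch in Python. The function name, the synthetic daily series and the 10-day cutoff are illustrative assumptions, not the study’s actual code or data:

```python
import numpy as np

def extreme_share(daily_precip, n_days=10):
    """Fraction of a year's precipitation delivered by its n wettest days.

    daily_precip: 1-D array of daily totals for one year (e.g., mm of
    water equivalent). Returns a value between 0 and 1.
    """
    daily = np.asarray(daily_precip, dtype=float)
    total = daily.sum()
    if total == 0:
        return 0.0
    # Sort descending and sum the n largest daily totals.
    top = np.sort(daily)[::-1][:n_days].sum()
    return top / total

# Illustrative example: a mostly quiet year punctuated by a few big storms.
rng = np.random.default_rng(0)
year = rng.exponential(scale=0.5, size=365)            # light background snowfall
year[rng.choice(365, size=5, replace=False)] += 30.0   # five large storm days
print(f"Share from 10 snowiest days: {extreme_share(year):.0%}")
```

Ranking the days by their totals and summing the largest few is simple bookkeeping, but applied to decades of simulated daily precipitation it yields the kind of percentages quoted above.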

Ice cores aren’t just a window into the past; they are also used to predict the continent’s future in a warming world. So characterizing these coastal regions is crucial for understanding Antarctica’s ice sheet — and its potential future contribution to sea level rise.

Editor’s note: This story was updated April 5, 2019, to correct that the results were reported March 22 (not March 25).