Perovskites power up the solar industry

Tsutomu Miyasaka was on a mission to build a better solar cell. It was the early 2000s, and the Japanese scientist wanted to replace the delicate molecules that he was using to capture sunlight with a sturdier, more effective option.

So when a student told him about an unfamiliar material with unusual properties, Miyasaka had to try it. The material was “very strange,” he says, but he was always keen on testing anything that might respond to light.
Other scientists were running electricity through the material, called a perovskite, to generate light. Miyasaka, at Toin University of Yokohama in Japan, wanted to know if the material could also do the opposite: soak up sunlight and convert it into electricity. To his surprise, the idea worked. When he and his team replaced the light-sensitive components of a solar cell with a very thin layer of the perovskite, the illuminated cell pumped out a little bit of electric current.

The result, reported in 2009 in the Journal of the American Chemical Society, piqued the interest of other scientists, too. The perovskite’s properties made it (and others in the perovskite family) well-suited to efficiently generate energy from sunlight. Perhaps, some scientists thought, this perovskite might someday be able to outperform silicon, the light-absorbing material used in more than 90 percent of solar cells around the world.
Initial excitement quickly translated into promising early results. An important metric for any solar cell is how efficient it is — that is, how much of the sunlight that strikes its surface actually gets converted to electricity. By that standard, perovskite solar cells have shone, improving faster than any previous solar cell material in history: from the meager 3.8 percent efficiency Miyasaka’s team reported in 2009 to 22 percent this year. That puts the material almost on par with silicon, which scientists have tinkered with for more than 60 years to reach a similar efficiency level.
“People are very excited because [perovskite’s] efficiency number has climbed so fast. It really feels like this is the thing to be working on right now,” says Jao van de Lagemaat, a chemist at the National Renewable Energy Laboratory in Golden, Colo.

Now, perovskite solar cells are at something of a crossroads. Lab studies have proved their potential: They are cheaper and easier to fabricate than time-tested silicon solar cells. Though perovskites are unlikely to completely replace silicon, the newer materials could piggyback onto existing silicon cells to create extra-effective cells. Perovskites could also harness solar energy in new applications where traditional silicon cells fall flat — as light-absorbing coatings on windows, for instance, or as solar panels that work on cloudy days or even absorb ambient sunlight indoors.

Whether perovskites can make that leap, though, depends on current research efforts to fix some drawbacks. Their tendency to degrade under heat and humidity, for example, is not a great characteristic for a product meant to spend hours in the sun. So scientists are trying to boost stability without killing efficiency.

“There are challenges, but I think we’re well on our way to getting this stuff stable enough,” says Henry Snaith, a physicist at the University of Oxford. Finding a niche for perovskites in an industry so dominated by silicon, however, requires thinking about solar energy in creative ways.

Leaping electrons
Perovskites flew under the radar for years before becoming solar stars. The first known perovskite was a mineral, calcium titanate, or CaTiO3, discovered in the 19th century. Since then, the name has expanded to cover a class of compounds that share the mineral’s structure and chemical recipe — a 1:1:3 ingredient ratio — and that can be tweaked with different elements to make different “flavors.”

But the perovskites being studied for the light-absorbing layer of solar cells are mostly lab creations. Many are lead halide perovskites, which combine a lead ion and three ions of iodine or a related element, such as bromine, with a third type of ion (usually something like methylammonium). Those ingredients link together to form perovskites’ hallmark cagelike pyramid-on-pyramid structure. Swapping out different ingredients (replacing lead with tin, for instance) can yield many kinds of perovskites, all with slightly different chemical properties but the same basic crystal structure.
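To make that 1:1:3 recipe concrete, here is the general formula alongside two well-known members of the family: the original mineral and methylammonium lead iodide, the workhorse lab-made solar perovskite described above.

```latex
\[
\mathrm{ABX_3}: \qquad
\underbrace{\mathrm{CaTiO_3}}_{\text{the original mineral}}
\qquad
\underbrace{\mathrm{CH_3NH_3PbI_3}}_{\text{methylammonium lead iodide}}
\]
```

In each case one A ion and one B ion pair with three X ions; swapping what sits at each site is what produces the different flavors.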

Perovskites owe their solar skills to the way their electrons interact with light. When sunlight shines on a solar panel, photons — tiny packets of light energy — bombard the panel’s surface like a barrage of bullets and get absorbed. When a photon is absorbed into the solar cell, it can share some of its energy with a negatively charged electron. Electrons are attracted to the positively charged nucleus of an atom. But a photon can give an electron enough energy to escape that pull, much like a video game character getting a power-up to jump a motorbike across a ravine. As the energized electron leaps away, it leaves behind a positively charged hole. A separate layer of the solar cell collects the electrons, ferrying them off as electric current.

The amount of energy needed to kick an electron over the ravine is different for every material. And not all photon power-ups are created equal. Sunlight contains low-energy photons (infrared light) and high-energy photons (sunburn-causing ultraviolet radiation), as well as all of the visible light in between.

Photons with too little energy “will just sail right on through” the light-catching layer and never get absorbed, says Daniel Friedman, a photovoltaic researcher at the National Renewable Energy Lab. Only a photon that comes in with energy higher than the amount needed to power up an electron will get absorbed. But any excess energy a photon carries beyond what’s needed to boost up an electron gets lost as heat. The more heat lost, the more inefficient the cell.
Because the photons in sunlight vary so much in energy, no solar cell will ever be able to capture and optimally use every photon that comes its way. So you pick a material, like silicon, that’s a good compromise — one that catches a decent number of photons but doesn’t waste too much energy as heat, Friedman says.
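A back-of-envelope calculation shows the compromise. Silicon’s band gap of roughly 1.1 electron volts is a standard textbook figure, not a number from this story; plugging it into the usual photon-energy relation gives the cutoff wavelength:

```latex
\[
E_{\text{photon}} \approx \frac{1240\ \mathrm{eV\,nm}}{\lambda}
\qquad\Longrightarrow\qquad
\lambda_{\text{cutoff}} \approx \frac{1240\ \mathrm{eV\,nm}}{1.1\ \mathrm{eV}} \approx 1100\ \mathrm{nm}
\]
```

Infrared photons with wavelengths beyond about 1,100 nanometers sail through silicon unabsorbed, while a blue 450-nanometer photon carries about 2.8 eV and sheds well over half of that as waste heat.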

Although it has dominated the solar cell industry, silicon can’t fully use the energy from higher-energy photons; the material’s solar conversion efficiency tops out at around 30 percent in theory and has hit 20-some percent in practice. Perovskites could do better. The electrons inside perovskite crystals require a bit more energy to dislodge. So when higher-energy photons come into the solar cell, they devote more of their energy to dislodging electrons and generating electric current, and waste less as heat. Plus, by changing the ingredients and their ratios in a perovskite, scientists can adjust the photons it catches. Using different types of perovskites across multiple layers could allow solar cells to more effectively absorb a broader range of photons.

Perovskites have a second efficiency perk. When a photon excites an electron inside a material and leaves behind a positively charged hole, there’s a tendency for the electron to slide right back into a hole. This recombination, as it’s known, is inefficient — an electron that could have fed an electric current instead just stays put.

In perovskites, though, excited electrons usually migrate quite far from their holes, Snaith and others have found by testing many varieties of the material. That boosts the chances the electrons will make it out of the perovskite layer without landing back in a hole.

“It’s a very rare property,” Miyasaka says. It makes for an efficient sunlight absorber.

Some properties of perovskites also make them easier than silicon to turn into solar cells. Making a conventional silicon solar cell requires many steps, all done in just the right order at just the right temperature — something like baking a fragile soufflé. The crystals of silicon have to be perfect, because even small defects in the material can hurt its efficiency. The need for such precision makes silicon solar cells more expensive to produce.

Perovskites are more like brownies from a box — simpler, less finicky. “You can make it in an office, basically,” says materials scientist Robert Chang of Northwestern University in Evanston, Ill. He’s exaggerating, but only a little. Perovskites are made by essentially mixing a bunch of ingredients together and depositing them on a surface in a thin, even film. And while making crystalline silicon requires temperatures up to 2000° Celsius, perovskite crystals form at easier-to-reach temperatures — lower than 200°.

Seeking stability
In many ways, perovskites have become even more promising solar cell materials over time, as scientists have uncovered exciting new properties and finessed the materials’ use. But no material is perfect. So now, scientists are searching for ways to overcome perovskites’ real-world limitations. The most pressing issue is their instability, van de Lagemaat says. The high efficiency levels reported from labs often last only days or hours before the materials break down.

Tackling stability is a less flashy problem than chasing efficiency records, van de Lagemaat points out, which is perhaps why it’s only now getting attention. Stability isn’t a single number that you can flaunt, like an efficiency value. It’s also a bit harder to define, especially since how long a solar cell lasts depends on environmental conditions like humidity and precipitation levels, which vary by location.

Encapsulating the cell with water-resistant coatings is one strategy, but some scientists want to bake stability into the material itself. To do that, they’re experimenting with different perovskite designs. For instance, solar cells containing stacks of flat, graphenelike sheets of perovskites seem to hold up better than solar cells with the standard three-dimensional crystal and its interwoven layers.

In these 2-D perovskites, some of the methylammonium ions are replaced by something larger, like butylammonium. Swapping in the bigger ion forces the crystal to form in sheets just nanometers thick, which stack on top of each other like pages in a book, says chemist Aditya Mohite of Los Alamos National Laboratory in New Mexico. The butylammonium ion, which naturally repels water, forms spacer layers between the 2-D sheets and stops water from permeating into the crystal.
Getting the 2-D layers to line up just right has proved tricky, Mohite says. But by precisely controlling the way the layers form, he and colleagues created a solar cell that runs at 12.5 percent efficiency while standing up to light and humidity longer than a similar 3-D model, the team reported in 2016 in Nature. Although it was protected with a layer of glass, the 3-D perovskite solar cell lost performance rapidly, within a few days, while the 2-D perovskite withered only slightly. (After three months, the 2-D version was still working almost as well as it had been at the beginning.)

Despite the seemingly complex structure of the 2-D perovskites, they are no more complicated to make than their 3-D counterparts, says Mercouri Kanatzidis, a chemist at Northwestern and a collaborator on the 2-D perovskite project. With the right ingredients, he says, “they form on their own.”

His goal now is to boost the efficiency of 2-D perovskite cells, which don’t yet match up to their 3-D counterparts. And he’s testing different water-repelling ions to reach an ideal stability without sacrificing efficiency.

Other scientists have mixed 2-D and 3-D perovskites to create an ultra-long-lasting cell — at least by perovskite standards. A solar panel made of these cells ran at only 11 percent efficiency, but held up for 10,000 hours of illumination, or more than a year, according to research published in June in Nature Communications. And, importantly, that efficiency was maintained over an area of about 50 square centimeters, more on par with real-world conditions than the teeny-tiny cells made in most research labs.

A place for perovskites?
With boosts to their stability, perovskite solar cells are getting closer to commercial reality. And scientists are assessing where the light-capturing material might actually make its mark.

Some fans have pitted perovskites head-to-head with silicon, suggesting the newbie could one day replace the time-tested material. But a total takeover probably isn’t a realistic goal, says Sarah Kurtz, codirector of the National Center for Photovoltaics at the National Renewable Energy Lab.

“People have been saying for decades that silicon can’t get lower in cost to meet our needs,” Kurtz says. But, she points out, the price of solar energy from silicon-based panels has dropped far lower than people originally expected. There are a lot of silicon solar panels out there, and a lot of commercial manufacturing plants already set up to deal with silicon. That’s a barrier to a new technology, no matter how great it is. Other silicon alternatives face the same limitation. “Historically, silicon has always been dominant,” Kurtz says.
For Snaith, that’s not a problem. He cofounded Oxford Photovoltaics Limited, one of the first companies trying to commercialize perovskite solar cells. His team is developing a solar cell with a perovskite layer over a standard silicon cell to make a super-efficient double-decker cell. That way, Snaith says, the team can capitalize on the massive amount of machinery already set up to build commercial silicon solar cells.
A perovskite layer on top of silicon would absorb higher-energy photons and turn them into electricity. Lower-energy photons that couldn’t excite the perovskite’s electrons would pass through to the silicon layer, where they could still generate current. By combining multiple materials in this way, it’s possible to catch more photons, making a more efficient cell.
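Sketching that division of labor with typical literature values (roughly 1.6 eV for a common lead halide perovskite and 1.1 eV for silicon; these figures are illustrative, not from Snaith’s paper), the tandem splits the solar spectrum about like this:

```latex
\[
\underbrace{\lambda \lesssim 775\ \mathrm{nm}}_{\text{absorbed by the perovskite }(E \gtrsim 1.6\ \mathrm{eV})}
\qquad
\underbrace{775\ \mathrm{nm} \lesssim \lambda \lesssim 1100\ \mathrm{nm}}_{\text{passes through to the silicon }(1.1 \lesssim E \lesssim 1.6\ \mathrm{eV})}
\]
```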

That idea isn’t new, Snaith points out: For years, scientists have been layering various solar cell materials in this way. But these double-decker cells have traditionally been expensive and complicated to make, limiting their applications. Perovskites’ ease of fabrication could change the game. Snaith’s team is seeing some improvement already, bumping the efficiency of a silicon solar cell from 10 to 23.6 percent by adding a perovskite layer, for example. The team reported that result online in February in Nature Energy.

Rather than compete with silicon solar panels for space on sunny rooftops and in open fields, perovskites could also bring solar energy to totally new venues.

“I don’t think it’s smart for perovskites to compete with silicon,” Miyasaka says. Perovskites excel in other areas. “There’s a whole world of applications where silicon can’t be applied.”

Silicon solar cells don’t work as well on rainy or cloudy days, or indoors, where light is less direct, he says. Perovskites shine in these situations. And while traditional silicon solar cells are opaque, very thin films of perovskites could be printed onto glass to make sunlight-capturing windows. That could be a way to bring solar power to new places, turning glassy skyscrapers into serious power sources, for example. Perovskites could even be printed on flexible plastics to make solar-powered coatings that charge cell phones.

That printing process is getting closer to reality: Scientists at the University of Toronto recently reported a way to make all layers of a perovskite solar cell at temperatures below 150° — including the light-absorbing perovskite layer, but also the background workhorse layers that carry the electrons away and funnel them into current. That could streamline and simplify the production process, making mass newspaper-style printing of perovskite solar cells more doable.

Printing perovskite solar cells on glass is also an area of interest for Oxford Photovoltaics, Snaith says. The company’s ultimate target is to build a perovskite cell that will last 25 years, as long as a traditional silicon cell.

Moon had a magnetic field for at least a billion years longer than thought

The moon had a magnetic field for at least 2 billion years, and maybe longer.

Analysis of a relatively young rock collected by Apollo astronauts reveals the moon had a weak magnetic field until 1 billion to 2.5 billion years ago, at least a billion years later than previous data showed. Extending this lifetime offers insights into how small bodies generate magnetic fields, researchers report August 9 in Science Advances. The result may also suggest how life could survive on tiny planets or moons.
“A magnetic field protects the atmosphere of a planet or moon, and the atmosphere protects the surface,” says study coauthor Sonia Tikoo, a planetary scientist at Rutgers University in New Brunswick, N.J. Together, the two protect the potential habitability of the planet or moon, possibly those far beyond our solar system.

The moon does not currently have a global magnetic field. Whether one ever existed was a question debated for decades (SN: 12/17/11, p. 17). On Earth, molten iron sloshes around in the planet’s outer core, and the churning of that electrically conductive fluid generates a magnetic field. This setup is called a dynamo. At 1 percent of Earth’s mass, the moon would have cooled too quickly to sustain such a long-lived roiling interior.
Magnetized rocks brought back by Apollo astronauts, however, revealed that the moon must have had some magnetizing force. The rocks suggested that the magnetic field was strong as far back as 4.25 billion years ago, early in the moon’s history, but then dwindled and maybe even shut off about 3.1 billion years ago.
Tikoo and colleagues analyzed fragments of a lunar rock collected along the southern rim of the moon’s Dune Crater during the Apollo 15 mission in 1971. The team determined the rock was 1 billion to 2.5 billion years old and found it was magnetized. The finding suggests the moon had a magnetic field, albeit a weak one, when the rock formed, the researchers conclude.
Such a drop in field strength suggests the dynamo driving it was powered in two distinct ways at different times, Tikoo says. Early on, Earth and the moon would have sat much closer together, allowing Earth’s gravity to tug on and spin the rocky exterior of the moon. That outer layer would have dragged against the liquid interior, generating friction and a very strong magnetic field (SN Online: 12/4/14).

Then slowly, starting about 3.5 billion years ago, the moon moved away from Earth, weakening the dynamo. But by that point, the moon would have started to cool, causing less dense, hotter material in the core to rise and denser, cooler material to sink, as in Earth’s core. This roiling of material would have sustained a weak field that lasted for at least a billion years, until the moon’s interior cooled, causing the dynamo to die completely, the team suggests.

The two-pronged explanation for the moon’s dynamo is “an entirely plausible idea,” says planetary scientist Ian Garrick-Bethell of the University of California, Santa Cruz. But researchers are just starting to create computer simulations of the strength of magnetic fields to understand how such weaker fields might arise. So it is hard to say exactly what generated the lunar dynamo, he says.

If the idea is correct, it may mean other small planets and moons could have similarly weak, long-lived magnetic fields. Having such an enduring shield could protect those bodies from harmful radiation, boosting the chances for life to survive.

Here are the paths of the next 15 total solar eclipses

August’s total solar eclipse won’t be the last time the moon cloaks the sun’s light. From now to 2040, for example, skywatchers around the globe can witness 15 such events.

Their predicted paths aren’t random scribbles. Solar eclipses occur in what’s called a Saros cycle — a period that lasts about 18 years, 11 days and eight hours, and is governed by the moon’s orbit. (Lunar eclipses follow a Saros cycle, too, which the Chaldeans first noticed probably around 500 B.C.)

Two total solar eclipses separated by that 18-years-and-change period are almost twins — compare this year’s eclipse with the Sept. 2, 2035 eclipse, for example. They take place at roughly the same time of year, at roughly the same latitude and with the moon at about the same distance from Earth. But those extra eight hours, during which the Earth has rotated an additional third of the way on its axis, shift the eclipse path to a different part of the planet.
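The arithmetic behind that shift is simple enough to check. Here is a minimal sketch; the Saros length of 6,585.32 days is the commonly quoted value, and the 2017 greatest-eclipse time of about 18:25 Universal Time is approximate.

```python
from datetime import datetime, timedelta

# One Saros cycle: about 18 years, 11 days and 8 hours
# (commonly quoted as 6,585.32 days).
saros = timedelta(days=6585, hours=8)

# Greatest eclipse of the Aug. 21, 2017 event, approximately (UT).
eclipse_2017 = datetime(2017, 8, 21, 18, 25)

print(eclipse_2017 + saros)  # 2035-09-02 02:25:00 -- the Sept. 2, 2035 twin

# The leftover third of a day lets Earth rotate an extra third of a turn,
# so the twin's path lands about 120 degrees of longitude to the west.
print(8 / 24 * 360)          # 120.0
```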
This cycle repeats over time, creating a family of eclipses called a Saros series. A series lasts 12 to 15 centuries and includes 70 or more eclipses. The solar eclipses of 2019 and 2037 belong to a different Saros series, so their paths, too, are shifted mimics of each other. Their tracks differ in shape from 2017’s because the moon is at a different place in its orbit when it passes between Earth and the sun. Paths are wider near the poles because there the moon’s shadow strikes Earth’s surface at a glancing angle.

Predicting and mapping past and future eclipses allows scientists “to examine the patterns of eclipse cycles, the most prominent of which is the Saros,” says astrophysicist Fred Espenak, who is retired from NASA’s Goddard Space Flight Center in Greenbelt, Md.

He would know. Espenak and his colleague Jean Meeus, a retired Belgian astronomer, have mapped solar eclipse paths from 2000 B.C. to A.D. 3000. For archaeologists and historians peering backward, the maps help match up accounts of long-ago eclipses with actual paths. For eclipse chasers peering forward, the data are an itinerary.

“I got interested in figuring out how to calculate eclipse paths for my own use, for planning … expeditions,” says Espenak, who was 18 when he witnessed his first total solar eclipse. It was in 1970, and he secured permission to drive the family car from southern New York to North Carolina to see it. Since then, Espenak, nicknamed “Mr. Eclipse,” has been to every continent, including Antarctica, for a total eclipse of the sun.

“It’s such a dramatic, spectacular, beautiful event,” he says. “You only get a few brief minutes, typically, of totality before it ends. After it’s over, you’re craving to see it again.”

Rumors swirl that LIGO snagged gravitational waves from a neutron star collision

Speculation is running rampant about potential new discoveries of gravitational waves, just as the latest search wound down August 25.

Publicly available logs from astronomical observatories indicate that several telescopes have been zeroing in on one particular region of the sky, potentially in response to a detection of ripples in spacetime by the Advanced Laser Interferometer Gravitational-Wave Observatory, LIGO. These records have raised hopes that, for the first time, scientists may have glimpsed electromagnetic radiation — light — produced in tandem with gravitational waves. That light would allow scientists to glean more information about the waves’ source. Several tweets from astronomers reporting rumors of a new LIGO detection have fanned the flames of anticipation and amplified hopes that the source may be a cosmic convulsion unlike any LIGO has seen before.
“There is a lot of excitement,” says astrophysicist Rosalba Perna of Stony Brook University in New York, who is not involved with the LIGO collaboration. “We are all very anxious to actually see the announcement.”

An Aug. 25 post on the LIGO collaboration’s website announced the end of the current round of data taking, which began November 30, 2016. Virgo, a gravitational wave detector in Italy, had joined forces with LIGO’s two detectors on August 1 (SN Online: 8/1/17). The three detectors will now undergo upgrades to improve their sensitivity. The update noted that “some promising gravitational-wave candidates have been identified in data from both LIGO and Virgo during our preliminary analysis, and we have shared what we currently know with astronomical observing partners.”

When LIGO detects gravitational waves, the collaboration alerts astronomers to the approximate location the waves seemed to originate from. The hope is that a telescope could pick up light from the aftermath of the cosmic catastrophe that created the gravitational waves — although no light has been found in previous detections.

LIGO previously detected three sets of gravitational waves from merging black holes (SN: 6/24/17, p. 6). Black hole coalescences aren’t expected to generate light that could be spotted by telescopes, but another prime candidate could: a smashup between two remnants of stars known as neutron stars. Scientists have been eagerly awaiting LIGO’s first detections of such mergers, which are suspected to be the sites where the universe’s heaviest elements are formed. An observation of a neutron star crash also could provide information about the ultradense material that makes up neutron stars.
Since mid-August, seemingly in response to a LIGO alert, several telescopes have observed a section of sky around the galaxy NGC 4993, located 134 million light-years away in the constellation Hydra. The Hubble Space Telescope has made at least three sets of observations in that vicinity, including one on August 22 seeking “observations of the first electromagnetic counterparts to gravitational wave sources.”

Likewise, the Chandra X-ray Observatory targeted the same region of sky on August 19. And records from the Gemini Observatory’s telescope in Chile indicate several potentially related observations, including one referencing “an exceptional LIGO/Virgo event.”

“I think it’s very, very likely that LIGO has seen something,” says astrophysicist David Radice of Princeton University, who is not affiliated with LIGO. But, he says, he doesn’t know whether its source has been confirmed as merging neutron stars.

LIGO scientists haven’t commented directly on the veracity of the rumor. “We have some substantial work to do before we will be able to share with confidence any quantitative results. We are working as fast as we can,” LIGO spokesperson David Shoemaker of MIT wrote in an e-mail.

Tabby’s star is probably just dusty, and still not an alien megastructure

Alien megastructures are out. The unusual fading of an oddball star is more likely caused by either clouds of dust or an abnormal cycle of brightening and dimming, two new papers suggest.

Huan Meng of the University of Arizona in Tucson and his colleagues suggest that KIC 8462852, known as Tabby’s star, is dimming thanks to an orbiting cloud of fine dust particles. The team observed the star with the infrared Spitzer and ultraviolet Swift space telescopes from October 2015 to December 2016 — the first observations of the star in multiple wavelengths of light. The star is dimming faster in short blue wavelengths than in longer infrared ones, a signature of fine dust: small particles block blue light more than red, whereas a solid object would block all wavelengths equally.
“That almost absolutely ruled out the alien megastructure scenario, unless it’s an alien microstructure,” Meng says.

Tabby’s star is most famous for suddenly dropping in brightness by up to 22 percent over the course of a few days (SN Online: 2/2/16). Later observations suggested the star is also fading by about 4 percent per year (SN: 9/17/16, p. 12), which Meng’s team confirmed in a paper posted online August 24 at arXiv.org.

Joshua Simon of the Observatories of the Carnegie Institution for Science in Pasadena, Calif., found a similar dimming in data on Tabby’s star from the All Sky Automated Survey going back to 2006. Simon and colleagues also found for the first time that the star grew brighter in 2014, and possibly in 2006, they reported in a paper August 25 at arXiv.org.

“That’s fascinating,” says astrophysicist Tabetha Boyajian of Louisiana State University in Baton Rouge. She first reported the star’s flickers in 2015 (the star is nicknamed for her) and is a coauthor on Meng’s paper. “We always speculated that it would brighten sometime. It can’t just get fainter all the time — otherwise it would disappear. This shows that it does brighten.”

The brightening could be due to a magnetic cycle like the sun’s, Simon suggests. But no known cycle makes a star brighten and dim by quite so much, so the star would still be odd.
Brian Metzger of Columbia University previously suggested that a ripped-up planet falling in pieces into the star could explain both the long-term and short-term dimming. He thinks that model still works, although it needs some tweaks.

“This adds some intrigue to what’s going on, but I don’t think it really changes the landscape,” says Metzger, who was not involved in the new studies. And newer observations could complicate things further: The star went through another bout of dimming between May and July. “I’m waiting to see the papers analyzing this recent event,” Metzger says.

50 years ago, West Germany embraced nuclear power

West German power companies have decided to go ahead with two nuclear power station projects…. Compared with the U.S. and Britain, Germany has been relatively backward in the application of nuclear energy…. The slow German start is only partly the result of restrictions placed upon German nuclear research after the war. — Science News, September 16, 1967

Update
Both East and West Germany embraced nuclear power until antinuclear protests in the 1970s gathered steam. In 1998, the unified German government began a nuclear phaseout, which Chancellor Angela Merkel halted in 2009. The 2011 Fukushima nuclear disaster in Japan caused a rapid reversal. Germany closed eight of its nuclear plants immediately, and announced that all nuclear power in the country would go dark by 2022 (SN Online: 6/1/11). A pivot to renewable energy — wind, solar, hydropower and biomass — produced 188 billion kilowatt-hours of electricity in 2016, nearly 32 percent of German electricity usage.

Hidden hoard hints at how ancient elites protected the family treasures

BOSTON — Long before anyone opened a bank account or rented a safe deposit box, wealth protection demanded a bit of guile and a broken beer jug. A 3,100-year-old jewelry stash was discovered in just such a vessel, unearthed from an ancient settlement in Israel called Megiddo in 2010. Now the find is providing clues to how affluent folk hoarded their valuables at a time when fortunes rested on fancy metalwork, not money.

At the fortress city of Megiddo, a high-ranking Canaanite family stashed jewelry in a beer jug and hid it in a courtyard’s corner under a bowl, possibly under a veil of cloth, Eran Arie of the Israel Museum in Jerusalem said November 17 at the annual meeting of the American Schools of Oriental Research.
The hoard’s owners removed the jug’s neck and inserted a bundle of 35 silver items, including earrings and a bracelet, which were wrapped in two linen cloths. Other valuables were then added to the jug, including around 1,300 beads of silver and electrum — an alloy of gold and silver — that had probably been threaded into an elaborate necklace. There were 10 additional pieces of electrum jewelry, including a pair of basket-shaped earrings, each displaying a carved, long-legged bird.
A Canaanite city palace stood only about 30 meters from the Iron Age building that housed the courtyard, Arie said. Due to the lesser building’s strategic location, its inhabitants must have held key government positions, he proposed. “For the family that lived there, the hoard represented the lion’s share of their wealth.” Those family members presumably fled around the time the structure that held the jewelry hoard was destroyed in a catastrophic event, possibly a battle.
The Megiddo hoard was hidden but not buried, giving its owners quick access to their valuables. But no one ever retrieved the treasure. “We will never know why no one returned to claim this hoard,” Arie said.

False alarms may be a necessary part of earthquake early warnings

Earthquake warning systems face a tough trade-off: To give enough time to take cover or shut down emergency systems, alerts may need to go out before it’s clear how strong the quake will be. And that raises the risk of false alarms, undermining confidence in any warning system.

A new study aims to quantify the best-case scenario for warning time from a hypothetical earthquake early warning system. The result? There is no magic formula for deciding when to issue an alert, the researchers report online March 21 in Science Advances.
“We have a choice when issuing earthquake warnings,” says study leader Sarah Minson, a seismologist at the U.S. Geological Survey, or USGS, in Menlo Park, Calif. “You have to think about your relative risk appetite: What is the cost of taking action versus the cost of the damage you’re trying to prevent?”

For locations far from a large quake’s origin, waiting for clear signs of risk before sending an alert may mean waiting too long for people to be able to take protective action. But for those tasked with managing critical infrastructure, such as airports, trains or nuclear power plants, an early warning even if false may be preferable to an alert coming too late (SN: 4/19/14, p. 16).

Alerts issued by earthquake early warning systems, called EEWs, are based on several parameters: the depth and location of the quake’s origin, its estimated magnitude and the ground properties, such as the types of soil and rock that seismic waves would travel through.

“The trick to earthquake early warning systems is that it’s a misnomer,” Minson says. Such systems don’t warn that a quake is imminent. Instead, they alert people that a quake has already happened, giving them precious seconds — perhaps a minute or two — to prepare for imminent ground shaking.
Estimating magnitude turns out to be a sticking point. It is impossible to distinguish a powerful earthquake in its earliest stages from a small, weak quake, according to a 2016 study by a team of researchers that included Men-Andrin Meier, a seismologist at Caltech who is also a coauthor of the new study. Estimating the magnitude of larger quakes also takes more time, because the fault rupture lasts perhaps several seconds longer — a significant chunk of time when it comes to EEW. And there is a trade-off in terms of distance: For locations farther away, there is less certainty that the shaking will reach that far.
In the new study, Minson, Meier and colleagues used standard ground-motion prediction equations to calculate the minimum quake magnitude that would produce shaking at any distance. Then, they calculated how quickly an EEW could estimate whether the quake would exceed that minimum magnitude to qualify for an alert. Finally, the team estimated how long it would take for the shaking to strike a location. Ultimately, they determined, EEW holds the greatest benefit for users who are willing to take action early, even with the risk of false alarms. The team hopes its paper provides a framework to help emergency response managers make those decisions.
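The heart of that framework is a cost-benefit rule. The toy version below is a hypothetical sketch in that spirit, not the authors’ actual model; the probabilities and costs are made up for illustration.

```python
# Toy alerting rule: warn when the expected loss from staying silent
# exceeds the expected loss from acting on a possible false alarm.
# All numbers below are hypothetical.

def should_alert(p_strong_shaking, cost_false_alarm, cost_missed_warning):
    expected_cost_of_alerting = (1 - p_strong_shaking) * cost_false_alarm
    expected_cost_of_silence = p_strong_shaking * cost_missed_warning
    return expected_cost_of_silence > expected_cost_of_alerting

# A nuclear plant, where being caught unprepared is catastrophic,
# should tolerate many false alarms ...
print(should_alert(0.05, cost_false_alarm=1, cost_missed_warning=100))  # True
# ... while a user facing modest stakes can afford to wait for certainty.
print(should_alert(0.05, cost_false_alarm=1, cost_missed_warning=5))    # False
```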

EEWs are already in operation around the world, from Mexico to Japan. USGS, in collaboration with researchers and universities, has been developing the ShakeAlert system for the earthquake-prone U.S. West Coast. It is expected to be rolled out this year, although plans for future expansion may be in jeopardy: President Trump’s proposed 2019 budget cuts the USGS program’s $8.2 million in funding. It’s unclear whether Congress will spare those funds.

The value of any alert system will ultimately depend on whether it fulfills its objective — getting people to take cover swiftly in order to save lives. “More than half of injuries from past earthquakes are associated with things falling on people,” says Richard Allen, a seismologist at the University of California, Berkeley who was not involved in the new study. “A few seconds of warning can more than halve the number of injuries.”

But the researchers acknowledge there is a danger in issuing too many false alarms. People may become complacent and ignore future warnings. “We are playing a precautionary game,” Minson says. “It’s a warning system, not a guarantee.”

Microplastics are in our bodies. Here’s why we don’t know the health risks

Tiny particles of plastic have been found everywhere — from the deepest place on the planet, the Mariana Trench, to the top of Mount Everest. And now more and more studies are finding that microplastics, defined as plastic pieces less than 5 millimeters across, are also in our bodies.

“What we are looking at is the biggest oil spill ever,” says Maria Westerbos, founder of the Plastic Soup Foundation, an Amsterdam-based nonprofit advocacy organization that works to reduce plastic pollution around the world. Nearly all plastics are made from fossil fuel sources. And microplastics are “everywhere,” she adds, “even in our bodies.”
In recent years, microplastics have been documented in all parts of the human lung, in maternal and fetal placental tissues, in human breast milk and in human blood. Microplastics scientist Heather Leslie, formerly of Vrije Universiteit Amsterdam, and colleagues found microplastics in blood samples from 17 of 22 healthy adult volunteers in the Netherlands. The finding, published last year in Environment International, confirms what many scientists have long suspected: These tiny bits can get absorbed into the human bloodstream.

“We went from expecting plastic particles to be absorbable and present in the human bloodstream to knowing that they are,” Leslie says.
The findings aren’t entirely surprising; plastics are all around us. Durable, versatile and cheap to manufacture, they are in our clothes, cosmetics, electronics, tires, packaging and so many more items of daily use. And the number of plastic materials on the market continues to increase. “There were around 3,000 [plastic materials] when I started researching microplastics over a decade ago,” Leslie says. “Now there are over 9,600. That’s a huge number, each with its own chemical makeup and potential toxicity.”

Though durable, plastics do degrade — weathered by water, wind, sunlight or heat, as in ocean environments or landfills, or worn down by friction, as with car tires, which shed plastic particles along roadways during driving and braking.

In addition to studying microplastic particles, researchers are also trying to get a handle on nanoplastics, particles less than 1 micrometer across. “The large plastic objects in the environment will break down into micro- and nanoplastics, constantly raising particle numbers,” says toxicologist Dick Vethaak of the Institute for Risk Assessment Sciences at Utrecht University in the Netherlands, who collaborated with Leslie on the study finding microplastics in human blood.

Nearly two decades ago, marine biologists began drawing attention to the accumulation of microplastics in the ocean and their potential to interfere with organism and ecosystem health (SN: 2/20/16, p. 20). But only in recent years have scientists started focusing on microplastics in people’s food and drinking water — as well as in indoor air.

Plastic particles are also intentionally added to cosmetics like lipstick, lip gloss and eye makeup to improve their feel and finish, and to personal care products, such as face scrubs, toothpastes and shower gels, for their cleansing and exfoliating properties. When washed off, these microplastics enter the sewage system. They can end up in the sewage sludge from wastewater treatment plants, which is used to fertilize agricultural lands, or even in treated water released into waterways.

What if any damage microplastics may do when they get into our bodies is not clear, but a growing community of researchers investigating these questions thinks there is reason for concern. Inhaled particles might irritate and damage the lungs, akin to the damage caused by other particulate matter. And although the composition of plastic particles varies, some contain chemicals that are known to interfere with the body’s hormones.

Currently there are huge knowledge gaps in our understanding of how these particles are processed by the human body.

How do microplastics get into our bodies?
Research points to two main entry routes into the human body: We swallow them and we breathe them in.

Evidence is growing that our food and water is contaminated with microplastics. A study in Italy, reported in 2020, found microplastics in everyday fruits and vegetables. Wheat and lettuce plants have been observed taking up microplastic particles in the lab; uptake from soil containing the particles is probably how they get into our produce in the first place.

Sewage sludge can contain microplastics not only from personal care products, but also from washing machines. One study looking at sludge from a wastewater treatment plant in southwest England found that if all the treated sludge produced there were used to fertilize soils, a volume of microplastic particles equivalent to what is found in more than 20,000 plastic credit cards could potentially be released into the environment each month.

On top of that, fertilizers are coated with plastic for controlled release, plastic mulch film is used as a protective layer for crops and water containing microplastics is used for irrigation, says Sophie Vonk, a researcher at the Plastic Soup Foundation.

“Agricultural fields in Europe and North America are estimated to receive far higher quantities of microplastics than global oceans,” Vonk says.
A recent pilot study commissioned by the Plastic Soup Foundation found microplastics in all blood samples collected from pigs and cows on Dutch farms, showing livestock are capable of absorbing some of the plastic particles from their feed, water or air. Of the beef and pork samples collected from farms and supermarkets as part of the same study, 75 percent showed the presence of microplastics. Multiple studies document that microplastic particles are also in fish muscle, not just the gut, and so are likely to be consumed when people eat seafood.

Microplastics are in our drinking water, whether it’s from the tap or bottled. The particles may enter the water at the source, during treatment and distribution, or, in the case of bottled water, from its packaging.

Results from studies attempting to quantify levels of human ingestion vary dramatically, but they suggest people might be consuming on the order of tens of thousands of microplastic particles per person per year. These estimates may change as more data come in, and they will likely vary depending on people’s diets and where they live. Plus, it is not yet clear how these particles are absorbed, distributed, metabolized and excreted by the human body, and if not excreted immediately, how long they might stick around.

Babies might face particularly high exposures. A small study of six infants and 10 adults found that the infants had more microplastic particles in their feces than the adults did. Research suggests microplastics can enter the fetus via the placenta, and babies could also ingest the particles via breast milk. The use of plastic feeding bottles and teething toys adds to children’s microplastics exposure.

Microplastic particles are also floating in the air. Research conducted in Paris to document microplastic levels in indoor air found concentrations ranging from three to 15 particles per cubic meter of air. Outdoor concentrations were much lower.

Airborne particles may turn out to be more of a concern than those in food. One study reported in 2018 compared the amount of microplastics in mussels harvested off Scotland’s coasts with the amount present in indoor air. People’s exposure to microplastic fibers settling out of the air during a meal was far higher than their exposure from eating the mussels themselves.

Extrapolating from this research, immunologist Nienke Vrisekoop of the University Medical Center Utrecht says, “If I keep a piece of fish on the table for an hour, it has probably gathered more microplastics from the ambient air than it has from the ocean.”
What’s more, a study of human lung tissue reported last year offers solid evidence that we are breathing in plastic particles. Microplastics showed up in 11 of 13 samples, including those from the upper, middle and lower lobes, researchers in England reported.

Perhaps good news: Microplastics seem unable to penetrate the skin. “The epidermis holds off quite a lot of stuff from the outside world, including [nano]particles,” Leslie says. “Particles can go deep into your skin, but so far we haven’t observed them passing the barrier, unless the skin is damaged.”

What do we know about the potential health risks?
Studies in mice suggest microplastics are not benign. Research in these test animals shows that lab exposure to microplastics can disrupt the gut microbiome, lead to inflammation, lower sperm quality and testosterone levels, and negatively affect learning and memory.

But some of these studies used concentrations that may not be relevant to real-world scenarios. Studies on the health effects of exposure in humans are just getting under way, so it could be years before scientists understand the actual impact in people.

Immunologist Barbro Melgert of the University of Groningen in the Netherlands has studied the effects of nylon microfibers on human tissue grown to resemble lungs. Exposure to nylon fibers reduced both the number and size of airways that formed in these tissues by 67 percent and 50 percent, respectively. “We found that the cause was not the microfibers themselves but rather the chemicals released from them,” Melgert says.

“Microplastics could be considered a form of air pollution,” she says. “We know air pollution particles tend to induce stress in our lungs, and it will probably be the same for microplastics.”

Vrisekoop is studying how the human immune system responds to microplastics. Her unpublished lab experiments suggest immune cells don’t recognize microplastic particles unless they have blood proteins, viruses, bacteria or other contaminants attached. But it is likely that such bits will attach to microplastic particles out in the environment and inside the body.

“If the microplastics are not clean … the immune cells [engulf] the particle and die faster because of it,” Vrisekoop says. “More immune cells then rush in.” This marks the start of an immune response to the particle, which could potentially trigger a strong inflammatory reaction or possibly aggravate existing inflammatory diseases of the lungs or gastrointestinal tract.
Some of the chemicals added to make plastic suitable for particular uses are also known to cause problems for humans: Bisphenol A, or BPA, is used to harden plastic and is a known endocrine disruptor that has been linked to developmental effects in children and problems with reproductive systems and metabolism in adults (SN: 7/18/09, p. 5). Phthalates, used to make plastic soft and flexible, are associated with adverse effects on fetal development and reproductive problems in adults along with insulin resistance and obesity. And flame retardants that make electronics less flammable are associated with endocrine, reproductive and behavioral effects.

“Some of these chemical products that I worked on in the past [like the polybrominated diphenyl ethers used as flame retardants] have been phased out or are prohibited to use in new products now [in the European Union and the United States] because of their neurotoxic or disrupting effects,” Leslie says.
Concerning chemicals
Bits of plastic floating in the world’s air and water contain chemicals that may pose risks to human health. A 2021 study identified more than 2,400 chemicals of potential concern found in plastics or used in their processing. Here are a few of the most worrisome.

Short-chain chlorinated paraffins are used as lubricants, flame retardants and plasticizers. They can cause cancer in lab rodents, but the mechanisms may not be relevant for human health.
The chlorinated compound mirex was once used as a flame retardant and can persist in the environment. It’s suspected of being a human carcinogen and may affect fertility.
2,4,6-Tri-tert-butylphenol is an antioxidant and ultraviolet stabilizer, added to plastics to prevent degradation. There’s evidence that it causes liver damage in lab animals with prolonged or repeated exposure.
Benzo(a)pyrene is a polyaromatic hydrocarbon that can be released when organic matter such as coal or wood burns. It is also produced in grilled meats. It has been shown to cause cancer, damage fertility and affect development in lab animals.
Dibutyl phthalate is a plasticizer that is known to cause endocrine disruption, may interfere with male fertility and has been shown to affect fetal development in lab animals.
Tetrabromobisphenol-A is a flame retardant that can cause cancer in lab animals and may be an endocrine disruptor. It is chemically related to bisphenol A, which has been linked to developmental effects in children.
SOURCE: H. WIESINGER, Z. WANG AND S. HELLWEG/ENVIRONMENTAL SCIENCE & TECHNOLOGY 2021
What are the open questions?
The first step in determining the risk of microplastics to human health is to better understand and quantify human exposure. Polyrisk — one of five large-scale research projects under CUSP, a multidisciplinary group of researchers and experts from 75 organizations across 21 European countries studying micro- and nanoplastics — is doing exactly that.

Immunotoxicologist Raymond Pieters, of the Institute for Risk Assessment Sciences at Utrecht University and coordinator of Polyrisk, and colleagues are studying people’s inhalation exposure in a number of real-life scenarios: near a traffic light, for example, where cars are likely to be braking, versus a highway, where vehicles are continuously moving. Other scenarios under study include an indoor sports stadium, as well as occupational scenarios like the textile and rubber industry.

Melgert wants to know how much microplastic is in our houses, what the particle sizes are and how much we breathe in. “There are very few studies looking at indoor levels [of microplastics],” she says. “We all have stuff in our houses — carpets, insulation made of plastic materials, curtains, clothes — that all give off fibers.”

Vethaak, who co-coordinates MOMENTUM, a consortium of 27 research and industry partners from the Netherlands and seven other countries studying microplastics’ potential effects on human health, is quick to point out that “any measurement of the degree of exposure to plastic particles is likely an underestimation.” In addition to research on the impact of microplastics, the group is also looking at nanoplastics. Studying and analyzing these smallest of plastics in the environment and in our bodies is extremely challenging. “The analytical tools and techniques required for this are still being developed,” Vethaak says.

Vethaak also wants to understand whether microplastic particles coated with bacteria and viruses found in the environment could spread these pathogens and increase infection rates in people. Studies have suggested that microplastics in the ocean can serve as safe havens for germs.

Alongside knowing people’s level of exposure to microplastics, the second big question scientists want to understand is what if any level of real-world exposure is harmful. “This work is confounded by the multitude of different plastic particle types, given their variations in size, shape and chemical composition, which can affect uptake and toxicity,” Leslie says. “In the case of microplastics, it will take several more years to determine what the threshold dose for toxicity is.”

Several countries have banned the use of microbeads in specific categories of products, including rinse-off cosmetics and toothpastes. But there are no regulations or policies anywhere in the world that address the release or concentrations of other microplastics — and there are very few consistent monitoring efforts. California has recently taken a step toward monitoring by approving the world’s first requirements for testing microplastics in drinking water sources. The testing will happen over the next several years.

Pieters is very pragmatic in his outlook: “We know ‘a’ and ‘b,’” he says. “So we can expect ‘c,’ and ‘c’ would [imply] a risk for human health.”

He is inclined to find ways to protect people now even if there is limited or uncertain scientific knowledge. “Why not take a stand for the precautionary principle?” he asks.

For people who want to follow Pieters’ lead, there are ways to reduce exposure.

“Ventilate, ventilate, ventilate,” Melgert says. She recommends not only proper ventilation, including opening your windows at home, but also regular vacuum cleaning and air purification. That can remove dust, which often contains microplastics, from surfaces and the air.

Consumers can also choose to avoid cosmetics and personal care products containing microbeads. Buying clothes made from natural fabrics like cotton, linen and hemp, instead of from synthetic materials like acrylic and polyester, helps reduce the shedding of microplastics during wear and during the washing process.

Specialized microplastics-removal devices, including laundry balls, laundry bags and filters that attach to washing machines, are designed to reduce the number of microfibers making it into waterways.

Vethaak recommends not heating plastic containers in the microwave, even if they claim to be food grade, and not leaving plastic water bottles in the sun.

Perhaps the biggest thing people can do is rely on plastics less. Reducing overall consumption will reduce plastic pollution, and so reduce microplastics sloughing into the air and water.

Leslie recommends functional substitution: “Before you purchase something, think if you really need it, and if it needs to be plastic.”

Westerbos remains hopeful that researchers and scientists from around the world can come together to find a solution. “We need all the brainpower we have to connect and work together to find a substitute to plastic that is not toxic and doesn’t last [in the environment] as long as plastic does,” she says.

In mice, anxiety isn’t all in the head. It can start in the heart

When you’re stressed and anxious, you might feel your heart race. Is your heart racing because you’re afraid? Or does your speeding heart itself contribute to your anxiety? Both could be true, a new study in mice suggests.

By artificially increasing the heart rates of mice, scientists were able to increase anxiety-like behaviors — ones that the team then calmed by turning off a particular part of the brain. The study, published in the March 9 Nature, shows that in high-risk contexts, a racing heart could go to your head and increase anxiety. The findings could offer a new angle for studying and, potentially, treating anxiety disorders.
The idea that body sensations might contribute to emotions in the brain goes back at least to one of the founders of psychology, William James, says Karl Deisseroth, a neuroscientist at Stanford University. In James’ 1890 book The Principles of Psychology, he put forward the idea that emotion follows what the body experiences. “We feel sorry because we cry, angry because we strike, afraid because we tremble,” James wrote.

The brain certainly can sense internal body signals, a phenomenon called interoception. But whether those sensations — like a racing heart — can contribute to emotion is difficult to prove, says Anna Beyeler, a neuroscientist at the French National Institute of Health and Medical Research in Bordeaux. She studies brain circuitry related to emotion and wrote a commentary on the new study but was not involved in the research. “I’m sure a lot of people have thought of doing these experiments, but no one really had the tools,” she says.

Deisseroth has spent his career developing those tools. He is one of the scientists who developed optogenetics — a technique that uses viruses to modify the genes of specific cells to respond to bursts of light (SN: 6/18/21; SN: 1/15/10). Scientists can use the flip of a light switch to activate or suppress the activity of those cells.
In the new study, Deisseroth and his colleagues used a light attached to a tiny vest over a mouse’s genetically engineered heart to change the animal’s heart rate. When the light was off, a mouse’s heart pumped at about 600 beats per minute. But when the team turned on a light that flashed at 900 beats per minute, the mouse’s heartbeat followed suit. “It’s a nice reasonable acceleration, [one a mouse] would encounter in a time of stress or fear,” Deisseroth explains.

When the mice felt their hearts racing, they showed anxiety-like behavior. In risky scenarios — like open areas where a little mouse might be someone’s lunch — the rodents slunk along the walls and lurked in darker corners. When pressing a lever for water that could sometimes be coupled with a mild shock, mice with normal heart rates still pressed without hesitation. But mice with racing hearts decided they’d rather go thirsty.

“Everybody was expecting that, but it’s the first time that it has been clearly demonstrated,” Beyeler says.
The researchers also scanned the animals’ brains to find areas that might be processing the increased heart rate. One of the biggest signals, Deisseroth says, came from the posterior insula (SN: 4/25/16). “The insula was interesting because it’s highly connected with interoceptive circuitry,” he explains. “When we saw that signal, [our] interest was definitely piqued.”

Using more optogenetics, the team reduced activity in the posterior insula, which decreased the mice’s anxiety-like behaviors. The animals’ hearts still raced, but they behaved more normally, spending some time in open areas of mazes and pressing levers for water without fear.
A lot of people are very excited about the work, says Wen Chen, the branch chief of basic medicine research for complementary and integrative health at the National Center for Complementary and Integrative Health in Bethesda, Md. “No matter what kind of meetings I go into, in the last two days, everybody brought up this paper,” says Chen, who wasn’t involved in the research.

The next step, Deisseroth says, is to look at other parts of the body that might affect anxiety. “We can feel it in our gut sometimes, or we can feel it in our neck or shoulders,” he says. Using optogenetics to tense a mouse’s muscles, or give them tummy butterflies, might reveal other pathways that produce fearful or anxiety-like behaviors.

Understanding the link between heart and head could eventually factor into how doctors treat panic and anxiety, Beyeler says. But the path between the lab and the clinic, she notes, is much more convoluted than that of the heart to the head.