New, greener catalysts are built for speed

Platinum, one of the rarest and most expensive metals on Earth, may soon find itself out of a job. Known for its allure in engagement rings, platinum is also treasured for its ability to jump-start chemical reactions. It’s an excellent catalyst, able to turn standoffish molecules into fast friends. But Earth’s supply of the metal is limited, so scientists are trying to coax materials that aren’t platinum — aren’t even metals — into acting like they are.

For years, platinum has been offering behind-the-scenes hustle in catalytic converters, which remove harmful pollutants from auto exhaust. It’s also one of a handful of rare metals that move along chemical reactions in many well-established industries. And now, clean energy technology opens a new and growing market for the metal. Energy-converting devices like fuel cells being developed to power some types of electric vehicles rely on platinum’s catalytic properties to transform hydrogen into electricity. Even generating the hydrogen fuel itself depends on platinum.

Without a cheaper substitute for platinum, these clean energy technologies won’t be able to compete against fossil fuels, says Liming Dai, a materials scientist at Case Western Reserve University in Cleveland.

To reduce the pressure on platinum, Dai and others are engineering new materials that have the same catalytic powers as platinum and other metals — without the high price tag. Some researchers are replacing expensive metals with cheaper, more abundant building blocks, like carbon. Others are turning to biology, using catalysts perfected by years of evolution as inspiration. And when platinum really is best for a job, researchers are retooling how it is used to get more bang for the buck.
Moving right along
Catalysts are the unsung heroes of the chemical reactions that make human society tick. These molecular matchmakers are used in manufacturing plastics and pharmaceuticals, petroleum and coal processing and now clean energy technology. Catalysts are even inside our bodies, in the form of enzymes that break food into nutrients and help cells make energy.
During any chemical reaction, molecules break chemical bonds between their atomic building blocks and then make new bonds with different atoms — like swapping partners at a square dance. Sometimes, those partnerships are easy to break: A molecule has certain properties that let it lure away atoms from another molecule. But in stable partnerships, the molecules are content as they are. Left together for a very long period of time, a few might eventually switch partners. But there’s no mass frenzy of bond breaking and rebuilding.

Catalysts make this breaking and rebuilding happen more efficiently by lowering the activation energy — the threshold amount of energy needed to make a chemical reaction go. Starting and ending products stay the same; the catalyst just changes the path, building a paved highway to bypass a bumpy dirt road. With an easier route, molecules that might take years to react can do so in seconds instead. A catalyst doesn’t get used up in the reaction, though. Like a wingman, it incentivizes other molecules to react, and then it bows out.
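How much a lower activation energy matters can be made concrete with the Arrhenius equation, a standard result from chemical kinetics (not part of the original article) relating a reaction's rate constant to its activation energy:

```latex
% k: rate constant, A: reaction-specific prefactor,
% E_a: activation energy, R: gas constant, T: absolute temperature
k = A \, e^{-E_a / (RT)}
```

Because the activation energy sits in a negative exponent, a modest drop has an outsized effect: at room temperature, RT is about 2.5 kilojoules per mole, so a catalyst that trims E_a by roughly 35 kilojoules per mole multiplies the rate by a factor of e^(35/2.5), about a million — turning a reaction that takes weeks into one that takes seconds.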

A hydrogen fuel cell, for example, works by reacting hydrogen gas (H2) with oxygen gas (O2) to make water (H2O) and electricity. The fuel cell needs to break apart the atoms of the hydrogen and oxygen molecules and reshuffle them into new molecules. Without some assistance, the reshuffling happens very slowly. Platinum propels those reactions along.
Platinum works well in fuel cell reactions because it interacts just the right amount with both hydrogen and oxygen. That is, the platinum surface attracts the gas molecules, pulling them close together to speed along the reaction. But then it lets its handiwork float free. Chemists call that “turnover” — how efficiently a catalyst can draw in molecules, help them react, then send them back out into the world.
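The reactions platinum shepherds in a hydrogen fuel cell can be written out as balanced half-reactions; this split, with hydrogen oxidized at one electrode and oxygen reduced at the other, is the standard chemistry of a proton-exchange-membrane cell:

```latex
\begin{aligned}
\text{anode:}   &\quad 2\,\mathrm{H_2} \rightarrow 4\,\mathrm{H^+} + 4\,e^- \\
\text{cathode:} &\quad \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2O} \\
\text{overall:} &\quad 2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O}
\end{aligned}
```

The electrons stripped from hydrogen at the anode travel through an external circuit before rejoining the reaction at the cathode — that current is the cell's electrical output.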

Platinum isn’t the only superstar catalyst. Other metals with similar chemical properties also get the job done — palladium, ruthenium and iridium, for example. But those elements are also expensive and hard to get. They are so good at what they do that it’s hard to find a substitute. But promising new options are in the works.
Carbon is key
Carbon is a particularly attractive alternative to precious metals like platinum because it’s cheap, abundant and can be assembled into many different structures.

Carbon atoms can arrange themselves into flat sheets of orderly hexagonal rings, like chicken wire. Rolling these chicken wire sheets — known as graphene — into hollow tubes makes carbon nanotubes, which are stronger than steel for their weight. But carbon-only structures don’t make great catalysts.

“Really pure graphene isn’t catalytically active,” says Huixin He, a chemist at Rutgers University in Newark, N.J. But replacing some of the carbon atoms in the framework with nitrogen, phosphorus or other atoms changes the way electric charge is distributed throughout the material. And that can make carbon behave more like a metal. For example, nitrogen atoms sprinkled like chocolate chips into the carbon structure draw negatively charged electrons away from the carbon atoms. The carbon atoms are left with a more positive charge, making them more attractive to the reaction that needs a nudge.

That movement of electrical charge is a prerequisite for a material to act as a catalyst, says Dai, who has pioneered the development of carbon-based, metal-free catalysts. His lab group demonstrated in 2009 in Science that clumps of nitrogen-containing carbon nanotubes aligned vertically — like a fistful of uncooked spaghetti — could stand in for platinum to help break apart oxygen inside fuel cells.
To perfect the technology, which he has patented, Dai has been swapping in different atoms in different combinations and experimenting with various carbon structures. Should the catalyst be a flat sheet of graphene or a forest of rolled up nanotubes, or some hybrid of both? Should it contain just nitrogen and carbon, or a smorgasbord of other elements, too? The answer depends on the specific application.

In 2015 in Science Advances, Dai demonstrated that nitrogen-studded nanotubes worked in acid-containing fuel cells, one of the most promising designs for electric vehicles.

Other researchers are playing their own riffs on the carbon concept. Producing graphene’s orderly structure requires just the right temperature and specific reaction conditions. Amorphous carbon materials — in which the atoms are randomly clumped together — can be easier to make, Rutgers’ He says.

In one experiment, He’s team started with liquid phytic acid, a substance made of carbon, oxygen and phosphorus. Microwaving the liquid for less than a minute transformed it into a sooty black powder that she describes as a sticky sort of sand.

“Phytic acid strongly absorbs microwave energy and changes it to heat so fast,” she says. The heat rearranges the atoms into a jumbled carbon structure studded with phosphorus atoms. Like the nitrogen atoms in Dai’s nanotubes, the phosphorus atoms changed the movement of electric charge through the material and made it catalytically active, He and colleagues reported last year in ACS Nano.

The sooty phytic acid–based catalyst could help move along a different form of clean energy: It sped up a reaction that turns a big, hard-to-use molecule found in cellulose — a tough, woody component of plants — into something that can react with other molecules. That product could then be used to make fuel or other chemicals. He is still tweaking the catalyst to make it work better.

The catalyst particles He makes get mixed into the chemical reaction (and later need to be strained out). These more jumbled carbon structures with nitrogen or phosphorus sprinkled in can work in fuel cells, too — and, she says, they’re easier to make than graphene.

Enzyme-inspired energy
Rather than design new materials from the bottom up, some scientists are repurposing catalysts already used in nature: enzymes. Inside living things, enzymes are involved in everything from copying genetic material to breaking down food and nutrients.

Enzymes have a few advantages as catalysts, says M.G. Finn, a chemist at Georgia Tech. They tend to be very specific for a particular reaction, so they won’t waste much energy propelling undesired side reactions. And because they can evolve, enzymes can be tailored to meet different needs.

On their own, enzymes can be too fragile to use in industrial manufacturing, says Trevor Douglas, a chemist at Indiana University in Bloomington. For a solution, his team looked to viruses, which already package enzymes and other proteins inside protective cases.

“We can use these compartments to stabilize the enzymes, to protect them from things that might chew them up in the environment,” Douglas says. The researchers are engineering bacteria to churn out virus-inspired capsules that can be used as catalysts in a variety of applications.
His team mostly uses enzymes called hydrogenases, but other enzymes can work, too. The researchers put the genetic instructions for making the enzymes and for building a protective coating into Escherichia coli bacteria. The bacteria go into production mode, pumping out particles with the hydrogenase enzymes protected inside, Douglas and colleagues reported last year in Nature Chemistry. The protective coating keeps chunky enzymes contained, but lets the molecules they assist get in and out.

“What we’ve done is co-opt the biological processes,” Douglas says. “All we have to do is grow the bacteria and turn on these genes.” Bacteria, he points out, tend to grow quite easily. It’s a sustainable system, and one that’s easily tailored to different reactions by swapping out one enzyme for another.

The enzyme-containing particles can speed along the generation of hydrogen fuel, he has found. But there are still technical challenges: These catalysts last only a couple of days, and figuring out how to replace them inside a consumer device is hard.

Other scientists are using existing enzymes as templates for catalysts of their own design. The same family of hydrogenase enzymes that Douglas is packaging into capsules can be a launching point for lab-built catalysts that are even more efficient than their natural counterparts.

One of these hydrogenases has an iron core plus an amine — a nitrogen-containing string of atoms — hanging off. Just as the nitrogen worked into Dai’s carbon nanotubes affected the way electrons were distributed throughout the material, the amine changes the way the rest of the molecule acts as a catalyst.

Morris Bullock, a researcher at Pacific Northwest National Laboratory in Richland, Wash., is trying to figure out exactly how that interaction plays out. He and colleagues are building catalysts with cheap and abundant metals like iron and nickel at their core, paired with different types of amines. By systematically varying the metal core and the structure and position of the amine, they’re testing which combinations work best.

These amine-containing catalysts aren’t ready for prime time yet — Bullock’s team is focused on understanding how the catalysts work rather than on perfecting them for industry. But the findings provide a springboard for other scientists to push these catalysts toward commercialization.

Sticking with the metals
These new types of catalysts are promising — many of them can speed up reactions almost as well as a traditional platinum catalyst. But even researchers working on platinum alternatives agree that making sustainable and low-cost catalysts isn’t always as simple as removing the expensive and rare metals.

“The calculation of sustainability is not completely straightforward,” Finn says. Though he works with enzymes in his lab, he says, “a platinum-based catalyst that lasts for years is probably going to be more sustainable than an enzyme that degrades.” It might end up being cheaper in the long run, too. That’s why researchers working on these alternative catalysts are pushing to make their products more stable and longer-lasting.
It’s also why many scientists haven’t given up on metal. “I don’t think you can say, ‘Let’s do without metals,’ ” says James Clark, a chemist at the University of York in England. “Certain metals have a certain functionality that’s going to be very hard to replace.” But, he adds, there are ways to use metals more efficiently, such as using nanoparticle-sized pieces that have a higher surface area than a flat sheet, or strategically combining small amounts of a rare metal with cheaper, more abundant nickel or iron. Changing the structure of the material on a nanoscale level also can make a difference.

“If you think about a catalyst, it’s really the atoms on the surface that participate in the reaction. Those in the bulk may just provide mechanical support or are just wasted,” says Younan Xia, a chemist at Georgia Tech. Xia is working on minimizing that waste.

One promising approach is to shape platinum into what Xia dubs “nanocages” — instead of a solid cube of metal, just the edges remain, like a frame.

In one experiment, Xia started with cubes of a different rare metal, palladium. He coated the palladium cubes with a thin layer of platinum just a few atoms thick — a pretty straightforward process. Then, a chemical etched away the palladium inside, leaving a hollow platinum skeleton. Because the palladium is removed from the final product, it can be used again and again. And the nanocage structure leaves less unused metal buried inside than a large flat sheet or a solid cube, Xia reported in 2015 in Science.

Since then, Xia’s team has been developing more complex shapes for the nanocages. An icosahedron, a ball with 20 triangular faces, worked especially well. The slight disorder in the structure — the atoms don’t crystallize quite perfectly — helped make it four times as active as a commercial platinum catalyst. He has made similar cages out of other rare metals, like rhodium, that could work as catalysts for other reactions.

It’ll take more work before any of these new catalysts fully dethrone platinum and other precious metals. But once they do, that’ll leave more precious metals to use in places where they can truly shine.

Bacteria genes offer new strategy for sterilizing mosquitoes

A pair of bacterial genes may enable genetic engineering strategies for curbing populations of virus-transmitting mosquitoes.

Bacteria that make the insects effectively sterile have been used to reduce mosquito populations. Now, two research teams have identified genes in those bacteria that may be responsible for the sterility, the groups report online February 27 in Nature and Nature Microbiology.

“I think it’s a great advance,” says Scott O’Neill, a biologist with the Institute of Vector-Borne Disease at Monash University in Melbourne, Australia. People have been trying for years to understand how the bacteria manipulate insects, he says.
Wolbachia bacteria “sterilize” male mosquitoes through a mechanism called cytoplasmic incompatibility, which affects sperm and eggs. When an infected male breeds with an uninfected female, his modified sperm kill the eggs after fertilization. When he mates with a likewise infected female, however, her eggs remove the sperm modification and develop normally.

Researchers from Vanderbilt University in Nashville pinpointed a pair of genes, called cifA and cifB, connected to the sterility mechanism of Wolbachia. The genes are located not in the DNA of the bacterium itself, but in a virus embedded in its chromosome.

When the researchers took two genes from the Wolbachia strain found in fruit flies and inserted the pair into uninfected male Drosophila melanogaster, the flies could no longer reproduce with healthy females, says Seth Bordenstein, a coauthor of the study published in Nature. But modified uninfected male flies could successfully reproduce with Wolbachia-infected females, perfectly mimicking how the sterility mechanism functions naturally.

The ability of infected females to “rescue” the modified sperm reminded researchers at the Yale School of Medicine of an antidote’s reaction to a toxin.

They theorized that the gene pair consisted of a toxin gene, cidB, and an antidote gene, cidA. The researchers inserted the toxin gene into yeast, activated it, and saw that the yeast was killed. But when both genes were present and active, the yeast survived, says Mark Hochstrasser, a coauthor of the study in Nature Microbiology.
Hochstrasser’s team also created transgenic flies, but used the strain of Wolbachia that infects common Culex pipiens mosquitoes.

Inserting the two genes into males could be used to control populations of Aedes aegypti mosquitoes, which can carry diseases such as Zika and dengue.

The sterility effect from Wolbachia doesn’t always kill 100 percent of the eggs, says Bordenstein. Adding more pairs of the genes to the bacteria could make the sterilization more potent, creating a “super Wolbachia.”

You could also avoid infecting the mosquitoes altogether, says Bordenstein. By inserting the two genes into uninfected males and releasing them into populations of wild mosquitoes, you could “essentially crash the population,” he says.

Hochstrasser notes that the second method is safer in case Wolbachia have any long-term negative effects.

O’Neill, who directs a research program called Eliminate Dengue that releases Wolbachia-infected mosquitoes, cautions against mosquito population control through genetic engineering because of public concerns about the technology. “We think it’s better that we focus on a natural alternative,” he says.

Earth’s mantle may be hotter than thought

Temperatures across Earth’s mantle are about 60 degrees Celsius higher than previously thought, a new experiment suggests. Such toasty temperatures would make the mantle runnier than earlier research suggested, a development that could help explain the details of how tectonic plates glide on top of the mantle, geophysicists report in the March 3 Science.

“Scientists have been arguing over the mantle temperature for decades,” says study coauthor Emily Sarafian, a geophysicist at the Woods Hole Oceanographic Institution in Massachusetts and at MIT. “Scientists will argue over 10 degree changes, so changing it by 60 degrees is quite a large jump.”
The mostly solid mantle sits between Earth’s crust and core and makes up around 84 percent of Earth’s volume. Heat from the mantle fuels volcanic eruptions and drives plate tectonics, but taking the mantle’s temperature is trickier than dropping a thermometer down a hole.

Scientists know from the paths of earthquake waves and from measures of how electrical charge moves through Earth that a boundary in the mantle exists a few dozen kilometers below Earth’s surface. Above that boundary, mantle rock can begin melting on its way up to the surface. By mimicking the extreme conditions in the deep Earth — squeezing and heating bits of mantle that erupt from undersea volcanoes or similar rocks synthesized in the lab — scientists can also determine the melting temperature of mantle rock. Using these two facts, scientists have estimated that temperatures at the boundary depth below Earth’s oceans are around 1314° C to 1464° C when adjusted to surface pressure.

But the presence of water in the collected mantle bits, primarily peridotite rock, which makes up much of the upper mantle, has caused problems for researchers’ calculations. Water can drastically lower the melting point of peridotite, but researchers can’t prevent the water content from changing over time. In previous experiments, scientists tried to completely dry peridotite samples and then manually correct for measured mantle water levels in their calculations. The scientists, however, couldn’t tell for sure if the samples were water-free.

The measurement difficulties stem from the fact that peridotite is a mix of the minerals olivine and pyroxene, and the mineral grains are too small to experiment with individually. Sarafian and colleagues overcame this challenge by inserting spheres of pure olivine large enough to study into synthetic peridotite samples. These spheres exchanged water with the surrounding peridotite until they had the same dampness, and so could be used for water content measurements.

Using this technique, the researchers found that the “dry” peridotite used in previous experiments wasn’t dry at all. In fact, the water content was spot on for the actual wetness of the mantle. “By assuming the samples are dry, then correcting for mantle water content, you’re actually overcorrecting,” Sarafian says.
The new experiment suggests that, if adjusted to surface pressure, the mantle under the eastern Pacific Ocean where two tectonic plates diverge, for example, would be around 1410° C, up from 1350° C. A hotter mantle is less viscous and more malleable, Sarafian says. Scientists have long been puzzled about some of the specifics of plate tectonics, such as to what extent the mantle resists the movement of the overlying plate. That resistance depends in part on the mix of rock, temperature and how melted the rock is at the boundary between the two layers (SN: 3/7/15, p. 6). This new knowledge could give researchers more accurate information on those details.

The revised temperature is only for the melting boundary in the mantle, so “it’s not the full story,” notes Caltech geologist Paul Asimow, who wrote a perspective on the research in the same issue of Science. He agrees that the team’s work provides a higher and more accurate estimate of that adjusted temperature, but he doesn’t think the researchers should assume temperatures elsewhere in the mantle would be boosted by a similar amount. “I’m not so sure about that,” he says. “We need further testing of mantle temperatures.”

Ancient dental plaque tells tales of Neandertal diet and disease

Dental plaque preserved in fossilized teeth confirms that Neandertals were flexible eaters and may have self-medicated with an ancient equivalent of aspirin.

DNA recovered from calcified plaque on teeth from four Neandertal individuals suggests that those from the grasslands around Belgium’s Spy cave ate woolly rhinoceros and wild sheep, while their counterparts from the forested El Sidrón cave in Spain consumed a menu of moss, mushrooms and pine nuts.

The evidence bolsters an argument that Neandertals’ diets spanned the spectrum of carnivory and herbivory based on the resources available to them, Laura Weyrich, a microbiologist at the University of Adelaide in Australia, and her colleagues report March 8 in Nature.

The best-preserved Neandertal remains were from a young male from El Sidrón whose teeth showed signs of an abscess. DNA from a diarrhea-inducing stomach bug and several gum disease pathogens turned up in his plaque. Genetic material from poplar trees, which contain the pain-killing aspirin ingredient salicylic acid, and from a plant mold that makes the antibiotic penicillin hints that he may have used natural medication to ease his ailments.

The researchers were even able to extract an almost-complete genetic blueprint, or genome, for one ancient microbe, Methanobrevibacter oralis. At roughly 48,000 years old, it’s the oldest microbial genome sequenced, the researchers report.

Shocking stories tell tale of London Zoo’s founding

When Tommy the chimpanzee first came to London’s zoo in the fall of 1835, he was dressed in an old white shirt.

Keepers gave him a new frock and a sailor hat and set him up in a cozy spot in the kitchen to weather the winter. Visitors flocked to get a look at the little ape roaming around the keepers’ lodge, curled up in the cook’s lap or tugging on her skirt like a toddler. Tommy was a hit — the zoo’s latest star.
Six months later, he was dead.

Tommy’s sorrowful story comes near the middle of Isobel Charman’s latest book, The Zoo, a tale of the founding of the Gardens of the Zoological Society of London, known today as the London Zoo. The book lays out a grand saga of human ambition and audacity, but it’s the animals’ stories — their lives and deaths and hardships — that catch hold of readers and don’t let go.

Charman, a writer and documentary producer, resurrects almost three decades of history, beginning in 1824, when the zoo was still just a fantastical idea: a public menagerie of animals “that would allow naturalists to observe the creatures scientifically.”

It was a long, hard path to that lofty dream, though: In the zoo’s early years, exotic creatures were nearly impossible to keep alive. Charman unloads a numbing litany of animal misery that batters the reader like a boxer working over a speed bag. Kangaroos hurl themselves at fences, monkeys attack each other in cramped, dark cages and an elephant named Jack breaks a tusk while smashing up his den. Charman’s parade of horrors boggles the mind, as does the sheer number of animals carted from all corners of the world to the cold, wet enclosures of the zoo.

Her story is an incredible piece of detective work, told through the eyes of many key players and famous figures, including Charles Darwin. Charman plumbs details from newspaper articles, diaries, census records and weather reports to craft a narrative of the time. She portrays a London that’s gritty, grimy and cold, where some aspects of science and medicine seem stuck in the Dark Ages. Doctors still used leeches to bleed patients, and no one had a clue how to care for zoo animals.
Zoo workers certainly tried — applying liniment to sores on a lion’s legs, prescribing opium for a sick puma and treating a constipated llama with purgatives. But nothing seemed to stop the endless conveyor belt that brought living animals in and carried dead ones out. Back then, caring for zoo animals was mostly a matter of trial and error, Charman writes. What seems laughably obvious now — animals need shelter in winter, cakes and buns aren’t proper food for elephants — took zookeepers years to figure out.

Over time the zoo adapted, making gradual changes that eventually improved the lives of its inhabitants. It seemed to morph, finally, from mostly “a playground of the privileged,” as Charman calls it, to a reliable place for scientific study, where curious people could learn about the “wild and wonderful” creatures within.

One of those people was Darwin, whose ideas about human origins clicked into place after he spent time with Jenny the orangutan. Her teasing relationship with her keeper, apparent understanding of language and utter likeness to people helped convince Darwin that humankind was just another branch on the tree of life, Charman writes.
Darwin’s work on the subject wouldn’t be published for decades, but in the meantime, the zoo’s early improvements seemed to have stuck. Over 30 years after Tommy the chimpanzee died in his keeper’s arms, a hippopotamus gave birth to “the first captive-bred hippo to be reared by its mother,” Charman notes. The baby hippo not only survived — she lived for 36 years.

Readers may wonder how standards for animal treatment have changed over time. But Charman sticks to history, rather than examining contrasts to modern zoos. Still, what she offers is gripping enough on its own: a bold, no-holds-barred look at one zoo’s beginning. It was impressive, no doubt. But it wasn’t pretty.

Random mutations play large role in cancer, study finds

Researchers have identified new enemies in the war on cancer: ones that are already inside cells and that no one can avoid.

Random mistakes made as stem cells divide are responsible for about two-thirds of the mutations in cancer cells, researchers from Johns Hopkins University report in the March 24 Science. Across all cancer types, environment and lifestyle factors, such as smoking and obesity, contribute 29 percent of cancer mutations, and 5 percent are inherited.
That finding challenges the common wisdom that cancer is the product of heredity and the environment. “There’s a third cause and this cause of mutations is a major cause,” says cancer geneticist Bert Vogelstein.

Such random mutations build up over time and help explain why cancer strikes older people more often. Knowing that the enemy will strike from within even when people protect themselves against external threats indicates that early cancer detection and treatment deserve greater attention than they have previously gotten, Vogelstein says.

Vogelstein and biomathematician Cristian Tomasetti proposed in 2015 that random mutations are the reason some organs are more prone to cancer than others. For instance, stem cells are constantly renewing the intestinal lining of the colon, which develops tumors more often than the brain, where cell division is uncommon. That report was controversial because it was interpreted as saying that most cancers are the result of “bad luck.” The analysis didn’t include breast and prostate cancers. Factoring in those common cancers might change the results, some scientists said. And because the researchers looked at only cancer within the United States, critics charged that the finding might not hold up when considering places around the world where different environmental factors, such as infections, affect cancer development.

In the new study, Vogelstein, Tomasetti and Hopkins colleague Lu Li examined data from 69 countries about 17 types of cancer, this time including breast and prostate. Again, the researchers found a strong link between cancer and tissues with lots of dividing stem cells. The team also used DNA data and epidemiological studies to calculate the proportions of mutations in cancer cells caused by heredity or environmental and lifestyle factors. Remaining mutations were attributed to random errors — including typos, insertions or deletions of genes, epigenetic changes (alterations of chemical tags on DNA or proteins that affect gene activity) and gene rearrangements. Such errors unavoidably happen when cells divide.
Usually cancer results after a cell accumulates many mutations. Some people will have accumulated a variety of cancer-associated mutations but won’t get cancer until some final insult goads the cell into becoming malignant (SN: 12/26/15, p. 28). For some tumors, all the mutations may be the hit-and-miss result of cell division mistakes. There’s no way to evade those cancers, Vogelstein says. Other malignancies may spring up as a result of different combinations of heritable, environmental and random mutations. Lung cancer and other tumor types that are strongly associated with environmentally caused mutations could be eluded by avoiding the carcinogen, even when most of the mutations that spur cancer growth arise from random mistakes, Tomasetti says.

“They are venturing into new territory,” says Giovanni Parmigiani, a biostatistician at the Harvard T.H. Chan School of Public Health. Tomasetti, Li and Vogelstein are the first to rigorously estimate the contribution of environment, heredity and DNA-copying errors to cancer, he says. “Perhaps the estimates will improve in the future, but theirs seems like a very solid starting point.”

Now that the Hopkins researchers have pointed it out, the relationship between dividing cells and cancer seems obvious, says biological physicist Bartlomiej Waclaw of the University of Edinburgh. “I don’t think that the existence of this correlation is surprising,” he says. “What’s surprising is that it’s not stronger.”

Some tissues develop cancers more or less often than other tissues with a similar number of cell divisions, Waclaw and Martin Nowak of Harvard University pointed out in a commentary on the Hopkins study, published in the same issue of Science. That suggests some organs are better at nipping cancer in the bud. Discovering how those tissues avoid cancer could lead to new ways to prevent tumors elsewhere in the body, Waclaw suggests.

Other researchers say the Hopkins team is guilty of faulty reasoning. “They are assuming that just because tissues which have high stem cell turnover also have high cancer rates, that one is causing the other,” says cancer researcher Anne McTiernan of the Fred Hutchinson Cancer Research Center in Seattle. “In this new paper, they’ve added data from other countries but haven’t gotten away from this biased thinking.”

Tomasetti and colleagues based their calculations on data from Cancer Research UK that suggest that 42 percent of cancers are preventable. Preventable cancers are ones for which people could avoid a risk factor, such as unprotected sun exposure or tanning bed use, or take positive steps to lower cancer risks, such as exercising regularly and eating fruits and vegetables. But those estimates may not be accurate, McTiernan says. “In reality, it’s very difficult to measure environmental exposures, so our estimates of preventability are likely very underestimated.”

To attribute so many cancer mutations to chance seems to negate public health messages, Waclaw says, and people who spend a lot of time trying to prevent cancer may find it disturbing that 66 percent of cancer-associated mutations are calculated to be unavoidable. “It’s important to consider the randomness, or bad luck, that comes with cellular division,” he says.

In fact, Tomasetti and Vogelstein stress that their findings are compatible with cancer-prevention recommendations. Avoiding smoking, tanning beds, obesity and other known carcinogens can prevent the “environmental” mutations that combine with inherited and random mutations to tip cells into cancer. Without those final straws loaded from environmental exposures, tumors may be averted or greatly delayed.

People with cancer may be able to take some comfort from the study, says Elaine Mardis, a cancer genomicist at the Nationwide Children’s Hospital in Columbus, Ohio. “Perhaps the positive message here is that, other than known risk factors, such as smoking, radiation exposure and obesity, there is a component of cancer that is simply a consequence of being human.”

Extreme gas loss dried out Mars, MAVEN data suggest

The Martian atmosphere definitely had more gas in the past.

Data from NASA’s MAVEN spacecraft indicate that the Red Planet has lost most of the gas that ever existed in its atmosphere. The results, published in the March 31 Science, are the first to quantify how much gas has been lost with time and offer clues to how Mars went from a warm, wet place to a cold, dry one.

Mars is constantly bombarded by charged particles streaming from the sun. Without a protective magnetic field to deflect this solar wind, the planet loses about 100 grams of its now thin atmosphere every second (SN: 12/12/15, p. 31). To determine how much atmosphere has been lost during the planet’s lifetime, MAVEN principal investigator Bruce Jakosky of the University of Colorado Boulder and colleagues measured and compared the abundances of two isotopes of argon at different altitudes in the Martian atmosphere. Using those measurements and an assumption about the amounts of the isotopes in the planet’s early atmosphere, the team estimates that about two-thirds of all of Mars’ argon gas has been ejected into space. Extrapolating from the argon data, the researchers also determined that the majority of carbon dioxide that the Martian atmosphere ever had also was kicked into space by the solar wind.
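For a sense of scale, a back-of-the-envelope sketch in Python. Only the 100 grams per second figure comes from the article; the 4-billion-year window and the roughly 2.5 × 10¹⁶ kg mass of Mars’ present-day atmosphere are our own ballpark assumptions, and escape rates were likely far higher when the young sun was more active:

```python
SECONDS_PER_YEAR = 3.156e7      # ~365.25 days
loss_rate_kg_s = 0.1            # ~100 grams of atmosphere lost per second (MAVEN estimate)
age_years = 4.0e9               # rough span over which Mars has been losing gas (assumption)

total_lost_kg = loss_rate_kg_s * SECONDS_PER_YEAR * age_years
print(f"Gas lost at today's rate over 4 billion years: {total_lost_kg:.1e} kg")

# For comparison: Mars' current atmosphere holds roughly 2.5e16 kg of gas (ballpark figure)
current_atmosphere_kg = 2.5e16
print(f"As a fraction of today's atmosphere: {total_lost_kg / current_atmosphere_kg:.0%}")
```

Even at today’s modest rate, the loss adds up to a mass comparable to half the current atmosphere, which hints at how a stronger early solar wind could have stripped away far more.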

A thicker atmosphere filled with carbon dioxide and other greenhouse gases could have insulated early Mars and kept it warm enough for liquid water and possibly life. Losing an extreme amount of gas, as the results suggest, may explain how the planet morphed from lush and wet to barren and icy, the researchers write.

Language heard, but never spoken, by young babies bestows a hidden benefit

The way babies learn to speak is nothing short of breathtaking. Their brains are learning the differences between sounds, rehearsing mouth movements and mastering vocabulary by putting words into meaningful context. It’s a lot to fit in between naps and diaper changes.

A recent study shows just how durable this early language learning is. Dutch-speaking adults who were adopted from South Korea as preverbal babies held on to latent Korean language skills, researchers report online January 18 in Royal Society Open Science. In the first months of their lives, these people had already laid down the foundation for speaking Korean — a foundation that persisted for decades undetected, only revealing itself later in careful laboratory tests.

Researchers tested how well people could learn to identify and speak tricky Korean sounds. “For Korean listeners, these sounds are easy to distinguish, but for second-language learners they are very difficult to master,” says study coauthor Mirjam Broersma, a psycholinguist at Radboud University in Nijmegen, Netherlands. For instance, a native Dutch speaker would listen to three distinct Korean sounds and hear only the same “t” sound.

Broersma and her colleagues compared the language-absorbing skills of 29 native Dutch speakers with those of 29 South Korea-born Dutch speakers. Half of the adoptees moved to the Netherlands when they were older than 17 months — ages at which the kids had probably begun talking. The other half were adopted as preverbal babies younger than 6 months. As a group, the South Korea-born adults outperformed the native-born Dutch adults, more easily learning both to recognize and speak the Korean sounds.

This advantage held when the researchers looked at only adults who had been adopted before turning 6 months old. “Even those who were only 3 to 5 months old at the time of adoption already knew a lot about the sounds of their birth language, enough even to help them relearn those sounds decades later,” Broersma says.

Uncovering this latent skill decades after it had been imprinted in babies younger than 6 months was thrilling, Broersma says. Many researchers had assumed that infants start to learn the sounds of their first language later, around 6 to 8 months after birth. “Our results show that that assumption must have been wrong,” she says.

It’s possible that some of these language skills were acquired during pregnancy, as other studies have hinted. Because the current study didn’t include babies who were adopted immediately after birth, the results can’t say whether language heard during gestation would have had an influence on later language skills. Still, the results suggest that babies start picking up language as soon as they possibly can.

Cells’ stunning complexity on display in a new online portal

Computers don’t have eyes, but they could revolutionize the way scientists visualize cells.

Researchers at the Allen Institute for Cell Science in Seattle have devised 3-D representations of cells, compiled by computers learning where thousands of real cells tuck their component parts.

Most drawings of cells in textbooks come from human interpretations gleaned by looking at just a few dead cells at a time. The new Allen Cell Explorer, which premiered online April 5, presents 3-D images of genetically identical stem cells grown in lab dishes (composite, above), revealing a huge variety of structural differences.
Each cell comes from a skin cell that was reprogrammed into a stem cell. Important proteins were tagged with fluorescent molecules so researchers could keep tabs on the cell membrane, DNA-containing nucleus, energy-generating mitochondria, microtubules and other cell parts. Using the 3-D images, computer programs learned where the cellular parts are in relation to each other. From those rules, the programs can generate predictive transparent models of a cell’s structure (below). The new views, which can capture cells at different time points, may offer clues into their inner workings.
The project’s tools are available for other researchers to use on various types of cells. Insights gained from the explorations might lead to a better understanding of human development, cancer, health and diseases.

Researchers have already learned from the project that stem cells aren’t the shapeless blobs they might appear to be, says Susanne Rafelski, a quantitative cell biologist at the Allen Institute. Instead, the stem cells have a definite bottom and top, a proposed structure that’s now confirmed by the combined cell data, Rafelski says. A solid foundation of skeleton proteins forms at the bottom. The nucleus is usually found in the cell’s center. Microtubules bundle together into large fibers that tend to radiate from the top of the cell toward the bottom. During cell division, microtubules form structures called bipolar spindles that are necessary to divvy up DNA.
One surprise was that the membrane surrounding the nucleus gets ruffled, but never completely disappears, during cell division. Near the top of the cell, above the nucleus, stem cells store tubelike mitochondria much the way plumbing and electrical wires are tucked into ceilings. The tubular mitochondria were notable because some researchers thought that since stem cells don’t require much energy, the organelles might separate into small, individual units.

Old ways of observing cells were like trying to get to know a city by looking at a map, Rafelski says. The cell explorer is more like a documentary of the lives of the citizens.

There’s still a lot we don’t know about the proton

Nuclear physicist Evangeline Downie hadn’t planned to study one of the thorniest puzzles of the proton.

But when opportunity knocked, Downie couldn’t say no. “It’s the proton,” she exclaims. The mysteries that still swirl around this jewel of the subatomic realm were too tantalizing to resist. The plentiful particles make up much of the visible matter in the universe. “We’re made of them, and we don’t understand them fully,” she says.

Many physicists delving deep into the heart of matter in recent decades have been lured to the more exotic and unfamiliar subatomic particles: mesons, neutrinos and the famous Higgs boson — not the humble proton.
But rather than chasing the rarest of the rare, scientists like Downie are painstakingly scrutinizing the proton itself with ever-higher precision. In the process, some of these proton enthusiasts have stumbled upon problems in areas of physics that scientists thought they had figured out.

Surprisingly, some of the particle’s most basic characteristics are not fully pinned down. The latest measurements of its radius disagree with one another by a wide margin, for example, a fact that captivated Downie. Likewise, scientists can’t yet explain the source of the proton’s spin, a basic quantum property. And some physicists have a deep but unconfirmed suspicion that the seemingly eternal particles don’t live forever — protons may decay. Such a decay is predicted by theories that unite disparate forces of nature under one grand umbrella. But decay has not yet been witnessed.

Like the base of a pyramid, the physics of the proton serves as a foundation for much of what scientists know about the behavior of matter. To understand the intricacies of the universe, says Downie, of George Washington University in Washington, D.C., “we have to start with, in a sense, the simplest system.”

Sizing things up
For most of the universe’s history, protons have been VIPs — very important particles. They formed just millionths of a second after the Big Bang, once the cosmos cooled enough for the positively charged particles to take shape. But protons didn’t step into the spotlight until about 100 years ago, when Ernest Rutherford bombarded nitrogen with radioactively produced particles, breaking up the nuclei and releasing protons.

A single proton in concert with a single electron makes up hydrogen — the most plentiful element in the universe. One or more protons are present in the nucleus of every atom. Each element has a unique number of protons, signified by an element’s atomic number. In the core of the sun, fusing protons generate heat and light needed for life to flourish. Lone protons are also found as cosmic rays, whizzing through space at breakneck speeds, colliding with Earth’s atmosphere and producing showers of other particles, such as electrons, muons and neutrinos.

In short, protons are everywhere. Even minor tweaks to scientists’ understanding of the minuscule particle, therefore, could have far-reaching implications. So any nagging questions, however small in scale, can get proton researchers riled up.

A disagreement of a few percent in measurements of the proton’s radius has attracted intense interest, for example. Until several years ago, scientists agreed: The proton’s radius was about 0.88 femtometers, or 0.88 millionths of a billionth of a meter — about a trillionth the width of a poppy seed.
But that neat picture was upended in the span of a few hours, in May 2010, at the Precision Physics of Simple Atomic Systems conference in Les Houches, France. Two teams of scientists presented new, more precise measurements, unveiling what they thought would be the definitive size of the proton. Instead the figures disagreed by about 4 percent (SN: 7/31/10, p. 7). “We both expected that we would get the same number, so we were both surprised,” says physicist Jan Bernauer of MIT.

By itself, a slight revision of the proton’s radius wouldn’t upend physics. But despite extensive efforts, the groups can’t explain why they get different numbers. As researchers have eliminated simple explanations for the impasse, they’ve begun wondering if the mismatch could be the first hint of a breakdown that could shatter accepted tenets of physics.

The two groups each used different methods to size up the proton. In an experiment at the MAMI particle accelerator in Mainz, Germany, Bernauer and colleagues estimated the proton’s girth by measuring how much electrons’ trajectories were deflected when fired at protons. That test found the expected radius of about 0.88 femtometers (SN Online: 12/17/10).

But a team led by physicist Randolf Pohl of the Max Planck Institute of Quantum Optics in Garching, Germany, used a new, more precise method. The researchers created muonic hydrogen, a proton that is accompanied not by an electron but by a heftier cousin — a muon.

In an experiment at the Paul Scherrer Institute in Villigen, Switzerland, Pohl and collaborators used lasers to bump the muons to higher energy levels. The amount of energy required depends on the size of the proton. Because the more massive muon hugs closer to the proton than electrons do, the energy levels of muonic hydrogen are more sensitive to the proton’s size than those of ordinary hydrogen, allowing for measurements 10 times as precise as electron-scattering measurements.

Pohl’s results suggested a smaller proton radius, about 0.841 femtometers, a stark difference from the other measurement. Follow-up measurements of muonic deuterium — which has a proton and a neutron in its nucleus — also revealed a smaller than expected size, he and collaborators reported last year in Science. Physicists have racked their brains to explain why the two measurements don’t agree. Experimental error could be to blame, but no one can pinpoint its source. And the theoretical physics used to calculate the radius from the experimental data seems solid.
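The size of the mismatch is easy to check with a quick sketch, using the rounded radius values quoted above:

```python
r_electron_scattering = 0.880   # femtometers, MAMI electron-scattering value
r_muonic_hydrogen = 0.841       # femtometers, Pohl's muonic-hydrogen value

# Relative gap between the two measurements
discrepancy = (r_electron_scattering - r_muonic_hydrogen) / r_electron_scattering
print(f"Relative discrepancy: {discrepancy:.1%}")
```

A gap of roughly 4 percent is enormous by the standards of precision atomic physics, where the two techniques’ individual error bars are far smaller than their disagreement.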

Now, more outlandish possibilities are being tossed around. An unexpected new particle that interacts with muons but not electrons could explain the difference (SN: 2/23/13, p. 8). That would be revolutionary: Physicists believe that electrons and muons should behave identically in particle interactions. “It’s a very sacred principle in theoretical physics,” says John Negele, a theoretical particle physicist at MIT. “If there’s unambiguous evidence that it’s been broken, that’s really a fundamental discovery.”

But established physics theories die hard. Shaking the foundations of physics, Pohl says, is “what I dream of, but I think that’s not going to happen.” Instead, he suspects, the discrepancy is more likely to be explained through minor tweaks to the experiments or the theory.

The alluring mystery of the proton radius reeled Downie in. During conversations in the lab with some fellow physicists, she learned of an upcoming experiment that could help settle the issue. The experiment’s founders were looking for collaborators, and Downie leaped on the bandwagon. The Muon Proton Scattering Experiment, or MUSE, to take place at the Paul Scherrer Institute beginning in 2018, will scatter both electrons and muons off of protons and compare the results. It offers a way to test whether the two particles behave differently, says Downie, who is now a spokesperson for MUSE.

A host of other experiments are in progress or planning stages. Scientists with the Proton Radius Experiment, or PRad, located at Jefferson Lab in Newport News, Va., hope to improve on Bernauer and colleagues’ electron-scattering measurements. PRad researchers are analyzing their data and should have a new number for the proton radius soon.

But for now, the proton’s identity crisis, at least regarding its size, remains. That poses problems for ultrasensitive tests of one of physicists’ most essential theories. Quantum electrodynamics, or QED, the theory that unites quantum mechanics and Albert Einstein’s special theory of relativity, describes the physics of electromagnetism on small scales. Using this theory, scientists can calculate the properties of quantum systems, such as hydrogen atoms, in exquisite detail — and so far the predictions match reality. But such calculations require some input — including the proton’s radius. Therefore, to subject the theory to even more stringent tests, gauging the proton’s size is a must-do task.
Spin doctors
Even if scientists eventually sort out the proton’s size snags, there’s much left to understand. Dig deep into the proton’s guts, and the seemingly simple particle becomes a kaleidoscope of complexity. Rattling around inside each proton is a trio of particles called quarks: one negatively charged “down” quark and two positively charged “up” quarks. Neutrons, on the flip side, comprise two down quarks and one up quark.

Yet even the quark-trio picture is too simplistic. In addition to the three quarks that are always present, a chaotic swarm of transient particles churns within the proton. Evanescent throngs of additional quarks and their antimatter partners, antiquarks, continually swirl into existence, then annihilate each other. Gluons, the particle “glue” that holds the proton together, careen between particles. Gluons are the messengers of the strong nuclear force, an interaction that causes quarks to fervently attract one another.
As a result of this chaos, the properties of protons — and neutrons as well — are difficult to get a handle on. One property, spin, has taken decades of careful investigation, and it’s still not sorted out. Quantum particles almost seem to be whirling at blistering speed, like the Earth rotating about its axis. This spin produces angular momentum — a quality of a rotating object that, for example, keeps a top revolving until friction slows it. The spin also makes protons behave like tiny magnets, because a rotating electric charge produces a magnetic field. This property is the key to the medical imaging procedure called magnetic resonance imaging, or MRI.

But, like nearly everything quantum, there’s some weirdness mixed in: There’s no actual spinning going on. Because fundamental particles like quarks don’t have a finite physical size — as far as scientists know — they can’t twirl. Despite the lack of spinning, the particles still behave like they have a spin, which can take on only certain values: integer multiples of 1/2.

Quarks have a spin of 1/2, and gluons a spin of 1. These spins combine to help yield the proton’s total spin. In addition, just as the Earth is both spinning about its own axis and orbiting the sun, quarks and gluons may also circle about the proton’s center, producing additional angular momentum that can contribute to the proton’s total spin.

Somehow, the spin and orbital motion of quarks and gluons within the proton combine to produce its spin of 1/2. Originally, physicists expected that the explanation would be simple. The only particles that mattered, they thought, were the proton’s three main quarks, each with a spin of 1/2. If two of those spins were oriented in opposite directions, they could cancel one another out to produce a total spin of 1/2. But experiments beginning in the 1980s showed that “this picture was very far from true,” says theoretical high-energy physicist Juan Rojo of Vrije University Amsterdam. Surprisingly, only a small fraction of the spin seemed to be coming from the quarks, befuddling scientists with what became known as the “spin crisis” (SN: 9/6/97, p. 158). Neutron spin was likewise enigmatic.

Scientists’ next hunch was that gluons contribute to the proton’s spin. “Verifying this hypothesis was very difficult,” Rojo says. It required experimental studies at the Relativistic Heavy Ion Collider, RHIC, a particle accelerator at Brookhaven National Laboratory in Upton, N.Y.

In these experiments, scientists collided protons that were polarized: The two protons’ spins were either aligned or pointed in opposite directions. Researchers counted the products of those collisions and compared the results for aligned and opposing spins. The results revealed how much of the spin comes from gluons. According to an analysis by Rojo and colleagues, published in Nuclear Physics B in 2014, gluons make up about 35 percent of the proton’s spin. Since the quarks make up about 25 percent, that leaves another 40 percent still unaccounted for.
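The bookkeeping in the paragraph above amounts to simple arithmetic, using the rounded percentages quoted in the text:

```python
quark_spin_fraction = 0.25   # roughly 25 percent from the quarks' spins
gluon_spin_fraction = 0.35   # roughly 35 percent from gluons (Rojo et al., 2014)

# Whatever remains must come from somewhere else, such as orbital motion
unaccounted = 1.0 - quark_spin_fraction - gluon_spin_fraction
print(f"Still unaccounted for: {unaccounted:.0%}")
```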

“We have absolutely no idea how the entire spin is made up,” says nuclear physicist Elke-Caroline Aschenauer of Brookhaven. “We maybe have understood a small fraction of it.” That’s because each quark or gluon carries a certain fraction of the proton’s energy, and the lowest energy quarks and gluons cannot be spotted at RHIC. A proposed collider, called the Electron-Ion Collider (location to be determined), could help scientists investigate the neglected territory.

The Electron-Ion Collider could also allow scientists to map the still-unmeasured orbital motion of quarks and gluons, which may contribute to the proton’s spin as well.
An unruly force
Experimental physicists get little help from theoretical physics when attempting to unravel the proton’s spin and its other perplexities. “The proton is not something you can calculate from first principles,” Aschenauer says. Quantum chromodynamics, or QCD — the theory of the quark-corralling strong force transmitted by gluons — is an unruly beast. It is so complex that scientists can’t directly solve the theory’s equations.

The difficulty lies with the behavior of the strong force. As long as quarks and their companions stick relatively close, they are happy and can mill about the proton at will. But absence makes the heart grow fonder: The farther apart the quarks get, the more insistently the strong force pulls them back together, containing them within the proton. This behavior explains why no one has found a single quark in isolation. It also makes the proton’s properties especially difficult to calculate. Without accurate theoretical calculations, scientists can’t predict what the proton’s radius should be, or how the spin should be divvied up.
To simplify the math of the proton, physicists use a technique called lattice QCD, in which they imagine that the world is made of a grid of points in space and time (SN: 8/7/04, p. 90). A quark can sit at one point or another in the grid, but not in the spaces in between. Time, likewise, proceeds in jumps. In such a situation, QCD becomes more manageable, though calculations still require powerful supercomputers.

Lattice QCD calculations of the proton’s spin are making progress, but there’s still plenty of uncertainty. In 2015, theoretical particle and nuclear physicist Keh-Fei Liu and colleagues calculated the spin contributions from the gluons, the quarks and the quarks’ angular momentum, reporting the results in Physical Review D. By their calculation, about half of the spin comes from the quarks’ motion within the proton, about a quarter from the quarks’ spin, with the last quarter or so from the gluons. The numbers don’t exactly match the experimental measurements, but that’s understandable — the lattice QCD numbers are still fuzzy. The calculation relies on various approximations, so it “is not cast in stone,” says Liu, of the University of Kentucky in Lexington.

Death of a proton
Although protons seem to live forever, scientists have long questioned that immortality. Some popular theories predict that protons decay, disintegrating into other particles over long timescales. Yet despite extensive searches, no hint of this demise has materialized.

A class of ideas known as grand unified theories predict that protons eventually succumb. These theories unite three of the forces of nature, creating a single framework that could explain electromagnetism, the strong nuclear force and the weak nuclear force, which is responsible for certain types of radioactive decay. (Nature’s fourth force, gravity, is not yet incorporated into these models.) Under such unified theories, the three forces reach equal strengths at extremely high energies. Such energetic conditions were present in the early universe — well before protons formed — just a trillionth of a trillionth of a trillionth of a second after the Big Bang. As the cosmos cooled, those forces would have separated into three different facets that scientists now observe.
“We have a lot of circumstantial evidence that something like unification must be happening,” says theoretical high-energy physicist Kaladi Babu of Oklahoma State University in Stillwater. Beyond the appeal of uniting the forces, grand unified theories could explain some curious coincidences of physics, such as the fact that the proton’s electric charge precisely balances the electron’s charge. Another bonus is that the particles in grand unified theories fill out a family tree, with quarks becoming the kin of electrons, for example.

Under these theories, a decaying proton would disintegrate into other particles, such as a positron (the antimatter version of an electron) and a particle called a pion, composed of a quark and an antiquark, which itself eventually decays. If such a grand unified theory is correct and protons do decay, the process must be extremely rare — protons must live a very long time, on average, before they break down. If most protons decayed rapidly, atoms wouldn’t stick around long either, and the matter that makes up stars, planets — even human bodies — would be falling apart left and right.

Protons have existed for 13.8 billion years, since just after the Big Bang. So they must live exceedingly long lives, on average. But the particles could perish at even longer timescales. If they do, scientists should be able to monitor many particles at once to see a few protons bite the dust ahead of the curve (SN: 12/15/79, p. 405). But searches for decaying protons have so far come up empty.

Still, the search continues. To hunt for decaying protons, scientists go deep underground, for example, to a mine in Hida, Japan. There, at the Super-Kamiokande experiment (SN: 2/18/17, p. 24), they monitor a giant tank of water — 50,000 metric tons’ worth — waiting for a single proton to wink out of existence. After watching that water tank for nearly two decades, the scientists reported in the Jan. 1 Physical Review D that protons must live longer than 1.6 × 10³⁴ years on average, assuming they decay predominantly into a positron and a pion.
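A rough illustration of why such a huge tank is needed (these are our own back-of-the-envelope numbers; real Super-K analyses use a smaller fiducial volume and channel-specific detection efficiencies):

```python
AVOGADRO = 6.022e23
water_mass_kg = 5.0e7            # 50,000 metric tons of water in the tank
molar_mass_water_kg = 0.018      # kilograms per mole of H2O
protons_per_molecule = 10        # 2 hydrogen protons plus 8 in the oxygen nucleus

n_protons = water_mass_kg / molar_mass_water_kg * AVOGADRO * protons_per_molecule
print(f"Protons under watch: {n_protons:.1e}")

# If the mean lifetime sat right at the reported limit, how often would one decay?
lifetime_years = 1.6e34
expected_decays_per_year = n_protons / lifetime_years
print(f"Expected decays per year at that lifetime: {expected_decays_per_year:.1f}")
```

With more than 10³⁴ protons under watch, a mean lifetime near 1.6 × 10³⁴ years would yield on the order of one decay per year, so two decades of silence translates directly into a lower limit of that magnitude.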

Experimental limits on the proton lifetime “are sort of painting the theorists into a corner,” says Ed Kearns of Boston University, who searches for proton decay with Super-K. If a new theory predicts a proton lifetime shorter than what Super-K has measured, it’s wrong. Physicists must go back to the drawing board until they come up with a theory that agrees with Super-K’s proton-decay drought.

Many grand unified theories that remain standing in the wake of Super-K’s measurements incorporate supersymmetry, the idea that each known particle has another, more massive partner. In such theories, those new particles are additional pieces in the puzzle, fitting into an even larger family tree of interconnected particles. But theories that rely on supersymmetry may be in trouble. “We would have preferred to see supersymmetry at the Large Hadron Collider by now,” Babu says, referring to the particle accelerator located at the European particle physics lab, CERN, in Geneva, which has consistently come up empty in supersymmetry searches since it turned on in 2009 (SN: 10/1/16, p. 12).

But supersymmetric particles could simply be too massive for the LHC to find. And some grand unified theories that don’t require supersymmetry still remain viable. Versions of these theories predict proton lifetimes within reach of an upcoming generation of experiments. Scientists plan to follow up Super-K with Hyper-K, with an even bigger tank of water. And DUNE, the Deep Underground Neutrino Experiment, planned for installation in a former gold mine in Lead, S.D., will use liquid argon to detect protons decaying into particles that the water detectors might miss.
If protons do decay, the universe will become frail in its old age. According to Super-K, sometime well after its 10³⁴th birthday, the cosmos will become a barren sea of light. Stars, planets and life will disappear. If seemingly dependable protons give in, it could spell the death of the universe as we know it.

Although protons may eventually become extinct, proton research isn’t going out of style anytime soon. Even if scientists resolve the dilemmas of radius, spin and lifetime, more questions will pile up — it’s part of the labyrinthine task of studying quantum particles that multiply in complexity the closer scientists look. These deeper studies are worthwhile, says Downie. The inscrutable proton is “the most fundamental building block of everything, and until we understand that, we can’t say we understand anything else.”