Mysteries of time still stump scientists

The topic of time is both excruciatingly complicated and slippery. The combination makes it easy to get bogged down. But instead of an exhaustive review, journalist Alan Burdick lets curiosity be his guide in Why Time Flies, an approach that leads to a light yet supremely satisfying story about time as it runs through — and is perceived by — the human body.

Burdick doesn’t restrict himself to any one aspect of his question. He spends time excavating what he calls the “existential caverns,” where philosophical questions, such as the shifting concept of now, dwell. He describes the circadian clocks that keep bodies running efficiently, making sure our bodies are primed to digest food at mealtimes, for instance. He even covers the intriguing and slightly insane self-experimentation by the French scientist Michel Siffre, who crawled into caves in 1962 and 1972 to see how his body responded in places without any time cues.
In the service of his exploration, Burdick lived in constant daylight in the Alaskan Arctic for two summery weeks, visited the master timekeepers at the International Bureau of Weights and Measures in Paris to see how they precisely mete out the seconds, and plunged off a giant platform to see if time felt slower during moments of stress. The book deals not only with fascinating temporal science but also with how time is largely a social construct. “Time is what everybody agrees the time is,” one researcher told Burdick.
That subjective truth also applies to the brain. Time, in a sense, is created by the mind. “Our experience of time is not a cave shadow to some true and absolute thing; time is our perception,” Burdick writes. That subjective experience becomes obvious when Burdick recounts how easily our brains’ clocks can be swayed. Emotions, attention (SN: 12/10/16, p. 10) and even fever can distort our time perception, scientists have found.

Burdick delves deep into several neuroscientific theories of how time runs through the brain (SN: 7/25/15, p. 20). Here, the story narrows somewhat in an effort to thoroughly explain a few key ideas. But even amid these details, Burdick doesn’t lose the overarching truth — that for the most part, scientists simply don’t know the answers. That may be because there is no one answer; instead, the brain may create time by stitching together a multitude of neural clocks.
After reading Why Time Flies, readers will be convinced that no matter how much time passes, the mystery of time will endure.

Germanium computer chips gain ground on silicon — again

First germanium integrated circuits

Integrated circuits made of germanium instead of silicon have been reported … by researchers at International Business Machines Corp. Even though the experimental devices are about three times as large as the smallest silicon circuits, they reportedly offer faster overall switching speed. Germanium … has inherently greater mobility than silicon, which means that electrons move through it faster when a current is applied. — Science News, February 25, 1967

UPDATE:
Silicon circuits still dominate computing. But demand for smaller, high-speed electronics is pushing silicon to its physical limits, sending engineers back for a fresh look at germanium. Researchers built the first compact, high-performance germanium circuit in 2014, and scientists continue to fiddle with its physical properties to make smaller, faster circuits. Although not yet widely used, germanium circuits and those made from other materials, such as carbon nanotubes, could help engineers make more energy-efficient electronics.

Helium’s inertness defied by high-pressure compound

Helium — the recluse of the periodic table — is reluctant to react with other elements. But squeeze the element hard enough, and it will form a chemical compound with sodium, scientists report.

Helium, a noble gas, is one of the periodic table’s least reactive elements. Originally, the noble gases were believed incapable of forming any chemical compounds at all. But after scientists created xenon compounds in the early 1960s, a slew of other noble gas compounds followed. Helium, however, has largely been a holdout.
Although helium was known to hook up with certain elements, the bonds in those compounds were weak, or the compounds were short-lived or electrically charged. But the new compound, called sodium helide or Na2He, is stable at high pressure, and its bonds are strong, an international team of scientists reports February 6 in Nature Chemistry.

As a robust helium compound, “this is really the first that people ever observed,” says chemist Maosheng Miao of California State University, Northridge, who was not involved with the research.

The material’s properties are still poorly understood, but it is unlikely to have immediate practical applications — scientists can create it only in tiny amounts at very high pressures, says study coauthor Alexander Goncharov, a physicist at the Carnegie Institution for Science in Washington, D.C. Instead, the oddball compound serves as inspiration for scientists who hope to produce weird new materials at lower pressures. “I would say that it’s not totally impossible,” says Goncharov. Scientists may be able to tweak the compound, for example, by adding or switching out elements, to decrease the pressure needed.

To coerce helium to link up with another element, the scientists, led by Artem Oganov of Stony Brook University in New York, first performed computer calculations to see which compounds might be possible. Sodium, calculations predicted, would form a compound with helium if crushed under enormously high pressure. Under such conditions, the typical rules of chemistry change — elements that refuse to react at atmospheric pressure can sometimes become bosom buddies when given a squeeze.

So Goncharov and colleagues pinched small amounts of helium and sodium between a pair of diamonds, reaching pressures more than a million times that of Earth’s atmosphere, and heated the material with lasers to temperatures above 1,500 kelvins (about 1,230° Celsius). By scattering X-rays off the compound, the scientists could deduce its structure, which matched the one predicted by calculations.
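As a quick check on the unit conversion above, kelvins map to degrees Celsius by subtracting the standard 273.15-degree offset (a minimal sketch, not part of the study’s analysis):

```python
# Kelvin-to-Celsius conversion for the laser-heating temperature quoted above.
# T(°C) = T(K) - 273.15
def kelvin_to_celsius(t_kelvin: float) -> float:
    return t_kelvin - 273.15

print(kelvin_to_celsius(1500))  # -> 1226.85, i.e. about 1,230° Celsius
```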
“I think this is really the triumph of computation,” says Miao. In the search for new compounds, computers now allow scientists to skip expensive trial-and-error experiments and zero in on the best candidates to create in a laboratory.

Na2He is an unusual type of compound known as an electride, in which pairs of electrons are cloistered off, away from any atoms. But despite the compound’s bizarre nature, it behaves somewhat like a commonplace compound such as table salt, in which negatively charged chloride ions alternate with positively charged sodium ions. In Na2He, the isolated electron pairs act like negative ions in such a compound, and the eight sodium atoms surrounding each helium atom are the positive ions.

“The idea that you can make compounds with things like helium which don’t react at all, I think it’s pretty interesting,” says physicist Eugene Gregoryanz of the University of Edinburgh. But, he adds, “I would like to see more experiments” to confirm the result.

The scientists’ calculations also predicted that a compound of helium, sodium and oxygen, called Na2HeO, should form at even lower pressures, though that one has yet to be created in the lab. So the oddball new helium compound may soon have a confirmed cousin.

Earth’s mantle may be hotter than thought

Temperatures across Earth’s mantle are about 60 degrees Celsius higher than previously thought, a new experiment suggests. Such toasty temperatures would make the mantle runnier than earlier research suggested, a development that could help explain the details of how tectonic plates glide on top of the mantle, geophysicists report in the March 3 Science.

“Scientists have been arguing over the mantle temperature for decades,” says study coauthor Emily Sarafian, a geophysicist at the Woods Hole Oceanographic Institution in Massachusetts and at MIT. “Scientists will argue over 10-degree changes, so changing it by 60 degrees is quite a large jump.”
The mostly solid mantle sits between Earth’s crust and core and makes up around 84 percent of Earth’s volume. Heat from the mantle fuels volcanic eruptions and drives plate tectonics, but taking the mantle’s temperature is trickier than dropping a thermometer down a hole.

Scientists know from the paths of earthquake waves and from measures of how electrical charge moves through Earth that a boundary in the mantle exists a few dozen kilometers below Earth’s surface. Above that boundary, mantle rock can begin melting on its way up to the surface. By mimicking the extreme conditions in the deep Earth — squeezing and heating bits of mantle that erupt from undersea volcanoes or similar rocks synthesized in the lab — scientists can also determine the melting temperature of mantle rock. Using these two facts, scientists have estimated that temperatures at the boundary depth below Earth’s oceans are around 1314° C to 1464° C when adjusted to surface pressure.

But the presence of water in the collected mantle bits, primarily peridotite rock, which makes up much of the upper mantle, has caused problems for researchers’ calculations. Water can drastically lower the melting point of peridotite, but researchers can’t prevent the water content from changing over time. In previous experiments, scientists tried to completely dry peridotite samples and then manually correct for measured mantle water levels in their calculations. The scientists, however, couldn’t tell for sure if the samples were water-free.

The measurement difficulties stem from the fact that peridotite is a mix of the minerals olivine and pyroxene, and the mineral grains are too small to experiment with individually. Sarafian and colleagues overcame this challenge by inserting spheres of pure olivine large enough to study into synthetic peridotite samples. These spheres exchanged water with the surrounding peridotite until they had the same dampness, and so could be used for water content measurements.

Using this technique, the researchers found that the “dry” peridotite used in previous experiments wasn’t dry at all. In fact, the water content was spot on for the actual wetness of the mantle. “By assuming the samples are dry, then correcting for mantle water content, you’re actually overcorrecting,” Sarafian says.
The new experiment suggests that, if adjusted to surface pressure, the mantle under the eastern Pacific Ocean where two tectonic plates diverge, for example, would be around 1410°, up from 1350°. A hotter mantle is less viscous and more malleable, Sarafian says. Scientists have long been puzzled about some of the specifics of plate tectonics, such as to what extent the mantle resists the movement of the overlying plate. That resistance depends in part on the mix of rock, temperature and how melted the rock is at the boundary between the two layers (SN: 3/7/15, p. 6). This new knowledge could give researchers more accurate information on those details.

The revised temperature is only for the melting boundary in the mantle, so “it’s not the full story,” notes Caltech geologist Paul Asimow, who wrote a perspective on the research in the same issue of Science. He agrees that the team’s work provides a higher and more accurate estimate of that adjusted temperature, but he doesn’t think the researchers should assume temperatures elsewhere in the mantle would be boosted by a similar amount. “I’m not so sure about that,” he says. “We need further testing of mantle temperatures.”

Ancient dental plaque tells tales of Neandertal diet and disease

Dental plaque preserved in fossilized teeth confirms that Neandertals were flexible eaters and may have self-medicated with an ancient equivalent of aspirin.

DNA recovered from calcified plaque on teeth from four Neandertal individuals suggests that those from the grasslands around Belgium’s Spy cave ate woolly rhinoceros and wild sheep, while their counterparts from the forested El Sidrón cave in Spain consumed a menu of moss, mushrooms and pine nuts.

The evidence bolsters an argument that Neandertals’ diets spanned the spectrum of carnivory and herbivory based on the resources available to them, Laura Weyrich, a microbiologist at the University of Adelaide in Australia, and her colleagues report March 8 in Nature.

The best-preserved Neandertal remains were from a young male from El Sidrón whose teeth showed signs of an abscess. DNA from a diarrhea-inducing stomach bug and several gum disease pathogens turned up in his plaque. Genetic material from poplar trees, which contain the pain-killing aspirin ingredient salicylic acid, and from a plant mold that makes the antibiotic penicillin hints that he may have used natural medication to ease his ailments.

The researchers were even able to extract an almost-complete genetic blueprint, or genome, for one ancient microbe, Methanobrevibacter oralis. At roughly 48,000 years old, it’s the oldest microbial genome sequenced, the researchers report.

Extreme gas loss dried out Mars, MAVEN data suggest

The Martian atmosphere definitely had more gas in the past.

Data from NASA’s MAVEN spacecraft indicate that the Red Planet has lost most of the gas that ever existed in its atmosphere. The results, published in the March 31 Science, are the first to quantify how much gas has been lost with time and offer clues to how Mars went from a warm, wet place to a cold, dry one.

Mars is constantly bombarded by charged particles streaming from the sun. Without a protective magnetic field to deflect this solar wind, the planet loses about 100 grams of its now thin atmosphere every second (SN: 12/12/15, p. 31). To determine how much atmosphere has been lost during the planet’s lifetime, MAVEN principal investigator Bruce Jakosky of the University of Colorado Boulder and colleagues measured and compared the abundances of two isotopes of argon at different altitudes in the Martian atmosphere. Using those measurements and an assumption about the amounts of the isotopes in the planet’s early atmosphere, the team estimates that about two-thirds of all of Mars’ argon gas has been ejected into space. Extrapolating from the argon data, the researchers also determined that the majority of carbon dioxide that the Martian atmosphere ever had was also kicked into space by the solar wind.
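A rough back-of-envelope sketch (not the team’s method, which rests on the argon isotope ratios) shows why the escape rate matters: even today’s modest 100 grams per second adds up over geologic time, and the young sun’s far stronger wind would have stripped much more. The figure used here for Mars’ present-day atmospheric mass is an outside assumption, not from the article:

```python
# Illustrative only: cumulative loss at today's measured escape rate,
# sustained over 4 billion years. The early solar wind was stronger,
# so the real total loss would be considerably larger.
SECONDS_PER_YEAR = 3.156e7
rate_kg_per_s = 0.1                  # 100 grams per second
years = 4e9
lost_kg = rate_kg_per_s * SECONDS_PER_YEAR * years
print(f"{lost_kg:.2e} kg")           # ~1.3e16 kg — already comparable to the
                                     # ~2.5e16 kg of Mars' thin atmosphere today
```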

A thicker atmosphere filled with carbon dioxide and other greenhouse gases could have insulated early Mars and kept it warm enough for liquid water and possibly life. Losing an extreme amount of gas, as the results suggest, may explain how the planet morphed from lush and wet to barren and icy, the researchers write.

Cells’ stunning complexity on display in a new online portal

Computers don’t have eyes, but they could revolutionize the way scientists visualize cells.

Researchers at the Allen Institute for Cell Science in Seattle have devised 3-D representations of cells, compiled by computers learning where thousands of real cells tuck their component parts.

Most drawings of cells in textbooks come from human interpretations gleaned by looking at just a few dead cells at a time. The new Allen Cell Explorer, which premiered online April 5, presents 3-D images of genetically identical stem cells grown in lab dishes (composite, above), revealing a huge variety of structural differences.
Each cell comes from a skin cell that was reprogrammed into a stem cell. Important proteins were tagged with fluorescent molecules so researchers could keep tabs on the cell membrane, DNA-containing nucleus, energy-generating mitochondria, microtubules and other cell parts. Using the 3-D images, computer programs learned where the cellular parts are in relation to each other. From those rules, the programs can generate predictive transparent models of a cell’s structure (below). The new views, which can capture cells at different time points, may offer clues into their inner workings.
The project’s tools are available for other researchers to use on various types of cells. Insights gained from the explorations might lead to a better understanding of human development, cancer, health and diseases.

Researchers have already learned from the project that stem cells aren’t the shapeless blobs they might appear to be, says Susanne Rafelski, a quantitative cell biologist at the Allen Institute. Instead, the stem cells have a definite bottom and top, a proposed structure that’s now confirmed by the combined cell data, Rafelski says. A solid foundation of skeleton proteins forms at the bottom. The nucleus is usually found in the cell’s center. Microtubules bundle together into large fibers that tend to radiate from the top of the cell toward the bottom. During cell division, microtubules form structures called bipolar spindles that are necessary to divvy up DNA.
One surprise was that the membrane surrounding the nucleus gets ruffled, but never completely disappears, during cell division. Near the top of the cell, above the nucleus, stem cells store tubelike mitochondria much the way plumbing and electrical wires are tucked into ceilings. The tubular mitochondria were notable because some researchers thought that since stem cells don’t require much energy, the organelles might separate into small, individual units.

Old ways of observing cells were like trying to get to know a city by looking at a map, Rafelski says. The cell explorer is more like a documentary of the lives of the citizens.

There’s still a lot we don’t know about the proton

Nuclear physicist Evangeline Downie hadn’t planned to study one of the thorniest puzzles of the proton.

But when opportunity knocked, Downie couldn’t say no. “It’s the proton,” she exclaims. The mysteries that still swirl around this jewel of the subatomic realm were too tantalizing to resist. The plentiful particles make up much of the visible matter in the universe. “We’re made of them, and we don’t understand them fully,” she says.

Many physicists delving deep into the heart of matter in recent decades have been lured to the more exotic and unfamiliar subatomic particles: mesons, neutrinos and the famous Higgs boson — not the humble proton.
But rather than chasing the rarest of the rare, scientists like Downie are painstakingly scrutinizing the proton itself with ever-higher precision. In the process, some of these proton enthusiasts have stumbled upon problems in areas of physics that scientists thought they had figured out.

Surprisingly, some of the particle’s most basic characteristics are not fully pinned down. The latest measurements of its radius disagree with one another by a wide margin, for example, a fact that captivated Downie. Likewise, scientists can’t yet explain the source of the proton’s spin, a basic quantum property. And some physicists have a deep but unconfirmed suspicion that the seemingly eternal particles don’t live forever — protons may decay. Such a decay is predicted by theories that unite disparate forces of nature under one grand umbrella. But decay has not yet been witnessed.

Like the base of a pyramid, the physics of the proton serves as a foundation for much of what scientists know about the behavior of matter. To understand the intricacies of the universe, says Downie, of George Washington University in Washington, D.C., “we have to start with, in a sense, the simplest system.”

Sizing things up
For most of the universe’s history, protons have been VIPs — very important particles. They formed just millionths of a second after the Big Bang, once the cosmos cooled enough for the positively charged particles to take shape. But protons didn’t step into the spotlight until about 100 years ago, when Ernest Rutherford bombarded nitrogen with radioactively produced particles, breaking up the nuclei and releasing protons.

A single proton in concert with a single electron makes up hydrogen — the most plentiful element in the universe. One or more protons are present in the nucleus of every atom. Each element has a unique number of protons, signified by an element’s atomic number. In the core of the sun, fusing protons generate heat and light needed for life to flourish. Lone protons are also found as cosmic rays, whizzing through space at breakneck speeds, colliding with Earth’s atmosphere and producing showers of other particles, such as electrons, muons and neutrinos.

In short, protons are everywhere. Even minor tweaks to scientists’ understanding of the minuscule particle, therefore, could have far-reaching implications. So any nagging questions, however small in scale, can get proton researchers riled up.

A disagreement of a few percent in measurements of the proton’s radius has attracted intense interest, for example. Until several years ago, scientists agreed: The proton’s radius was about 0.88 femtometers, or 0.88 millionths of a billionth of a meter — about a trillionth the width of a poppy seed.
But that neat picture was upended in the span of a few hours, in May 2010, at the Precision Physics of Simple Atomic Systems conference in Les Houches, France. Two teams of scientists presented new, more precise measurements, unveiling what they thought would be the definitive size of the proton. Instead, the figures disagreed by about 4 percent (SN: 7/31/10, p. 7). “We both expected that we would get the same number, so we were both surprised,” says physicist Jan Bernauer of MIT.

By itself, a slight revision of the proton’s radius wouldn’t upend physics. But despite extensive efforts, the groups can’t explain why they get different numbers. As researchers have eliminated simple explanations for the impasse, they’ve begun wondering if the mismatch could be the first hint of a breakdown that could shatter accepted tenets of physics.

The two groups each used different methods to size up the proton. In an experiment at the MAMI particle accelerator in Mainz, Germany, Bernauer and colleagues estimated the proton’s girth by measuring how much electrons’ trajectories were deflected when fired at protons. That test found the expected radius of about 0.88 femtometers (SN Online: 12/17/10).

But a team led by physicist Randolf Pohl of the Max Planck Institute of Quantum Optics in Garching, Germany, used a new, more precise method. The researchers created muonic hydrogen, a proton that is accompanied not by an electron but by a heftier cousin — a muon.

In an experiment at the Paul Scherrer Institute in Villigen, Switzerland, Pohl and collaborators used lasers to bump the muons to higher energy levels. The amount of energy required depends on the size of the proton. Because the more massive muon hugs closer to the proton than electrons do, the energy levels of muonic hydrogen are more sensitive to the proton’s size than ordinary hydrogen, allowing for measurements 10 times as precise as electron-scattering measurements.
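The “hugs closer” claim can be sketched with a simple Bohr-model estimate, in which the orbit radius scales inversely with the orbiting particle’s reduced mass (a back-of-envelope illustration, not the experiment’s actual analysis):

```python
# Bohr-model sketch: orbit radius ∝ 1 / (reduced mass of the orbiting pair).
# Masses below are in units of the electron mass.
m_e = 1.0            # electron
m_mu = 206.77        # muon, ~207 times heavier than the electron
m_p = 1836.15        # proton

def reduced_mass(m1: float, m2: float) -> float:
    return m1 * m2 / (m1 + m2)

# How much tighter the muon's orbit is compared with an electron's:
shrink = reduced_mass(m_mu, m_p) / reduced_mass(m_e, m_p)
print(round(shrink))  # the muon sits roughly 186 times closer to the proton
```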

Pohl’s results suggested a smaller proton radius, about 0.841 femtometers, a stark difference from the other measurement. Follow-up measurements of muonic deuterium — which has a proton and a neutron in its nucleus — also revealed a smaller than expected size, he and collaborators reported last year in Science. Physicists have racked their brains to explain why the two measurements don’t agree. Experimental error could be to blame, but no one can pinpoint its source. And the theoretical physics used to calculate the radius from the experimental data seems solid.
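The size of the disagreement follows directly from the two radii quoted above:

```python
# Percent gap between the electron-based and muonic-hydrogen proton radii.
r_electron = 0.88    # femtometers, from electron-scattering and hydrogen spectroscopy
r_muon = 0.841       # femtometers, from muonic hydrogen
gap_percent = (r_electron - r_muon) / r_electron * 100
print(f"{gap_percent:.1f}%")  # -> 4.4%, the "about 4 percent" discrepancy
```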

Now, more outlandish possibilities are being tossed around. An unexpected new particle that interacts with muons but not electrons could explain the difference (SN: 2/23/13, p. 8). That would be revolutionary: Physicists believe that electrons and muons should behave identically in particle interactions. “It’s a very sacred principle in theoretical physics,” says John Negele, a theoretical particle physicist at MIT. “If there’s unambiguous evidence that it’s been broken, that’s really a fundamental discovery.”

But established physics theories die hard. Shaking the foundations of physics, Pohl says, is “what I dream of, but I think that’s not going to happen.” Instead, he suspects, the discrepancy is more likely to be explained through minor tweaks to the experiments or the theory.

The alluring mystery of the proton radius reeled Downie in. During conversations in the lab with some fellow physicists, she learned of an upcoming experiment that could help settle the issue. The experiment’s founders were looking for collaborators, and Downie leaped on the bandwagon. The Muon Proton Scattering Experiment, or MUSE, to take place at the Paul Scherrer Institute beginning in 2018, will scatter both electrons and muons off of protons and compare the results. It offers a way to test whether the two particles behave differently, says Downie, who is now a spokesperson for MUSE.

A host of other experiments are in progress or planning stages. Scientists with the Proton Radius Experiment, or PRad, located at Jefferson Lab in Newport News, Va., hope to improve on Bernauer and colleagues’ electron-scattering measurements. PRad researchers are analyzing their data and should have a new number for the proton radius soon.

But for now, the proton’s identity crisis, at least regarding its size, remains. That poses problems for ultrasensitive tests of one of physicists’ most essential theories. Quantum electrodynamics, or QED, the theory that unites quantum mechanics and Albert Einstein’s special theory of relativity, describes the physics of electromagnetism on small scales. Using this theory, scientists can calculate the properties of quantum systems, such as hydrogen atoms, in exquisite detail — and so far the predictions match reality. But such calculations require some input — including the proton’s radius. Therefore, to subject the theory to even more stringent tests, gauging the proton’s size is a must-do task.
Spin doctors
Even if scientists eventually sort out the proton’s size snags, there’s much left to understand. Dig deep into the proton’s guts, and the seemingly simple particle becomes a kaleidoscope of complexity. Rattling around inside each proton is a trio of particles called quarks: one negatively charged “down” quark and two positively charged “up” quarks. Neutrons, on the flip side, comprise two down quarks and one up quark.

Yet even the quark-trio picture is too simplistic. In addition to the three quarks that are always present, a chaotic swarm of transient particles churns within the proton. Evanescent throngs of additional quarks and their antimatter partners, antiquarks, continually swirl into existence, then annihilate each other. Gluons, the particle “glue” that holds the proton together, careen between particles. Gluons are the messengers of the strong nuclear force, an interaction that causes quarks to fervently attract one another.
As a result of this chaos, the properties of protons — and neutrons as well — are difficult to get a handle on. One property, spin, has taken decades of careful investigation, and it’s still not sorted out. Quantum particles almost seem to be whirling at blistering speed, like the Earth rotating about its axis. This spin produces angular momentum — a quality of a rotating object that, for example, keeps a top revolving until friction slows it. The spin also makes protons behave like tiny magnets, because a rotating electric charge produces a magnetic field. This property is the key to the medical imaging procedure called magnetic resonance imaging, or MRI.

But, like nearly everything quantum, there’s some weirdness mixed in: There’s no actual spinning going on. Because fundamental particles like quarks don’t have a finite physical size — as far as scientists know — they can’t twirl. Despite the lack of spinning, the particles still behave like they have a spin, which can take on only certain values: integer multiples of 1/2.

Quarks have a spin of 1/2, and gluons a spin of 1. These spins combine to help yield the proton’s total spin. In addition, just as the Earth is both spinning about its own axis and orbiting the sun, quarks and gluons may also circle about the proton’s center, producing additional angular momentum that can contribute to the proton’s total spin.

Somehow, the spin and orbital motion of quarks and gluons within the proton combine to produce its spin of 1/2. Originally, physicists expected that the explanation would be simple. The only particles that mattered, they thought, were the proton’s three main quarks, each with a spin of 1/2. If two of those spins were oriented in opposite directions, they could cancel one another out to produce a total spin of 1/2. But experiments beginning in the 1980s showed that “this picture was very far from true,” says theoretical high-energy physicist Juan Rojo of Vrije University Amsterdam. Surprisingly, only a small fraction of the spin seemed to be coming from the quarks, befuddling scientists with what became known as the “spin crisis” (SN: 9/6/97, p. 158). Neutron spin was likewise enigmatic.

Scientists’ next hunch was that gluons contribute to the proton’s spin. “Verifying this hypothesis was very difficult,” Rojo says. It required experimental studies at the Relativistic Heavy Ion Collider, RHIC, a particle accelerator at Brookhaven National Laboratory in Upton, N.Y.

In these experiments, scientists collided protons that were polarized: The two protons’ spins were either aligned or pointed in opposite directions. Researchers counted the products of those collisions and compared the results for aligned and opposing spins. The results revealed how much of the spin comes from gluons. According to an analysis by Rojo and colleagues, published in Nuclear Physics B in 2014, gluons make up about 35 percent of the proton’s spin. Since the quarks make up about 25 percent, that leaves another 40 percent still unaccounted for.
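The arithmetic behind the unaccounted-for share of the spin is straightforward:

```python
# The proton's spin budget as described above, in percent of the total.
quark_spin = 25      # from the quarks' spins
gluon_spin = 35      # from the gluons (Rojo and colleagues, 2014 analysis)
missing = 100 - quark_spin - gluon_spin
print(missing)       # -> 40; orbital motion is the leading candidate
```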

“We have absolutely no idea how the entire spin is made up,” says nuclear physicist Elke-Caroline Aschenauer of Brookhaven. “We maybe have understood a small fraction of it.” That’s because each quark or gluon carries a certain fraction of the proton’s energy, and the lowest energy quarks and gluons cannot be spotted at RHIC. A proposed collider, called the Electron-Ion Collider (location to be determined), could help scientists investigate the neglected territory.

The Electron-Ion Collider could also allow scientists to map the still-unmeasured orbital motion of quarks and gluons, which may contribute to the proton’s spin as well.
An unruly force
Experimental physicists get little help from theoretical physics when attempting to unravel the proton’s spin and its other perplexities. “The proton is not something you can calculate from first principles,” Aschenauer says. Quantum chromodynamics, or QCD — the theory of the quark-corralling strong force transmitted by gluons — is an unruly beast. It is so complex that scientists can’t directly solve the theory’s equations.

The difficulty lies with the behavior of the strong force. As long as quarks and their companions stick relatively close, they are happy and can mill about the proton at will. But absence makes the heart grow fonder: The farther apart the quarks get, the more insistently the strong force pulls them back together, containing them within the proton. This behavior explains why no one has found a single quark in isolation. It also makes the proton’s properties especially difficult to calculate. Without accurate theoretical calculations, scientists can’t predict what the proton’s radius should be, or how the spin should be divvied up.
To simplify the math of the proton, physicists use a technique called lattice QCD, in which they imagine that the world is made of a grid of points in space and time (SN: 8/7/04, p. 90). A quark can sit at one point or another in the grid, but not in the spaces in between. Time, likewise, proceeds in jumps. In such a situation, QCD becomes more manageable, though calculations still require powerful supercomputers.

Lattice QCD calculations of the proton’s spin are making progress, but there’s still plenty of uncertainty. In 2015, theoretical particle and nuclear physicist Keh-Fei Liu and colleagues calculated the spin contributions from the gluons, the quarks and the quarks’ angular momentum, reporting the results in Physical Review D. By their calculation, about half of the spin comes from the quarks’ motion within the proton, about a quarter from the quarks’ spin, with the last quarter or so from the gluons. The numbers don’t exactly match the experimental measurements, but that’s understandable — the lattice QCD numbers are still fuzzy. The calculation relies on various approximations, so it “is not cast in stone,” says Liu, of the University of Kentucky in Lexington.

Death of a proton
Although protons seem to live forever, scientists have long questioned that immortality. Some popular theories predict that protons decay, disintegrating into other particles over long timescales. Yet despite extensive searches, no hint of this demise has materialized.

A class of ideas known as grand unified theories predict that protons eventually succumb. These theories unite three of the forces of nature, creating a single framework that could explain electromagnetism, the strong nuclear force and the weak nuclear force, which is responsible for certain types of radioactive decay. (Nature’s fourth force, gravity, is not yet incorporated into these models.) Under such unified theories, the three forces reach equal strengths at extremely high energies. Such energetic conditions were present in the early universe — well before protons formed — just a trillionth of a trillionth of a trillionth of a second after the Big Bang. As the cosmos cooled, those forces would have separated into three different facets that scientists now observe.
“We have a lot of circumstantial evidence that something like unification must be happening,” says theoretical high-energy physicist Kaladi Babu of Oklahoma State University in Stillwater. Beyond the appeal of uniting the forces, grand unified theories could explain some curious coincidences of physics, such as the fact that the proton’s electric charge precisely balances the electron’s charge. Another bonus is that the particles in grand unified theories fill out a family tree, with quarks becoming the kin of electrons, for example.

Under these theories, a decaying proton would disintegrate into other particles, such as a positron (the antimatter version of an electron) and a particle called a pion, composed of a quark and an antiquark, which itself eventually decays. If such a grand unified theory is correct and protons do decay, the process must be extremely rare — protons must live a very long time, on average, before they break down. If most protons decayed rapidly, atoms wouldn’t stick around long either, and the matter that makes up stars, planets — even human bodies — would be falling apart left and right.

Protons have existed for 13.8 billion years, since just after the Big Bang. So they must live exceedingly long lives, on average. But the particles could perish at even longer timescales. If they do, scientists should be able to monitor many particles at once to see a few protons bite the dust ahead of the curve (SN: 12/15/79, p. 405). But searches for decaying protons have so far come up empty.

Still, the search continues. To hunt for decaying protons, scientists go deep underground, for example, to a mine in Hida, Japan. There, at the Super-Kamiokande experiment (SN: 2/18/17, p. 24), they monitor a giant tank of water — 50,000 metric tons’ worth — waiting for a single proton to wink out of existence. After watching that water tank for nearly two decades, the scientists reported in the Jan. 1 Physical Review D that protons must live longer than 1.6 × 10³⁴ years on average, assuming they decay predominantly into a positron and a pion.
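The scale of that limit follows from back-of-the-envelope arithmetic (a rough sketch using the article’s 50,000-metric-ton figure for the full tank; the experiment’s actual analysis uses a smaller fiducial volume and detailed detection efficiencies):

```python
AVOGADRO = 6.022e23          # molecules per mole
WATER_MOLAR_MASS = 18.0      # grams per mole of H2O
PROTONS_PER_MOLECULE = 10    # 2 in the hydrogens + 8 in the oxygen

tank_grams = 50_000 * 1e6    # 50,000 metric tons of water, in grams
molecules = tank_grams / WATER_MOLAR_MASS * AVOGADRO
protons = molecules * PROTONS_PER_MOLECULE   # roughly 1.7e34 protons

# If the average proton lifetime were 1.6e34 years, the expected number
# of decays per year in the tank would be about N / tau:
lifetime_years = 1.6e34
decays_per_year = protons / lifetime_years
print(f"protons in tank: {protons:.2e}")
print(f"expected decays per year: {decays_per_year:.1f}")
```

Watching roughly 10³⁴ protons for two decades and seeing none decay is precisely what pushes the lifetime limit past 10³⁴ years.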

Experimental limits on the proton lifetime “are sort of painting the theorists into a corner,” says Ed Kearns of Boston University, who searches for proton decay with Super-K. If a new theory predicts a proton lifetime shorter than what Super-K has measured, it’s wrong. Physicists must go back to the drawing board until they come up with a theory that agrees with Super-K’s proton-decay drought.

Many grand unified theories that remain standing in the wake of Super-K’s measurements incorporate supersymmetry, the idea that each known particle has another, more massive partner. In such theories, those new particles are additional pieces in the puzzle, fitting into an even larger family tree of interconnected particles. But theories that rely on supersymmetry may be in trouble. “We would have preferred to see supersymmetry at the Large Hadron Collider by now,” Babu says, referring to the particle accelerator located at the European particle physics lab, CERN, in Geneva, which has consistently come up empty in supersymmetry searches since it turned on in 2009 (SN: 10/1/16, p. 12).

But supersymmetric particles could simply be too massive for the LHC to find. And some grand unified theories that don’t require supersymmetry still remain viable. Versions of these theories predict proton lifetimes within reach of an upcoming generation of experiments. Scientists plan to follow up Super-K with Hyper-K, with an even bigger tank of water. And DUNE, the Deep Underground Neutrino Experiment, planned for installation in a former gold mine in Lead, S.D., will use liquid argon to detect protons decaying into particles that the water detectors might miss.
If protons do decay, the universe will become frail in its old age. According to Super-K, sometime well after its 10³⁴th birthday, the cosmos will become a barren sea of light. Stars, planets and life will disappear. If seemingly dependable protons give in, it could spell the death of the universe as we know it.

Although protons may eventually become extinct, proton research isn’t going out of style anytime soon. Even if scientists resolve the dilemmas of radius, spin and lifetime, more questions will pile up — it’s part of the labyrinthine task of studying quantum particles that multiply in complexity the closer scientists look. These deeper studies are worthwhile, says Downie. The inscrutable proton is “the most fundamental building block of everything, and until we understand that, we can’t say we understand anything else.”

Top 10 science anniversaries of 2017

Every year science offers a diverse menu of anniversaries to celebrate. Births (or deaths) of famous scientists, landmark discoveries or scientific papers — significant events of all sorts qualify for celebratory consideration, as long as the number of years gone by is some worthy number, like 25, 50, 75 or 100. Or simple multiples thereof with polysyllabic names.

2017 has more than enough such anniversaries for a Top 10 list, so some worthwhile events don’t even make the cut, such as the births of Stephen Hawking (1942) and Arthur C. Clarke (1917). The sesquicentennial of Michael Faraday’s death (1867) almost made the list, but was bumped at the last minute by a book. Namely:

  1. On Growth and Form, centennial (1917)
    A true magnum opus, by the Scottish biologist D’Arcy Wentworth Thompson, On Growth and Form has inspired many biologists with its mathematical analysis of physical and structural forces underlying the diversity of shapes and forms in the biological world. Nobel laureate biologist Sir Peter Medawar praised Thompson’s book as “beyond comparison the finest work of literature in all the annals of science that have been recorded in the English tongue.”
  2. Birth of Abraham de Moivre, semiseptcentennial (1667).
    Born in France on May 26, 1667, de Moivre moved as a young man to London where he did his best work, earning election to the Royal Society. Despite exceptional mathematical skill, though, he attained no academic position and earned a meager living as a tutor. He is most famous for his book The Doctrine of Chances, which was in essence an 18th century version of Gambling for Dummies. It contained major advances in probability theory and in later editions introduced the concept of the famous bell curve. Isaac Newton was impressed; the legend goes that when anyone asked him about probability, Newton said to go talk to de Moivre.
  3. Exoplanets, quadranscentennial (1992)
    It seems like exoplanets have been around almost forever (and probably actually were), but the first confirmed by Earthbound astronomers were reported just a quarter century ago. Three planets showed up orbiting not an ordinary star, but a pulsar, a rapidly spinning neutron star left behind by a supernova.
    Astrophysicists Aleksander Wolszczan and Dale Frail found a sign of the planets, first detected with the Arecibo radio telescope, in irregularities in the radio pulses from the millisecond pulsar PSR1257+12. Some luck was involved. In 1990, the Arecibo telescope was being repaired and couldn’t pivot to point at a specific target; instead it constantly watched just one region of the sky. PSR1257+12 just happened to float by.
  4. Birth of Marie Curie, sesquicentennial (1867)
    No doubt the most famous Polish-born scientist since Copernicus, Curie was born in Warsaw on November 7, 1867, as Maria Sklodowska. Challenged by poverty, family tragedies and poor health, she nevertheless excelled as a high school student. But she then worked as a governess, continuing her science education as best she could, until her married sister invited her to Paris. There she completed her physics education with honors and met and married another young physicist, Pierre Curie.

Together they tackled the mystery of the newly discovered radioactivity, winning the physics Nobel in 1903 along with radioactivity’s discoverer, Henri Becquerel. Marie continued the work after her husband’s tragic death in 1906; she became the first person to win a second Nobel, awarded in chemistry in 1911 for her discovery of the new radioactive elements polonium and radium.

  5. Laws of Robotics, semisesquicentennial (1942)
    One of science fiction’s greatest contributions to modern technological philosophy was Isaac Asimov’s Laws of Robotics, which first appeared in a short story in the March 1942 issue of Astounding Science Fiction. Later, those laws formed the motif of his many robot novels and appeared in his famous Foundation Trilogy (and subsequent sequels and prequels). They were:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Much later Asimov added a “zeroth law,” requiring robots to protect all of humankind even if that meant violating the other three laws. Artificial intelligence researchers all know about Asimov’s laws, but somehow have not managed to enforce them on social media. Incidentally, this year is also the quadranscentennial of Asimov’s death in 1992.

  6. First sustained nuclear fission chain reaction, semisesquicentennial (1942)
    Enrico Fermi, the Italian Nobel laureate, escaped fascist Italy to come to the United States shortly after nuclear fission’s discovery in Germany. Fermi directed construction of the “atomic pile,” or nuclear reactor, on a squash court under the stands of the University of Chicago’s football stadium. Fermi and his collaborators showed that neutrons emitted from fissioning uranium nuclei could induce more fission, creating a chain reaction capable of releasing enormous amounts of energy. Which it later did.
  7. Discovery of pulsars, semicentennial (1967)
    Science’s awareness of the existence of pulsars turns 50 this year, thanks to the diligence of Irish astrophysicist Jocelyn Bell Burnell. She spent many late-night hours examining data from the radio telescope she helped build, the instrument that first spotted a pulsar’s signal. She recognized that the signal was something special even though others dismissed it as a glitch in the apparatus. But she was a graduate student, so her supervisor received the Nobel Prize instead.
  8. Einstein’s theory of lasers, centennial (1917)
    Albert Einstein did not actually invent the laser, but he developed the mathematical understanding that made lasers possible. By 1917, physicists knew that quantum physics played a part in the working of atoms, but the details were fuzzy. Niels Bohr had shown in 1913 that an atom’s electrons occupy different energy levels, and that falling from a high energy level to a lower one emits radiation.

Einstein worked out the math describing this process when many atoms have electrons in high-energy states and emit radiation. His analysis of matter-radiation interaction indicated that it would be possible to prepare many atoms in the same high-energy state and then stimulate them to emit radiation all at once. Properly done, all the atoms would emit radiation of identical wavelength with the waves in phase. A few decades later other physicists figured out how to build such a device for use as a powerful weapon or to read bar codes at grocery stores.

  9. Qubits, quadranscentennial (1992)
    An even better quantum anniversary than lasers is the presentation to the world of the concept of quantum bits of information. Physicist Ben Schumacher of Kenyon College in Ohio unveiled the idea at a conference in Dallas in 1992 (I was there). A “quantum bit” of information, or qubit, represents the information contained in a quantum particle, which can exist in multiple states at once. A photon, for instance, might simultaneously be in a superposition of horizontal and vertical polarization. Or an electron’s spin could be up and down at the same time.

Such states differ from classical bits of information in a computer, recorded as either a 0 or 1; a quantum bit is both 0 and 1 at the same time. It becomes one or the other only when observed, much like a flipped coin is neither heads nor tails until somebody catches it, or it lands on the 50-yard line. Schumacher’s idea did not get a lot of attention at first, but it eventually became the foundational idea for quantum information theory, a field now booming with efforts to construct a quantum computer based on the manipulation of qubits.
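The arithmetic behind a single qubit fits in a few lines (a minimal numpy sketch; the states |0⟩ and |1⟩ are the standard computational basis, not anything specific to Schumacher’s talk):

```python
import numpy as np

# A qubit state is a 2-component complex vector: amplitudes for |0> and |1>.
# An equal superposition of 0 and 1:
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
p0, p1 = np.abs(psi) ** 2
print(p0, p1)  # each is 0.5, up to floating-point rounding

# A classical bit, by contrast, is always exactly one basis state:
classical_zero = np.array([1, 0], dtype=complex)  # definitely 0, never 1
```

Only on measurement does the superposition resolve to a definite 0 or 1, with those probabilities — the coin landing in somebody’s hand.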

  10. Birth of modern cosmology, centennial (1917)
    It might seem unfair that Einstein gets two Top 10 anniversaries in 2017, but 1917 was a good year for him. Before publishing his laser paper, Einstein tweaked the equations of his brand-new general theory of relativity in order to better explain the universe (details in Part 1). Weirdly, Einstein didn’t understand the universe, and he later thought the term he added to his equations was a mistake. But it turns out that today’s understanding of the universe’s behavior — expanding at an accelerating rate — seems to require the term that Einstein thought he had added erroneously. But you can’t expect Einstein to have foreseen everything. He probably had no idea that lasers would revolutionize grocery shopping either.

The scales of the ocellated lizard are surprisingly coordinated

A lizard’s intricately patterned skin follows rules like those used by a simple type of computer program.

As the ocellated lizard (Timon lepidus) grows, it transforms from a drab, polka-dotted youngster to an emerald-flecked adult. Its scales first morph from white and brown to green and black. Then, as the animal ages, individual scales flip from black to green, or vice versa.

Biophysicist Michel Milinkovitch of the University of Geneva realized that the scales weren’t changing their colors by chance. “You have chains of green and chains of black, and they form this labyrinthine pattern that very clearly is not random,” he says. That intricate ornamentation, he and colleagues report April 13 in Nature, can be explained by a cellular automaton, a concept developed by mathematicians in the 1940s and ’50s to simulate diverse complex systems.
A cellular automaton is composed of a grid of colored pixels. Using a set of rules, each pixel has a chance of switching its shade, based on the colors of surrounding pixels. By comparing photos of T. lepidus at different ages, the scientists showed that its scales obey such rules.
In the adult lizard, if a black scale is surrounded by other black scales, it is more likely to switch than a black one bounded by green, the researchers found. Eventually, the lizards’ scales settle down into a mostly stable state. Black scales wind up with around three green neighbors, and green scales have around four black ones. The researchers propose that interacting pigment cells could explain the color flips.
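A probabilistic cellular automaton in this spirit can be sketched in a few lines (a simplified square grid with four neighbors rather than the lizard’s hexagonal scale lattice, and an illustrative flip rule — a scale is likelier to switch the more same-colored neighbors it has — not the researchers’ fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(grid, rng):
    """One update of a toy probabilistic cellular automaton.

    Each cell (0 = green, 1 = black) counts its same-colored neighbors
    on a wrap-around square grid; its chance of flipping color grows
    with that count (an illustrative rule, not the fitted lizard model).
    """
    same = np.zeros_like(grid, dtype=float)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        same += (np.roll(grid, shift, axis=axis) == grid)
    flip_prob = 0.2 * same / 4.0       # more same-color neighbors -> likelier flip
    flips = rng.random(grid.shape) < flip_prob
    return np.where(flips, 1 - grid, grid)

grid = rng.integers(0, 2, size=(20, 20))  # random start of greens and blacks
for _ in range(100):
    grid = step(grid, rng)
# Repeated updates push cells away from like-colored clumps, the kind of
# local rule that can settle into labyrinthine chains of alternating color.
```

The design choice mirrors the paper’s observation: a rule that penalizes being surrounded by your own color naturally drives the pattern toward interlocking chains rather than solid patches.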

Computer scientists use cellular automata to simulate the real world, re-creating the turbulent motions of fluids or nerve cell activity in the brain, for example. But the new study is the first time the process has been seen with the naked eye in a real-life animal.
The scales on an ocellated lizard change color as the animal ages (more than three years of growth shown in first clip). Circles highlight four instances of color-flipping scales. Blue circles indicate a scale that switches from green to black, the green circle indicates a black to green transformation, and the light blue circle marks a scale that flip-flops from green to black to green. Researchers used a cellular automaton to simulate the adult lizard’s color-swapping scales (second clip), and re-create the labyrinthine patterns that develop on its skin.