New habitat monitoring tools find hope for tigers

There’s still enough forest left — if protected wisely — to meet the goal of doubling the number of wild tigers (Panthera tigris) by 2022, says an international research team.

That ambitious target, set by a summit of 13 tiger-range nations in 2010, aims to reverse the species’ alarming plunge toward extinction. Forest loss, poaching and dwindling prey have driven tiger numbers below 3,500 individuals.

The existing forest habitat could sustain the doubling if, for instance, safe-travel corridors connect forest patches, according to researchers monitoring forest loss with free, anybody-can-use-’em Web tools. Previously, habitat monitoring was piecemeal, in part because satellite imagery could be expensive and required special expertise, says Anup Joshi of the University of Minnesota in St. Paul. But Google Earth Engine and Global Forest Watch provide faster, easier, more consistent ways to keep an eye out for habitat losses as small as 30 meters by 30 meters (the space revealed in a pixel).
After examining 14 years of data, the researchers report April 1 in Science Advances that the 76 major tiger landscapes altogether have lost less than 8 percent of their forest. Finding so little loss is “remarkable and unexpected,” they write. But 10 of those landscapes account for most of the losses — highlighting the challenges conservationists, and tigers, face.
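
The arithmetic behind such estimates is simple. As a rough, purely illustrative sketch (the pixel counts below are invented, not the study’s figures), percent forest loss falls out of tallying 30-meter pixels:

```python
# Minimal sketch (not the researchers' code): tallying forest loss from
# 30 m x 30 m pixels, as in Global Forest Watch-style change maps.
# The pixel counts below are made-up placeholders.

PIXEL_AREA_HA = 30 * 30 / 10_000   # one Landsat-scale pixel = 0.09 hectare

forest_pixels_baseline = 120_000_000   # hypothetical baseline forest pixels
loss_pixels_14_years = 8_500_000       # hypothetical pixels flagged as lost

baseline_ha = forest_pixels_baseline * PIXEL_AREA_HA
lost_ha = loss_pixels_14_years * PIXEL_AREA_HA
percent_lost = 100 * lost_ha / baseline_ha

print(f"Baseline forest: {baseline_ha:,.0f} ha")
print(f"Lost over 14 years: {lost_ha:,.0f} ha ({percent_lost:.1f}%)")
```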

Vaccines could counter addictive opioids

By age 25, Patrick Schnur had cycled through a series of treatment programs, trying different medications to kick his heroin habit. But the drugs posed problems too: Vivitrol injections were painful and created intense heroin cravings as the drug wore off. Suboxone left him drowsy, depressed and unable to study or go running like he wanted to. Determined to resume the life he had before his addiction, Schnur decided to hunker down and get clean on his own.

In December 2015, he had been sober for two years and had just finished his first semester of college, with a 4.0 grade point average. Yet, just before the holidays, he gave in to the cravings. Settling into his dorm room, he stuck a needle in his vein. It was his last shot.
Scientists are searching for a different kind of shot to prevent such tragedies: a vaccine to counter addiction to heroin and other opioids, such as the prescription painkiller fentanyl and similar knockoff drugs. In some ways, the vaccines work like traditional vaccines for infectious diseases such as measles, priming the immune system to attack foreign molecules. But instead of targeting viruses, the vaccines zero in on addictive chemicals, training the immune system to usher the drugs out of the body before they can reach the brain.

Such a vaccine may have helped Schnur, a onetime computer whiz who grew up in the Midwest, far removed from the hard edges of the drug world. His overdose death reflects a growing heroin epidemic and alarming trend. In the 1960s, heroin was seen as a hard-core street drug abused mostly in inner cities. Now heroin is a problem in many suburban and rural towns across America, where it is used primarily by young, white adults — male and female, according to research published by psychiatrist Theodore Cicero of Washington University in St. Louis and colleagues in 2014 in JAMA Psychiatry.
His team’s surveys of nearly 2,800 patients in substance abuse treatment programs suggest a shift in the demographics of heroin users in recent years. In the 1960s, more than 80 percent of users took heroin as their first opioid. From 2000 to 2010, 75 percent of heroin users came to the drug because it was easier to get and less expensive than the prescription opioids they had been taking.

In recent decades, overdoses of both illicit and prescription drugs have surged. In 2014, overdose deaths surpassed deaths from motor vehicle accidents, the U.S. Centers for Disease Control and Prevention reported in January. In that year, 28,647 people died of opioid-related overdoses, primarily from prescription pain relievers and heroin.

“The opioid epidemic is devastating and the number of people dying demands an urgent intervention,” says Nora Volkow, director of the U.S. National Institute on Drug Abuse.

A family of drugs
The term opioid refers to a host of painkillers derived from the opium poppy as well as synthetic versions of its active compounds. Heroin is processed from morphine, which is extracted from the plant. Prescription medications such as Vicodin, Percocet, OxyContin and fentanyl are made from synthetic morphine, altered to produce different effects.

Currently, three medications, sold under various brand names, are available to help people with heroin or opioid addiction get clean and stay drug-free: methadone, buprenorphine and naltrexone. The treatments work, Volkow says, but not perfectly. Some addicted patients, such as Schnur, experience unwanted side effects from the daily or monthly treatments and stop using them. Others lack access to treatments due to high costs and strict federal limits on dispensing the drugs.

“Unfortunately, only a small percentage — about 25 percent — of people who could benefit from treatment actually get these medications,” Volkow says.
Round two for vaccines
Vaccines could offer an alternative to patients who have kicked their habit and want to stay clean, scientists say. The vaccines aim to make an addict immune to a drug’s effects, decreasing the motivation to seek more of the drug. That’s important, Volkow says, because over time the treatment may allow recovery of the overactive circuitry in the brain that pushes drug users to keep using.

The idea of antidrug vaccines isn’t new. Scientists began working on formulations in the 1970s, but those efforts were eclipsed by the availability of methadone. Methadone, a synthetic opioid, relieves withdrawal symptoms and cravings for heroin or prescription painkillers by acting on the same brain targets as the drugs, but in a slow, controlled manner, so patients can function normally without feeling high. But the treatment is a method for harm reduction, not a cure for addiction, and must be taken daily to be effective.

In the late 1990s, scientists resumed antidrug vaccine efforts, focusing on vaccines for everything from cocaine to nicotine to heroin (SN: 2/10/07, p. 90). Vaccines for nicotine and cocaine were tested in people, but worked for only a small percentage.

Now, to help combat the growing opioid addiction crisis, two vaccines for heroin users are advancing toward human trials and other antiopioid vaccines are in the pipeline, including one for fentanyl, now a popular street drug.

Among the antiheroin vaccines being tested, one — developed at the Scripps Research Institute in La Jolla, Calif. — spurs the immune system to attack heroin and helps eliminate it from the body so effectively that it can neutralize even lethal levels of the drug in animals. A second antiheroin vaccine, developed at the Walter Reed Army Institute of Research in Silver Spring, Md., goes after two closely linked problems: It keeps heroin from reaching the brain while preventing HIV infection.

Addiction’s grip
Once a person is addicted, the fight to stay clean never ends, Volkow says. That’s because heroin and other addictive substances alter the brain’s pleasure circuits, producing changes that persist long after users stop taking the drug. Volkow, who has studied these effects for more than two decades, says addiction is a brain disease because of the structural and functional changes that occur.
Drugs of abuse produce their high by interacting with cells located in brain areas that govern reward, including the nucleus accumbens, a key region in the pleasure circuit. Though each type of drug works in a slightly different way, all addictive drugs increase the amount of the chemical dopamine in this area. Dopamine is a neurotransmitter, carrying signals between nerve cells, or neurons.

Opioids boost dopamine levels by stimulating molecules called mu receptors that sit on the surface of certain neurons. Normally, these receptors are activated by hormones and brain chemicals made in the body, such as endorphins, to reinforce pleasurable behavior such as eating, having sex or listening to music. A single dose of heroin, however, releases many times the amount of dopamine produced by a favorite food or song.

Dopamine fuels the high that people feel from taking an addictive drug, but other molecules help to get people hooked. Glutamate, a neurotransmitter that increases the chatter among cells in areas that govern learning and boost motivation, helps engrave the experience of a drug’s high into the brain. Memories of the high become so enduring that years later they can be reawakened. This long-lasting pull is why more than 60 percent of people with addiction experience relapse within the first year after they are discharged from treatment.

Taken over time, drugs of abuse can change signaling in a number of the brain’s circuits. Last year in Cell, Volkow and NIDA biochemist Marisela Morales outlined two common features of the addicted brain: a decreased sensitivity in the brain’s reward centers and disruption of circuits involved in self-control.

With repeated drug use, the number of dopamine receptors declines as the brain attempts to calm down, Volkow says. With fewer receptors available to take up dopamine molecules, it takes more stimulation to produce feelings of pleasure. Addicts soon find that they are no longer motivated by everyday activities that had been enjoyable or exciting, and they need higher doses of the drug to get the euphoric feelings once provided by smaller doses.

“The brain rapidly learns that the only thing that’s going to stimulate these pleasure circuits is the drug,” Volkow says. “That’s one of the components that drives drug-seeking behavior.” Eventually, the drug no longer produces a high. Instead, it becomes a necessity to stave off feelings of anxiety and despair.

Addiction also impairs dopamine functioning in the prefrontal cortex, an area of the brain that includes regions involved in analysis, decision making and self-control. “Taking drugs interferes with one’s capacity to make good decisions” and follow through, Volkow says. “An addict might say ‘I don’t want to take that drug.’ But they don’t have the capacity to easily change their behavior.”

Protect the brain
Vaccines, potentially, offer a “transformative” way to treat addiction, Volkow says, because the treatments can train the immune system to attack drug molecules before they reach the brain. Vaccines typically contain an agent that resembles a disease-causing virus, teaching the immune system to respond quickly when it encounters the invader. In designing vaccines, scientists try to provoke at least one of the human body’s primary immune responders: T cells, which attack infected cells, or B cells, which release antibodies that recognize hostile molecules and attach to them, targeting them for destruction.

Easier said than done. For starters, drug molecules are tiny, much smaller than a bacterium or virus, and are not easily detected by the immune system. In addition, the body’s immune system is set up to fight invaders that arrive in small groups. When an influenza virus makes its way into a body, the initial levels of virus in the blood are very low, Volkow says. But when people inject heroin, for example, many millions of drug molecules and their breakdown products quickly rush into the bloodstream. In recent years, researchers have found new ways to help call the immune system’s attention to such surges of “invading” drugs.

While developing one heroin vaccine, chemist Kim Janda of Scripps and colleagues noticed that antibodies to heroin molecules alone didn’t stop animals from getting high. That’s because once heroin gets into the body — whether it’s injected, snorted or smoked — it is broken down into its active components, 6-acetylmorphine, or 6-AM, and morphine. “Those two metabolites are the real drugs in heroin,” Janda says.

Typically, vaccines lead to production of antibodies that target a single invader. To get the immune system to notice both heroin and its metabolites, Janda joined forces with neurobiologist George Koob, director of the National Institute on Alcohol Abuse and Alcoholism, to design a multitarget vaccine. The vaccine “cocktail,” as Janda calls it, has three components: a large protein that carries the druglike molecules into the body; a molecule called a hapten, chemically designed to induce an immune response to heroin and its metabolites 6-AM and morphine; and finally, alum, an agent commonly added to vaccines to stimulate release of cytokines, proteins that help rally the immune cells to fight invaders.

Over the last six years, Janda’s group has tinkered with the hapten to help the antibodies get a tight grip on heroin, 6-AM and morphine. The hapten, along with the protein carrier, draws attention from the immune system’s T cells, which learn to recognize the drug molecules as invaders. Later, if heroin or its metabolites are detected in the blood, the T cells will “remember” the invaders and remove them.
In rats, the three-pronged vaccine generated high numbers of antibodies against the drug and its metabolites, blocking heroin’s action on the brain. Once vaccinated, the formerly addicted rats were unable to get high, even when injected with extremely high doses of the drug, Janda’s group reported in 2013 in the Proceedings of the National Academy of Sciences. The result was decreased drug-seeking behavior in the vaccinated rats. By contrast, control rats, and those vaccinated only against morphine, continued to seek higher doses of the drug.

The vaccine showed similar effectiveness in nonhuman primates, Janda reported in May at the American Psychiatric Association’s annual meeting in Atlanta. In addition, the vaccine is specific to heroin metabolites, not other opiates. A vaccine that’s too broad could potentially make patients immune to the effects of all prescription opioids, leaving them vulnerable if they become injured and need pain relief.

Janda’s team recently tested another antiopioid vaccine in animals, one that arms the body against fentanyl. When given to mice, the vaccine trained the animals’ immune systems to generate antibodies that bind to fentanyl and prevent it from traveling to the brain from the bloodstream. The results, published March 7 in Angewandte Chemie, showed that in mice, the antibodies neutralized high levels of the drug — more than 30 times a normal dose — for months after a series of three shots. By blocking the effects of the drug and its high, the vaccine could potentially curb drug-seeking behavior.

Another group is going after heroin and its strong tie to high HIV infection rates worldwide. Scientists at the Walter Reed Army Institute of Research are developing a dual-purpose vaccine, called H2, to treat heroin addiction while preventing HIV infection.

Biochemist Gary Matyas and his group at Walter Reed first designed a vaccine to stimulate antibodies against heroin. As in Janda’s antiheroin vaccine, haptens are bound to a protein carrier, spurring the immune system to create high levels of antibodies that bind heroin and its metabolites in the blood and prevent them from crossing the blood-brain barrier. Users would then experience no euphoria or addictive reaction.

The researchers plan to combine the heroin vaccine with an HIV vaccine, a combination that’s much trickier to develop. Scientists have long been frustrated by the ability of the AIDS virus to mutate and evade the immune system. The virus constantly changes the makeup of the proteins on its surface so that antibodies have difficulty recognizing and attacking it. But researchers have found that targeting a region called V2 on the surface of the virus decreased the risk of HIV infection.

The vaccine, tested in volunteers in Thailand by the country’s Ministry of Public Health and Walter Reed scientists, protected about a third of participants against HIV infection, according to a 2009 report.

There’s no timeline for moving the H2 vaccine into human trials, Matyas says. His hope is that the vaccine will concurrently address the entwined epidemics. “If you can reduce heroin use, you can reduce the spread of HIV,” he says. “That’s why we’re focusing on both heroin and HIV in one vaccine.”

Extra help
While vaccines can’t be the only treatment for the opioid epidemic, they could offer users who want to abstain an additional and much needed option to deal with addiction. It’s not unusual for people to relapse, or to require more than one type of treatment, before finding a course of recovery that suits them, Volkow says.
Treating addiction like a disease that needs to be managed, such as diabetes or high blood pressure, with a multiplicity of treatment options would help addicts find a treatment that works well for them over the long haul, she says.

“Addiction is an extremely serious disease, with a high mortality rate and devastating consequences,” Volkow says. “We need to treat it very aggressively, and we need to have a variety of interventions so if one doesn’t work we have something else to offer the patient.”

Because relapse is common in addiction, Janda says he thinks that the antidrug vaccines’ value will come in helping people who want to abstain, but might falter in a weak moment. “Even if they try to do the drug, they’re not going to get the reward effects of the drug,” he says. “That means that they won’t spiral out of control and have to start all over again.”

Kathy Schnur, Patrick’s mother, remembers how, years into her son’s treatment, when the conversation turned to heroin — its euphoric high and mysterious spell — her son would confess to a desire to taste the drug “one more time.” A heroin vaccine would have taken a relapse off the table, she says. He would no longer have needed to make a daily decision to stay clean.

“If he knew he couldn’t get what he expected from the drug, it would remain a nonevent,” Schnur says. “Or, if he slipped up and tried it just one more time, the vaccine would prevent an overdose.”

Nail-biting and thumb-sucking may not be all bad

There are plenty of reasons to tell kids not to bite their nails or suck their thumbs. Raw fingernail areas pick up infection, and thumbs can eventually move teeth into the wrong place. Not to mention these habits slop spit everywhere. But these bad habits might actually be good for something: Kids who sucked their thumbs or chewed their nails had lower rates of allergic reactions in lab tests, a new study finds.

The results come from a group of more than 1,000 children in New Zealand. When the kids were ages 5, 7, 9 and 11, their parents were asked if the kids sucked their thumbs or bit their nails. At age 13, the kids came into a clinic for an allergen skin prick test. That’s a procedure in which small drops of common allergens such as pet dander, wool, dust mites and fungus are put into a scratch on the skin to see if they elicit a reaction.

Kids whose parents said “certainly” to the question of thumb-sucking or nail-biting were less likely to react to allergens in the skin prick test, respiratory doctor Robert Hancox of the University of Otago in New Zealand and colleagues report July 11 in Pediatrics. And this benefit seemed to last. The childhood thumb-suckers and nail-biters still had fewer allergic reactions at age 32.

The results fit with other examples of the benefits of germs. Babies whose parents cleaned dirty pacifiers by popping them into their own mouths were more protected against allergies. And urban babies exposed to roaches, mice and cats had fewer allergies, too. These scenarios all get more germs in and on kids’ bodies. And that may be a good thing. An idea called the hygiene hypothesis holds that exposure to germs early in life can train the immune system to behave itself, preventing overreactions that may lead to allergies and asthma.

It might be the case that germy mouths bring benefits, but only when kids are young. Hancox and his colleagues don’t know when the kids in their study first started sucking thumbs or biting nails, but having spent time around little babies, I’m guessing it was pretty early.

So does this result mean that parents shouldn’t discourage — or even encourage — these habits? Hancox demurs. “We don’t have enough evidence to suggest that parents change what they do,” he says. Still, the results may offer some psychological soothing, he says. “Perhaps if children have habits that are difficult to break, there is some consolation for parents that there might be a reduced risk of developing allergy.”

Tabby’s star drama continues

A star that made headlines for its bizarre behavior has got one more mystery for astronomers to ponder.

Tabby’s star, also known as KIC 8462852, has been inexplicably flickering and fading. The Kepler Space Telescope caught two dramatic drops in light — by up to 22 percent — spaced nearly two years apart. Photographs from other telescopes dating back to 1890 show that the star also faded by roughly 20 percent over much of the last century. Possible explanations for the behavior range from mundane comet swarms to fantastical alien engineering projects (SN Online: 2/2/16).
A new analysis of data from Kepler, NASA’s premier planet hunter, shows that Tabby’s star steadily darkened throughout the telescope’s primary four-year mission. That’s in addition to the abrupt flickers already seen during the same time period. Over the first 1,100 days, the star dimmed by nearly 1 percent. Then the light dropped another 2.5 percent over the following six months before leveling off during the mission’s final 200 days.

Astronomers Benjamin Montet of Caltech and Josh Simon of the Observatories of the Carnegie Institution of Washington in Pasadena, Calif., report the findings online August 4 at arXiv.org.

The new data support a previous claim that the star faded between 1890 and 1989, a claim that some researchers questioned. “It’s just getting stranger,” says Jason Wright, an astronomer at Penn State University. “This is a third way in which the star is weird. Not only is it getting dimmer, it’s doing so at different rates.”
The slow fading hadn’t been noticed before because data from Kepler are processed to remove long-term trends that might confuse planet-finding algorithms. To find the dimming, Montet and Simon analyzed images from the telescope that are typically used only to calibrate data.
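
To see what that kind of measurement involves, here is a minimal, purely illustrative sketch of pulling a slow fade out of long-baseline brightness data once short-term detrending is set aside. The flux values are synthetic stand-ins, not Kepler data:

```python
# Minimal sketch, not the authors' pipeline: measuring a slow dimming trend
# from long-baseline photometry. The "flux" values are synthetic, mimicking
# roughly a 1 percent fade over about 1,100 days.
import numpy as np

days = np.linspace(0, 1100, 40)                    # observation times (days)
flux = 1.0 - 0.01 * days / 1100                    # fake relative brightness
flux += np.random.default_rng(0).normal(0, 5e-4, days.size)  # measurement noise

slope, intercept = np.polyfit(days, flux, 1)       # straight-line fit
print(f"dimming rate ≈ {-slope * 365 * 100:.2f} percent per year")
```
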
“Their analysis is very thorough,” says Tabetha Boyajian, an astronomer at Yale University who in 2015 reported the two precipitous drops in light (and for whom the star is nicknamed). “I see no flaws in that at all.”

While the analysis is an important clue, it doesn’t yet explain the star’s erratic behavior. “It doesn’t push us in any direction because it’s nothing that we’ve ever encountered before,” says Boyajian. “I’ve said ‘I don’t know’ so many times at this point.”

An object (or objects) moving in front of the star and blocking some of the light is still the favored explanation — though no one has figured out what that object is. The drop in light roughly 1,100 days into Kepler’s mission is reminiscent of a planet crossing in front of a star, Montet says. But given how slowly the light dropped, such a planet (or dim star) would have to live on an orbit more than 60 light-years across. The odds of catching a body on such a wide, slow orbit as it passed in front of the star are so low, says Montet, that you would need 10,000 Kepler missions to see just one. “We figure that’s pretty unlikely.”

An interstellar cloud wandering between Earth and KIC 8462852 is also unlikely, Wright says. “If the interstellar medium had these sorts of clumps and knots, it should be a ubiquitous phenomenon. We would have known about this for decades.” While some quasars and pulsars appear to flicker because of intervening material, the variations are minute and nothing like the 20 percent dips seen in Tabby’s star.

A clump of gas and dust orbiting the star — possibly produced by a collision between comets — is a more likely candidate, although that doesn’t explain the century-long dimming. “Nothing explains all the effects we see,” says Montet.

Given the star’s unpredictable nature, astronomers need constant vigilance to solve this mystery. The American Association of Variable Star Observers is working with amateur astronomers to gather continuous data from backyard telescopes around the globe. Boyajian and colleagues are preparing to monitor KIC 8462852 with the Las Cumbres Observatory Global Telescope Network, a worldwide web of telescopes that can keep an incessant eye on the star. “At this point, that’s the only thing that’s going to help us figure out what it is,” she says.

Trio wins physics Nobel for math underlying exotic states of matter

The 2016 Nobel Prize in physics is awarded for discoveries of exotic states of matter known as topological phases that can help explain phenomena such as superconductivity.

The prize is shared among three researchers: David J. Thouless, of the University of Washington in Seattle, F. Duncan M. Haldane of Princeton University and J. Michael Kosterlitz of Brown University. The Royal Swedish Academy of Sciences announced the prize October 4.

At the heart of their work is topology, a branch of mathematics that describes steplike changes in a property. An object can have zero, one or two holes, for example, but not half a hole. This year’s Nobel laureates found that topological effects could explain behaviors seen in superconductors and superfluids. “Like most discoveries, you stumble onto them and you just come to realize there is something really interesting there,” Haldane said in a phone call during the announcement.

In some ways, hawks hunt like humans

A hunter’s gaze betrays its strategy. And tracking what an animal looks at when it’s hunting for prey has revealed foraging patterns in humans, other primates — and now, birds.

Suzanne Amador Kane of Haverford College in Pennsylvania and her colleagues watched archival footage of three raptor species hunting: northern goshawks (Accipiter gentilis), Cooper’s hawks (A. cooperii) and red-tailed hawks (Buteo jamaicensis). They also mounted a video camera to the head of a goshawk to record the bird’s perspective (a technique that’s proved useful in previous studies of attack behavior). The team noted how long birds spent fixating on specific points before giving up, moving their head and, thus, shifting their gaze.

When searching for prey, raptors don’t turn their heads in a predictable pattern. Instead, they appear to scan and fixate randomly based on what they see in their environment, Kane and her colleagues report November 16 in The Auk. In primates, a buildup of sensory information drives foraging animals to move their eyes in similar patterns.

Though the new study only examines three species and focuses on head tracking rather than eye tracking, Kane and her colleagues suggest that the same basic neural processes may drive search decisions of human and hawk hunters.

Ice gave Pluto a heavy heart

Pluto’s heart might carry a heavy burden.

Weight from massive deposits of frozen nitrogen, methane and carbon monoxide, built up billions of years ago, could have carved out the left half of the dwarf planet’s heart-shaped landscape, researchers report online November 30 in Nature.

The roughly 1,000-kilometer-wide frozen basin dubbed Sputnik Planitia was on display when the New Horizons spacecraft tore past in July 2015 (SN: 12/26/15, p. 16). Previous studies have proposed that the region could be a scar left by an impact with interplanetary debris (SN: 12/12/15, p. 10).

Sputnik Planitia sits in a cold zone, a prime location for ice to build up, planetary scientist Douglas Hamilton of the University of Maryland in College Park and colleagues calculate. Excess ice deposited early in the planet’s history would have led to a surplus of mass. Gravitational interactions between Pluto and its largest moon, Charon, slowed the planet’s rotation until that mass faced in the opposite direction from Charon. Once Charon became synced to Pluto’s rotation — it’s always over the same spot on Pluto — gravity would have held Sputnik Planitia in Pluto’s cold zone, attracting even more ice. As the ice cap grew, the weight could have depressed Pluto’s surface, creating the basin that exists today.

How scientists are hunting for a safer opioid painkiller

An opioid epidemic is upon us. Prescription painkillers such as fentanyl and morphine can ease terrible pain, but they can also cause addiction and death. The Centers for Disease Control and Prevention estimates that nearly 2 million Americans are abusing or addicted to prescription opiates. Politicians are attempting to stem the tide at state and national levels, with bills to change and monitor how physicians prescribe painkillers and to increase access to addiction treatment programs.

Those efforts may make access to painkillers more difficult for some. But pain comes to everyone eventually, and opioids are one of the best ways to make it go away.

Morphine is the king of pain treatment. “For hundreds of years people have used morphine,” says Lakshmi Devi, a pharmacologist at the Icahn School of Medicine at Mount Sinai in New York City. “It works, it’s a good drug, that’s why we want it. The problem is the bad stuff.”

The “bad stuff” includes tolerance — patients have to take higher and higher doses to relieve their pain. Drugs such as morphine depress breathing, an effect that can prove deadly. They also cause constipation, drowsiness and vomiting. But “for certain types of pain, there are no medications that are as effective,” says Bryan Roth, a pharmacologist and physician at the University of North Carolina at Chapel Hill. The trick is constructing a drug with all the benefits of an opioid painkiller, and few to none of the side effects. Here are three ways that scientists are searching for the next big pain buster, and three of the chemicals they’ve turned up.

Raid the chemical library
To find new options for promising drugs, scientists often look to chemical libraries of known molecules. “A pharmaceutical company will have libraries of a few million compounds,” Roth explains. Researchers comb through these libraries trying to find those compounds that connect to specific molecules in the body and brain.

When drugs such as morphine enter the brain, they bind to receptors on the outside of cells and cause cascades of chemical activity inside. Opiate drugs bind to three types of opiate receptors: mu, kappa and delta. The mu receptor type is the one associated with the pain-killing — and pleasure-causing — activities of opiates. Activation of this receptor type spawns two cascades of chemical activity. One, the Gi pathway, is associated with pain relief. The other — known as the beta-arrestin pathway — is associated with slowed breathing rate and constipation. So a winning candidate molecule would be one that triggered only the Gi pathway, without triggering beta-arrestin.
Roth and colleagues set out to find a molecule that fit those specifications. But instead of the intense, months-long process of experimentally screening molecules in a chemical library, Roth’s group chose a computational approach, screening more than 3 million compounds in a matter of days. The screen narrowed the candidates down to 23 molecules to test the old-fashioned way — both chemically and in mice. Each of these potential painkillers went through even more tests to find those with the strongest bond to the receptor and the highest potency.
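
As a loose illustration of that kind of triage (not the Roth lab’s actual pipeline), a virtual screen amounts to scoring an enormous library and keeping a short list of the best-ranked candidates. The compound names and scores below are invented:

```python
# Illustrative sketch only: ranking a hypothetical compound library by a
# predicted binding score and keeping the top hits for follow-up testing.
# Lower score = predicted tighter binding.
import random

random.seed(1)
library = {f"compound_{i:07d}": random.uniform(-12.0, -2.0)   # fake docking scores
           for i in range(1_000_000)}                          # stand-in for millions

# Keep the best-scoring candidates for bench and animal tests.
top_hits = sorted(library.items(), key=lambda kv: kv[1])[:23]

for name, score in top_hits[:5]:
    print(f"{name}: predicted score {score:.2f}")
```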

In the end, the team focused on a chemical called PZM21. It activates only the pathway associated with pain relief, and is an effective painkiller in mice. It does not depress breathing rate, and it might even avoid some of the addictive potential of other opiates, though Roth notes that further studies need to be done. He and his colleagues published their findings September 8 in Nature.

Letting the computer handle the initial screen is “a smart way of going about it,” notes Nathaniel Jeske, a neuropharmacologist at the University of Texas Health Science Center in San Antonio. But mice are only the first step. “I’m interested to see if the efficacy applies to different animals.”

Making an opiate 2.0
Screening millions of compounds is one way to find a new drug. But why buy new when you can give a chemical makeover to something you already have? This is a “standard medicinal chemistry approach,” Roth says: “Pick a known drug and make analogs [slightly tweaked structures], and that can work.”

That was the approach that Mei-Chuan Ko and his group at Wake Forest University School of Medicine in Winston-Salem, N.C., decided to take with the common opioid painkiller buprenorphine. “Compared to morphine or fentanyl, buprenorphine is safer,” Ko explains, “but it has abuse liability. Physicians still have concerns about the abuse and won’t prescribe it.” Buprenorphine is what’s called a partial agonist at the mu receptor — it can’t fully activate the receptor, even at the highest doses. So it’s an effective painkiller that is harder to overdose on — so much so that it’s used to treat addiction to other opiates. But it can still cause a high, so doctors still worry about people abusing the drug.

So to make a version of buprenorphine with lower addictive potential, Ko and his colleagues focused on a chemical known as BU08028. It’s structurally similar to buprenorphine, but it also hits another type of opioid receptor called the nociceptin-orphanin FQ peptide (or NOP) receptor.

The NOP receptor is not a traditional target. This is partially because its effect in rodents — usually the first recipients of a new drug — is “complicated,” says Ko. “It does kill pain at high doses but not at low doses.” In primates, however, it’s another matter. In tests in four monkeys, BU08028 killed pain effectively at low doses and didn’t suppress breathing. The monkeys also showed little interest in taking the drug voluntarily, which suggests it might not be as addictive as classic opioid drugs. Ko and his colleagues published their results in the Sept. 13 Proceedings of the National Academy of Sciences.*

Off the beaten path
Combing through chemical libraries or tweaking drugs that are already on the market takes advantage of systems that are already well-established. But sometimes, a tough question requires an entirely new approach. “You can either target the receptors you know and love … or you can do the complete opposite and see if there’s a new receptor system,” Devi says.

Jeske and his group chose the latter option. Of the three opiate receptor types — mu, kappa and delta — most drugs (and drug studies) focus on the mu receptor. Jeske’s group chose to investigate delta instead. They were especially interested in targeting delta receptors in the body — far away from the brain and its side effects.

The delta receptor has an unfortunate quirk. When activated by a drug, it can help kill pain. But most of the time, it can’t be activated at all. The receptor is protected — bound up tight by another molecule — and only released when an area is injured. So Jeske’s goal was to find out what was binding up the delta receptor, and figure out how to get rid of it.

Working in rat neurons, Jeske and his group found that when a molecule called GRK2 was around, the delta receptor was inactive. “Knock down GRK2 and the receptor works just fine,” Jeske says. By genetically knocking out GRK2 in rats, Jeske and his group left the delta receptor free to respond to a drug — and to prevent pain. The group published their results September 6 in Cell Reports.

It’s “a completely new target and that’s great,” says Devi. “But that new target with a drug is a tall order.” A single drug is unlikely to be able to both push away GRK2 and then activate the delta receptor to stop pain.

Jeske agrees that a single molecule probably couldn’t take on both roles. Instead, one drug to get rid of GRK2 would be given first, followed by another to activate the delta receptors.

Each drug development method has unearthed drug candidates with early promise. “We’ve solved these problems in mice and rats many times,” Devi notes. But whether sifting through libraries, tweaking older drugs or coming up with entirely new ones, the journey to the clinic has only just begun.

*Paul Czoty and Michael Nader, two authors on the PNAS paper, were on my Ph.D. dissertation committee. I have had neither direct nor indirect involvement with this research.

Evidence falls into place for once and future supercontinents

Look at any map of the Atlantic Ocean, and you might feel the urge to slide South America and Africa together. The two continents just beg to nestle next to each other, with Brazil’s bulge locking into West Africa’s dimple. That visible clue, along with several others, prompted Alfred Wegener to propose over a century ago that the continents had once been joined in a single enormous landmass. He called it Pangaea, or “all lands.”

Today, geologists know that Pangaea was just the most recent in a series of mighty supercontinents. Over hundreds of millions of years, enormous plates of Earth’s crust have drifted together and then apart. Pangaea ruled from roughly 400 million to about 200 million years ago. But wind the clock further back, and other supercontinents emerge. Between 1.3 billion and 750 million years ago, all the continents amassed in a great land known as Rodinia. Go back even further, about 1.4 billion years or more, and the crustal shards had arranged themselves into a supercontinent called Nuna.

Using powerful computer programs and geologic clues from rocks around the world, researchers are painting a picture of these long-lost worlds. New studies of magnetic minerals in rock from Brazil, for instance, are helping pin the ancient Amazon to a spot it once occupied in Nuna. Other recent research reveals the geologic stresses that finally pulled Rodinia apart, some 750 million years ago. Scientists have even predicted the formation of the next supercontinent — an amalgam of North America and Asia, evocatively named Amasia — some 250 million years from now.
Reconstructing supercontinents is like trying to assemble a 1,000-piece jigsaw puzzle after you’ve lost a bunch of the pieces and your dog has chewed up others. Still, by figuring out which puzzle pieces went where, geologists have been able to illuminate some of earth science’s most fundamental questions.
For one thing, continental drift, that gradual movement of landmasses across Earth’s surface, profoundly affected life by allowing species to move into different parts of the world depending on what particular landmasses happened to be joined. (The global distribution of dinosaur fossils is dictated by how continents were assembled when those great animals roamed.)

Supercontinents can also help geologists hunting for mineral deposits — imagine discovering gold ore of a certain age in the Amazon and using it to find another gold deposit in a distant landmass that was once joined to the Amazon. More broadly, shifting landmasses have reshaped the face of the planet — as they form, supercontinents push up mountains like the Appalachians, and as they break apart, they create oceans like the Atlantic.

“The assembly and breakup of these continents have profoundly influenced the evolution of the whole Earth,” says Johanna Salminen, a geophysicist at the University of Helsinki in Finland.
Push or pull
For centuries, geologists, biogeographers and explorers have tried to explain various features of the natural world by invoking lost continents. Some of the wilder concepts included Lemuria, a sunken realm between Madagascar and India that offered an out-there rationale for the presence of lemurs and lemurlike fossils in both places, and Mu, an underwater land supposedly described in ancient Mayan manuscripts. While those fantastic notions have fallen out of favor, scientists are exploring the equally mind-bending story of the supercontinents that actually existed.
Earth’s constantly shifting jigsaw puzzle of continents and oceans traces back to the fundamental forces of plate tectonics. The story begins in the centers of oceans, where hot molten rock wells up from deep inside the Earth along underwater mountain chains. The lava cools and solidifies into newborn ocean crust, which moves continually away from either side of the mountain ridge as if carried outward on a conveyor belt. Eventually, the moving ocean crust bumps into a continent, where it either stalls or begins diving beneath that continental crust in a process called subduction.

Those competing forces — pushing newborn crust away from the mid-ocean mountains and pulling older crust down through subduction — are constantly rearranging Earth’s crustal plates. That’s why North America and Europe are getting farther away from each other by a few centimeters each year as the Atlantic widens, and why the Pacific Ocean is shrinking, its seafloor sucked down by subduction along the Ring of Fire — looping from New Zealand to Japan, Alaska and Chile.

By running the process backward in time, geologists can begin to see how oceans and continents have jockeyed for position over millions of years. Computers calculate how plate positions shifted over time, based on the movements of today’s plates as well as geologic data that hint at their past locations.
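
The core geometric step in such reconstructions is rotating points on a sphere about an “Euler pole.” The sketch below shows that single operation, with an arbitrary pole and angle standing in for real plate-motion parameters:

```python
# A minimal sketch of the geometry behind plate reconstructions (not any
# specific research code): rotating a point on the globe about an Euler pole,
# the basic operation plate-tectonic software repeats over and over.
# The pole and rotation angle below are arbitrary illustrative numbers.
import numpy as np

def to_xyz(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def to_latlon(v):
    return np.degrees(np.arcsin(v[2])), np.degrees(np.arctan2(v[1], v[0]))

def rotate(point, pole, angle_deg):
    """Rodrigues' rotation of a unit vector about a unit-vector pole."""
    theta = np.radians(angle_deg)
    return (point * np.cos(theta)
            + np.cross(pole, point) * np.sin(theta)
            + pole * np.dot(pole, point) * (1 - np.cos(theta)))

point = to_xyz(-10.0, -50.0)        # a spot in South America (illustrative)
pole = to_xyz(60.0, -35.0)          # a made-up Euler pole
print(to_latlon(rotate(point, pole, 30.0)))  # its reconstructed position
```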

Those geologic clues — such as magnetic minerals in ancient rocks — are few and far between. But enough remain for researchers to start to cobble together the story of which crustal piece went where.

“To solve a jigsaw puzzle, you don’t necessarily need 100 percent of the pieces before you can look at it and say it’s the Mona Lisa,” says Brendan Murphy, a geophysicist at St. Francis Xavier University in Antigonish, Nova Scotia. “But you need some key pieces.” He adds: “With the eyes and nose, you have a chance.”

No place like Nuna
For ancient Nuna, scientists are starting to find the first of those key pieces. They may not reveal the Mona Lisa’s enigmatic smile, but they are at least starting to fill in a portrait of a long-vanished supercontinent.

Nuna came together starting around 2 billion years ago, with its heart a mash-up of Baltica (the landmass that today contains Scandinavia), Laurentia (which is now much of North America) and Siberia. Geologists argue over many things involving this first supercontinent, starting with its name. “Nuna” is from the Inuktitut language of the Arctic. It means lands bordering the northern oceans, so dubbed for the supercontinent’s Arctic-fringing components. But some researchers prefer to call it Columbia after the Columbia region of North America’s Pacific Northwest.

Whatever its moniker, Nuna/Columbia is an exercise in trying to get all the puzzle pieces to fit. Because Nuna existed so long ago, subduction has recycled many rocks of that age back into the deep Earth, erasing any record of what they were doing at the time. Geologists travel to rocks that remain in places like India, South America and North China, analyzing them for clues to where they were at the time of Nuna.

One of the most promising techniques targets magnetic minerals. Paleomagnetic studies use the minerals as tiny time capsule compasses, which recorded the direction of the magnetic field at the time the rocks formed. The minerals can reveal information about where those rocks used to be, including their latitude relative to where the Earth’s north magnetic pole was at the time.
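
That conversion typically assumes Earth’s field averages out to a simple dipole, in which case a rock’s magnetic inclination I and its ancient latitude are linked by tan(I) = 2 tan(latitude). A minimal sketch, using an example inclination rather than data from the study:

```python
# Minimal sketch of the standard dipole formula paleomagnetists use to turn
# a measured magnetic inclination into an ancient latitude: tan(I) = 2 tan(lat).
# The inclination value here is just an example.
import math

def paleolatitude(inclination_deg):
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2))

print(f"{paleolatitude(49.1):.1f} degrees")  # an inclination of ~49 deg -> ~30 deg latitude
```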

Salminen has been gathering paleomagnetic data from Nuna-age rocks in Brazil and western Africa. Not surprisingly, given their current lock-and-key configuration, these two chunks were once united as a single ancient continental block, known as the Congo/São Francisco craton. For millions of years, it shuffled around as a single geologic unit, occasionally merging with other blocks and then later splitting away.

Salminen has now figured out where the Congo/São Francisco puzzle piece fit in the jigsaw that made up Nuna. In 1.5-billion-year-old rocks in Brazil, she unearthed magnetic clues that placed the Congo/São Francisco craton at the southeastern tip of Baltica all those years ago. She and her colleagues reported the findings in November in Precambrian Research.

It is the first time scientists have gotten paleomagnetic information about where the craton may have been as far back as Nuna. “This is quite remarkable — it was really needed,” she says. “Now we can say Congo could have been there.” Like building out a jigsaw puzzle from its center, the work essentially expands Nuna’s core.

Rodinia’s radioactive decay
By around 1.3 billion years ago, Nuna was breaking apart, the pieces of the Mona Lisa face shattering and drifting away from each other. It took another 200 million years before they rejoined in the configuration known as Rodinia.

Recent research suggests that Rodinia may not have looked much different than Nuna, though. The Mona Lisa in its second incarnation may still have looked like the portrait of a woman — just maybe with a set of earrings dangling from her lobes.
Geologist Richard Ernst of Carleton University in Ottawa, Canada, recently explored the relative positions of Laurentia and Siberia between 1.9 billion and 720 million years ago, a period that spans both Nuna and Rodinia. Ernst’s group specializes in studying “large igneous provinces” — the huge outpourings of lava that build up over millions of years. Often the molten rock flows along sheetlike structures known as dikes, which funnel magma from deep in the Earth upward.

By using the radioactive decay of elements in the dike rock, such as uranium decaying to lead, scientists can precisely date when a dike formed. With enough dates on a particular dike, researchers can produce a sort of bar code that is unique to each dike. Later, when the dikes are broken apart and shifted over time, geologists can pinpoint the bar codes that match and thus line up parts of the crust that used to be together.
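
The dating itself rests on a standard equation: age = ln(1 + daughter/parent) / decay constant. Here is a minimal sketch for uranium-238 decaying to lead-206, with an example isotope ratio rather than a measured one:

```python
# A minimal sketch of the radiometric age equation behind dike "bar codes"
# (not the team's actual workflow): t = (1/lambda) * ln(1 + daughter/parent).
# Here uranium-238 decaying to lead-206; the isotope ratio is an example value.
import math

LAMBDA_U238 = 1.55125e-10   # decay constant of U-238, per year

def age_years(pb206_per_u238):
    return math.log(1 + pb206_per_u238) / LAMBDA_U238

ratio = 0.30                # hypothetical measured Pb-206/U-238 ratio
print(f"age ≈ {age_years(ratio) / 1e9:.2f} billion years")
```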

Ernst’s team found that dikes from Laurentia and Siberia matched during four periods between 1.87 billion and 720 million years ago — suggesting they were connected for that entire span, the team reported in June in Nature Geoscience. Such a long-term relationship suggests that Siberia and Laurentia may have stuck together through the Nuna-Rodinia transition, Ernst says.

Other parts of the puzzle tend to end up in the same relative locations as well, says Joseph Meert, a paleomagnetist at the University of Florida in Gainesville. In each supercontinent, Laurentia, Siberia and Baltica knit themselves together in roughly the same arrangement: Siberia and Baltica nestle like two opposing knobs on one end of Laurentia’s elongated blob. Meert calls these three continental fragments “strange attractors,” since they appear conjoined time after time.

It’s the outer edges of the jigsaw puzzle that change. Fragments like north China and southern Africa end up in different locations around the supercontinent core. “I call those bits the lonely wanderers,” Meert says.

Getting to know Pangaea
While some puzzle-makers try to sort out the reconstructions of past supercontinents, other geologists are exploring deeper questions about why big landmasses come together in the first place. And one place to look is Pangaea.

“Most people would accept what Pangaea looks like,” Murphy says. “But as soon as you start asking why it formed, how it formed and what processes are involved — then all of a sudden you run into problems.”
Around 550 million years ago, subduction zones around the edges of an ancient ocean began dragging that oceanic crust under continental crust. But around 400 million years ago, that subduction suddenly stopped. In a major shift, a different, much younger seafloor began to subduct instead beneath the continents. That young ocean crust kept getting sucked up until it all disappeared, and the continents were left merged in the giant mass of Pangaea.

Imagine in today’s world, if the Pacific stopped shrinking and all of a sudden the Atlantic started shrinking instead. “That’s quite a significant problem,” Murphy says. In unpublished work, he has been exploring the physics of how plates of oceanic and continental crust — which have different densities, buoyancies and other physical characteristics — could have interacted with one another in the run-up to Pangaea.

Supercontinent breakups are similarly complicated. Once all the land amasses in a single big chunk, it cannot stay together forever. In one scenario, its sheer bulk acts as an electric blanket, allowing heat from the deep Earth to pond up beneath it until things get too hot and the supercontinent splinters (SN: 1/21/17, p. 14). In another, physical stressors pull the supercontinent apart.

Peter Cawood, a geologist at the University of St. Andrews in Fife, Scotland, likes the second option. He has been studying mountain ranges that arose when the crustal plates that made up Rodinia collided, pushing up soaring peaks where they met. These include the Grenville mountain-building event of about 1 billion years ago, traces of which linger today in the eroded peaks of the Appalachians. Cawood and his colleagues analyzed the times at which such mountains appeared and put together a detailed timeline of what happened as Rodinia began to break apart.

They note that crustal plates began subducting around the edges of Rodinia right around the time of its breakup. That sucking down of crust caused the supercontinent to be pulled from all directions and eventually break apart, Cawood and his colleagues wrote in Earth and Planetary Science Letters in September. “The timing of major breakup corresponds with this timing of opposing subduction zones,” he says.
The future is Amasia
That stressful situation is similar to what the Pacific Ocean finds itself in today. Because it is flanked by subduction zones around the Ring of Fire, the Pacific Plate is shrinking over time. Some geologists predict that it will vanish entirely in the future, leaving North America and Asia to merge into the next supercontinent, Amasia. Others have devised different possible paths to Amasia, such as closing the Arctic Ocean rather than the Pacific.

“Speculation about the future supercontinent Amasia is exactly that, speculation,” says geologist Ross Mitchell of Curtin University in Perth, Australia, who in 2012 helped describe the mechanics of how Amasia might arise. “But there’s hard science behind the conjecture.”

For instance, Masaki Yoshida of the Japan Agency for Marine-Earth Science and Technology in Yokosuka recently used sophisticated computer models to analyze how today’s continents would continue to move atop the flowing heat of the deep Earth. He combined modern-day plate motions with information on how that internal planetary heat churns in three dimensions, then ran the whole scenario into the future. In a paper in the September Geology, Yoshida describes how North America, Eurasia, Australia and Africa will end up merged in the Northern Hemisphere.

No matter where the continents are headed, they are destined to reassemble. Plate tectonics says it will happen — and a new supercontinent will shape the face of the Earth. It might not look like the Mona Lisa, but it might just be another masterpiece.

There’s still a lot we don’t know about the proton

Nuclear physicist Evangeline Downie hadn’t planned to study one of the thorniest puzzles of the proton.

But when opportunity knocked, Downie couldn’t say no. “It’s the proton,” she exclaims. The mysteries that still swirl around this jewel of the subatomic realm were too tantalizing to resist. The plentiful particles make up much of the visible matter in the universe. “We’re made of them, and we don’t understand them fully,” she says.

Many physicists delving deep into the heart of matter in recent decades have been lured to the more exotic and unfamiliar subatomic particles: mesons, neutrinos and the famous Higgs boson — not the humble proton.
But rather than chasing the rarest of the rare, scientists like Downie are painstakingly scrutinizing the proton itself with ever-higher precision. In the process, some of these proton enthusiasts have stumbled upon problems in areas of physics that scientists thought they had figured out.

Surprisingly, some of the particle’s most basic characteristics are not fully pinned down. The latest measurements of its radius disagree with one another by a wide margin, for example, a fact that captivated Downie. Likewise, scientists can’t yet explain the source of the proton’s spin, a basic quantum property. And some physicists have a deep but unconfirmed suspicion that the seemingly eternal particles don’t live forever — protons may decay. Such a decay is predicted by theories that unite disparate forces of nature under one grand umbrella. But decay has not yet been witnessed.

Like the base of a pyramid, the physics of the proton serves as a foundation for much of what scientists know about the behavior of matter. To understand the intricacies of the universe, says Downie, of George Washington University in Washington, D.C., “we have to start with, in a sense, the simplest system.”

Sizing things up
For most of the universe’s history, protons have been VIPs — very important particles. They formed just millionths of a second after the Big Bang, once the cosmos cooled enough for the positively charged particles to take shape. But protons didn’t step into the spotlight until about 100 years ago, when Ernest Rutherford bombarded nitrogen with radioactively produced particles, breaking up the nuclei and releasing protons.

A single proton in concert with a single electron makes up hydrogen — the most plentiful element in the universe. One or more protons are present in the nucleus of every atom. Each element has a unique number of protons, signified by an element’s atomic number. In the core of the sun, fusing protons generate heat and light needed for life to flourish. Lone protons are also found as cosmic rays, whizzing through space at breakneck speeds, colliding with Earth’s atmosphere and producing showers of other particles, such as electrons, muons and neutrinos.

In short, protons are everywhere. Even minor tweaks to scientists’ understanding of the minuscule particle, therefore, could have far-reaching implications. So any nagging questions, however small in scale, can get proton researchers riled up.

A disagreement of a few percent in measurements of the proton’s radius has attracted intense interest, for example. Until several years ago, scientists agreed: The proton’s radius was about 0.88 femtometers, or 0.88 millionths of a billionth of a meter — about a trillionth the width of a poppy seed.
But that neat picture was upended in the span of a few hours, in May 2010, at the Precision Physics of Simple Atomic Systems conference in Les Houches, France. Two teams of scientists presented new, more precise measurements, unveiling what they thought would be the definitive size of the proton. Instead the figures disagreed by about 4 percent (SN: 7/31/10, p. 7). “We both expected that we would get the same number, so we were both surprised,” says physicist Jan Bernauer of MIT.

By itself, a slight revision of the proton’s radius wouldn’t upend physics. But despite extensive efforts, the groups can’t explain why they get different numbers. As researchers have eliminated simple explanations for the impasse, they’ve begun wondering if the mismatch could be the first hint of a breakdown that could shatter accepted tenets of physics.

The two groups each used different methods to size up the proton. In an experiment at the MAMI particle accelerator in Mainz, Germany, Bernauer and colleagues estimated the proton’s girth by measuring how much electrons’ trajectories were deflected when fired at protons. That test found the expected radius of about 0.88 femtometers (SN Online: 12/17/10).

But a team led by physicist Randolf Pohl of the Max Planck Institute of Quantum Optics in Garching, Germany, used a new, more precise method. The researchers created muonic hydrogen, a proton that is accompanied not by an electron but by a heftier cousin — a muon.

In an experiment at the Paul Scherrer Institute in Villigen, Switzerland, Pohl and collaborators used lasers to bump the muons to higher energy levels. The amount of energy required depends on the size of the proton. Because the more massive muon hugs closer to the proton than electrons do, the energy levels of muonic hydrogen are more sensitive to the proton’s size than ordinary hydrogen, allowing for measurements 10 times as precise as electron-scattering measurements.
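
The source of that sensitivity can be sketched in a few lines of arithmetic: the orbit shrinks in proportion to the orbiting particle’s reduced mass, and the particle’s overlap with the proton grows roughly as the cube of that shrinkage. A back-of-the-envelope illustration, not a calculation from either experiment:

```python
# Back-of-the-envelope sketch of why muonic hydrogen is so sensitive to proton
# size: the orbit radius scales inversely with the orbiting particle's reduced mass.
M_E, M_MU, M_P = 0.5110, 105.66, 938.27   # particle masses in MeV/c^2

def reduced_mass(m, M):
    return m * M / (m + M)

ratio = reduced_mass(M_MU, M_P) / reduced_mass(M_E, M_P)
print(f"muon reduced mass / electron reduced mass ≈ {ratio:.0f}")
print(f"so the muon orbits ~{ratio:.0f} times closer to the proton,")
print(f"and its overlap with the proton grows ~{ratio**3 / 1e6:.0f} million-fold")
```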

Pohl’s results suggested a smaller proton radius, about 0.841 femtometers, a stark difference from the other measurement. Follow-up measurements of muonic deuterium — which has a proton and a neutron in its nucleus — also revealed a smaller than expected size, he and collaborators reported last year in Science. Physicists have racked their brains to explain why the two measurements don’t agree. Experimental error could be to blame, but no one can pinpoint its source. And the theoretical physics used to calculate the radius from the experimental data seems solid.

Now, more outlandish possibilities are being tossed around. An unexpected new particle that interacts with muons but not electrons could explain the difference (SN: 2/23/13, p. 8). That would be revolutionary: Physicists believe that electrons and muons should behave identically in particle interactions. “It’s a very sacred principle in theoretical physics,” says John Negele, a theoretical particle physicist at MIT. “If there’s unambiguous evidence that it’s been broken, that’s really a fundamental discovery.”

But established physics theories die hard. Shaking the foundations of physics, Pohl says, is “what I dream of, but I think that’s not going to happen.” Instead, he suspects, the discrepancy is more likely to be explained through minor tweaks to the experiments or the theory.

The alluring mystery of the proton radius reeled Downie in. During conversations in the lab with some fellow physicists, she learned of an upcoming experiment that could help settle the issue. The experiment’s founders were looking for collaborators, and Downie jumped at the chance. The Muon Proton Scattering Experiment, or MUSE, to take place at the Paul Scherrer Institute beginning in 2018, will scatter both electrons and muons off protons and compare the results. It offers a way to test whether the two particles behave differently, says Downie, who is now a spokesperson for MUSE.

A host of other experiments are in progress or planning stages. Scientists with the Proton Radius Experiment, or PRad, located at Jefferson Lab in Newport News, Va., hope to improve on Bernauer and colleagues’ electron-scattering measurements. PRad researchers are analyzing their data and should have a new number for the proton radius soon.

But for now, the proton’s identity crisis, at least regarding its size, remains. That poses problems for ultrasensitive tests of one of physicists’ most essential theories. Quantum electrodynamics, or QED, the theory that unites quantum mechanics and Albert Einstein’s special theory of relativity, describes the physics of electromagnetism on small scales. Using this theory, scientists can calculate the properties of quantum systems, such as hydrogen atoms, in exquisite detail — and so far the predictions match reality. But such calculations require some input — including the proton’s radius. So before physicists can subject the theory to even more stringent tests, they need to pin down the proton’s size.
Spin doctors
Even if scientists eventually sort out the proton’s size snags, there’s much left to understand. Dig deep into the proton’s guts, and the seemingly simple particle becomes a kaleidoscope of complexity. Rattling around inside each proton is a trio of particles called quarks: one negatively charged “down” quark and two positively charged “up” quarks. Neutrons, by contrast, contain two down quarks and one up quark.

Yet even the quark-trio picture is too simplistic. In addition to the three quarks that are always present, a chaotic swarm of transient particles churns within the proton. Evanescent throngs of additional quarks and their antimatter partners, antiquarks, continually swirl into existence, then annihilate each other. Gluons, the particle “glue” that holds the proton together, careen between particles. Gluons are the messengers of the strong nuclear force, an interaction that causes quarks to fervently attract one another.
As a result of this chaos, the properties of protons — and neutrons as well — are difficult to get a handle on. One property, spin, has taken decades of careful investigation, and it’s still not sorted out. Spin makes quantum particles seem to whirl at blistering speed, like the Earth rotating about its axis. This spin produces angular momentum — a quality of a rotating object that, for example, keeps a top revolving until friction slows it. The spin also makes protons behave like tiny magnets, because a rotating electric charge produces a magnetic field. This property is the key to the medical imaging procedure called magnetic resonance imaging, or MRI.
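
The MRI connection runs through that tiny magnet: in a magnetic field, proton spins precess at a frequency proportional to the field strength, roughly 42.6 megahertz per tesla. The short sketch below is illustrative only; the scanner field strengths are typical clinical values, not details from the article.
```python
# Proton spins precess ("wobble") in a magnetic field at the Larmor frequency,
# which is what an MRI scanner detects. For protons, the textbook value is
# about 42.58 MHz per tesla of applied field.
GYROMAGNETIC_MHZ_PER_TESLA = 42.58

def larmor_frequency_mhz(field_tesla):
    """Precession frequency of proton spins in a magnetic field of the given strength."""
    return GYROMAGNETIC_MHZ_PER_TESLA * field_tesla

for field in (1.5, 3.0):   # typical clinical scanner field strengths, in tesla
    print(f"{field} T scanner tunes to about {larmor_frequency_mhz(field):.0f} MHz")
```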

But, like nearly everything quantum, there’s some weirdness mixed in: There’s no actual spinning going on. Because fundamental particles like quarks don’t have a finite physical size — as far as scientists know — they can’t twirl. Despite the lack of spinning, the particles still behave like they have a spin, which can take on only certain values: integer multiples of 1/2.

Quarks have a spin of 1/2, and gluons a spin of 1. These spins combine to help yield the proton’s total spin. In addition, just as the Earth is both spinning about its own axis and orbiting the sun, quarks and gluons may also circle about the proton’s center, producing additional angular momentum that can contribute to the proton’s total spin.

Somehow, the spin and orbital motion of quarks and gluons within the proton combine to produce its spin of 1/2. Originally, physicists expected that the explanation would be simple. The only particles that mattered, they thought, were the proton’s three main quarks, each with a spin of 1/2. If two of those spins were oriented in opposite directions, they could cancel one another out to produce a total spin of 1/2. But experiments beginning in the 1980s showed that “this picture was very far from true,” says theoretical high-energy physicist Juan Rojo of Vrije University Amsterdam. Surprisingly, only a small fraction of the spin seemed to be coming from the quarks, befuddling scientists with what became known as the “spin crisis” (SN: 9/6/97, p. 158). Neutron spin was likewise enigmatic.
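
One standard piece of bookkeeping, commonly used in the field though not written out in the article, splits the proton’s spin of 1/2 into quark spin, gluon spin and the orbital motion of both:
```latex
% Proton spin sum rule (the Jaffe-Manohar decomposition), shown here for
% orientation; the article does not write it out explicitly.
\frac{1}{2}
  \;=\; \underbrace{\tfrac{1}{2}\,\Delta\Sigma}_{\text{quark spin}}
  \;+\; \underbrace{\Delta G}_{\text{gluon spin}}
  \;+\; \underbrace{L_q + L_g}_{\text{orbital motion of quarks and gluons}}
```
In the naive three-quark picture, the quark-spin term alone would supply the entire 1/2; the spin crisis was the discovery that it falls far short.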

Scientists’ next hunch was that gluons contribute to the proton’s spin. “Verifying this hypothesis was very difficult,” Rojo says. It required experimental studies at the Relativistic Heavy Ion Collider, RHIC, a particle accelerator at Brookhaven National Laboratory in Upton, N.Y.

In these experiments, scientists collided protons that were polarized: The two protons’ spins were either aligned or pointed in opposite directions. Researchers counted the products of those collisions and compared the results for aligned and opposing spins. The results revealed how much of the spin comes from gluons. According to an analysis by Rojo and colleagues, published in Nuclear Physics B in 2014, gluons make up about 35 percent of the proton’s spin. Since the quarks make up about 25 percent, that leaves another 40 percent still unaccounted for.

“We have absolutely no idea how the entire spin is made up,” says nuclear physicist Elke-Caroline Aschenauer of Brookhaven. “We maybe have understood a small fraction of it.” That’s because each quark or gluon carries a certain fraction of the proton’s energy, and the lowest energy quarks and gluons cannot be spotted at RHIC. A proposed collider, called the Electron-Ion Collider (location to be determined), could help scientists investigate the neglected territory.

The Electron-Ion Collider could also allow scientists to map the still-unmeasured orbital motion of quarks and gluons, which may contribute to the proton’s spin as well.
An unruly force
Experimental physicists get little help from theoretical physics when attempting to unravel the proton’s spin and its other perplexities. “The proton is not something you can calculate from first principles,” Aschenauer says. Quantum chromodynamics, or QCD — the theory of the quark-corralling strong force transmitted by gluons — is an unruly beast. It is so complex that scientists can’t directly solve the theory’s equations.

The difficulty lies with the behavior of the strong force. As long as quarks and their companions stick relatively close, they are happy and can mill about the proton at will. But absence makes the heart grow fonder: The farther apart the quarks get, the more insistently the strong force pulls them back together, containing them within the proton. This behavior explains why no one has found a single quark in isolation. It also makes the proton’s properties especially difficult to calculate. Without accurate theoretical calculations, scientists can’t predict what the proton’s radius should be, or how the spin should be divvied up.
To simplify the math of the proton, physicists use a technique called lattice QCD, in which they imagine that the world is made of a grid of points in space and time (SN: 8/7/04, p. 90). A quark can sit at one point or another in the grid, but not in the spaces in between. Time, likewise, proceeds in jumps. In such a situation, QCD becomes more manageable, though calculations still require powerful supercomputers.
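
To get a feel for what “a grid of points” means, here is a toy sketch, purely illustrative and nothing like a production lattice QCD code, which evolves quark and gluon fields on a four-dimensional grid using Monte Carlo sampling:
```python
import numpy as np

# Toy picture of the lattice idea: a field that exists only at discrete grid
# points, with derivatives replaced by finite differences between neighboring
# sites. (Purely illustrative; real lattice QCD treats quark and gluon fields
# on a four-dimensional grid and samples configurations on supercomputers.)
N_SITES = 16       # number of lattice points along one direction
SPACING = 0.1      # lattice spacing, in arbitrary units

positions = np.arange(N_SITES) * SPACING
field = np.sin(2 * np.pi * positions / (N_SITES * SPACING))  # field sampled only at grid points

# Forward-difference "derivative": each site only knows about its neighbor,
# not the continuum in between (periodic boundary via np.roll).
derivative = (np.roll(field, -1) - field) / SPACING

print("field at first four sites:", np.round(field[:4], 3))
print("discrete derivative there:", np.round(derivative[:4], 3))
```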

Lattice QCD calculations of the proton’s spin are making progress, but there’s still plenty of uncertainty. In 2015, theoretical particle and nuclear physicist Keh-Fei Liu and colleagues calculated the spin contributions from the gluons, the quarks’ spins and the quarks’ orbital motion, reporting the results in Physical Review D. By their calculation, about half of the spin comes from the quarks’ motion within the proton, about a quarter from the quarks’ spin, with the last quarter or so from the gluons. The numbers don’t exactly match the experimental measurements, but that’s understandable — the lattice QCD numbers are still fuzzy. The calculation relies on various approximations, so it “is not cast in stone,” says Liu, of the University of Kentucky in Lexington.

Death of a proton
Although protons seem to live forever, scientists have long questioned that immortality. Some popular theories predict that protons decay, disintegrating into other particles over long timescales. Yet despite extensive searches, no hint of this demise has materialized.

A class of ideas known as grand unified theories predict that protons eventually succumb. These theories unite three of the forces of nature, creating a single framework that could explain electromagnetism, the strong nuclear force and the weak nuclear force, which is responsible for certain types of radioactive decay. (Nature’s fourth force, gravity, is not yet incorporated into these models.) Under such unified theories, the three forces reach equal strengths at extremely high energies. Such energetic conditions were present in the early universe — well before protons formed — just a trillionth of a trillionth of a trillionth of a second after the Big Bang. As the cosmos cooled, those forces would have separated into three different facets that scientists now observe.
“We have a lot of circumstantial evidence that something like unification must be happening,” says theoretical high-energy physicist Kaladi Babu of Oklahoma State University in Stillwater. Beyond the appeal of uniting the forces, grand unified theories could explain some curious coincidences of physics, such as the fact that the proton’s electric charge precisely balances the electron’s charge. Another bonus is that the particles in grand unified theories fill out a family tree, with quarks becoming the kin of electrons, for example.

Under these theories, a decaying proton would disintegrate into other particles, such as a positron (the antimatter version of an electron) and a particle called a pion, composed of a quark and an antiquark, which itself eventually decays. If such a grand unified theory is correct and protons do decay, the process must be extremely rare — protons must live a very long time, on average, before they break down. If most protons decayed rapidly, atoms wouldn’t stick around long either, and the matter that makes up stars, planets — even human bodies — would be falling apart left and right.

Protons have existed for 13.8 billion years, since just after the Big Bang. So they must live exceedingly long lives, on average. But the particles could perish at even longer timescales. If they do, scientists should be able to monitor many particles at once to see a few protons bite the dust ahead of the curve (SN: 12/15/79, p. 405). But searches for decaying protons have so far come up empty.

Still, the search continues. To hunt for decaying protons, scientists go deep underground, for example, to a mine in Hida, Japan. There, at the Super-Kamiokande experiment (SN: 2/18/17, p. 24), they monitor a giant tank of water — 50,000 metric tons’ worth — waiting for a single proton to wink out of existence. After watching that water tank for nearly two decades, the scientists reported in the Jan. 1 Physical Review D that protons must live longer than 1.6 × 10^34 years on average, assuming they decay predominantly into a positron and a pion.
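
That limit becomes vivid with a back-of-envelope count, a rough sketch that ignores detection efficiency and the smaller fiducial volume actually analyzed: the tank holds on the order of 10^34 protons, so even a lifetime right at the bound would yield only about one decay per year.
```python
# Back-of-envelope: protons in 50,000 metric tons of water, and how many
# decays per year the reported lifetime bound would allow. Ignores detection
# efficiency and the smaller fiducial volume Super-K actually analyzes.
AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G = 18.0
PROTONS_PER_WATER_MOLECULE = 10    # 2 hydrogen nuclei + 8 protons in oxygen

tank_mass_g = 50_000 * 1e6                                  # 50,000 metric tons in grams
molecules = tank_mass_g / WATER_MOLAR_MASS_G * AVOGADRO
protons = molecules * PROTONS_PER_WATER_MOLECULE
print(f"protons in the tank: ~{protons:.1e}")               # ~1.7e34

lifetime_years = 1.6e34                                     # Super-K's lower bound
print(f"expected decays per year at that lifetime: ~{protons / lifetime_years:.1f}")
```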

Experimental limits on the proton lifetime “are sort of painting the theorists into a corner,” says Ed Kearns of Boston University, who searches for proton decay with Super-K. If a new theory predicts a proton lifetime shorter than what Super-K has measured, it’s wrong. Physicists must go back to the drawing board until they come up with a theory that agrees with Super-K’s proton-decay drought.

Many grand unified theories that remain standing in the wake of Super-K’s measurements incorporate supersymmetry, the idea that each known particle has another, more massive partner. In such theories, those new particles are additional pieces in the puzzle, fitting into an even larger family tree of interconnected particles. But theories that rely on supersymmetry may be in trouble. “We would have preferred to see supersymmetry at the Large Hadron Collider by now,” Babu says, referring to the particle accelerator located at the European particle physics lab, CERN, in Geneva, which has consistently come up empty in supersymmetry searches since it turned on in 2009 (SN: 10/1/16, p. 12).

But supersymmetric particles could simply be too massive for the LHC to find. And some grand unified theories that don’t require supersymmetry still remain viable. Versions of these theories predict proton lifetimes within reach of an upcoming generation of experiments. Scientists plan to follow up Super-K with Hyper-K, with an even bigger tank of water. And DUNE, the Deep Underground Neutrino Experiment, planned for installation in a former gold mine in Lead, S.D., will use liquid argon to detect protons decaying into particles that the water detectors might miss.
If protons do decay, the universe will become frail in its old age. According to Super-K, sometime well after its 10^34 birthday, the cosmos will become a barren sea of light. Stars, planets and life will disappear. If seemingly dependable protons give in, it could spell the death of the universe as we know it.

Although protons may eventually become extinct, proton research isn’t going out of style anytime soon. Even if scientists resolve the dilemmas of radius, spin and lifetime, more questions will pile up — it’s part of the labyrinthine task of studying quantum particles that multiply in complexity the closer scientists look. These deeper studies are worthwhile, says Downie. The inscrutable proton is “the most fundamental building block of everything, and until we understand that, we can’t say we understand anything else.”