How fingerprints form was a mystery — until now

Scientists have finally figured out how those arches, loops and whorls formed on your fingertips.

While in the womb, fingerprint-defining ridges expand outward in waves starting from three different points on each fingertip. The raised skin arises in a striped pattern thanks to interactions between three molecules that follow what’s known as a Turing pattern, researchers report February 9 in Cell. How those ridges spread from their starting sites — and merge — determines the overarching fingerprint shape.

Fingerprints are unique and last for a lifetime. They’ve been used to identify individuals since the 1800s. Several theories have been put forth to explain how fingerprints form, including spontaneous skin folding, molecular signaling and the idea that ridge pattern may follow blood vessel arrangements.

Scientists knew that the ridges that characterize fingerprints begin to form as downward growths into the skin, like trenches. Over the few weeks that follow, the quickly multiplying cells in the trenches start growing upward, resulting in thickened bands of skin.

Since budding fingerprint ridges and developing hair follicles have similar downward structures, researchers in the new study compared cells from the two locations. The team found that both sites share some types of signaling molecules — messengers that transfer information between cells — including three known as WNT, EDAR and BMP. Further experiments revealed that WNT tells cells to multiply, forming ridges in the skin, and to produce EDAR, which in turn further boosts WNT activity. BMP thwarts these actions.

To examine how these signaling molecules might interact to form patterns, the team adjusted the molecules’ levels in mice. Mice don’t have fingerprints, but their toes have striped ridges in the skin comparable to human prints. “We turn a dial — or molecule — up and down, and we see the way the pattern changes,” says developmental biologist Denis Headon of the University of Edinburgh.

Increasing EDAR resulted in thicker, more spaced-out ridges, while decreasing it led to spots rather than stripes. The opposite occurred with BMP, since it hinders EDAR production.

That switch between stripes and spots is a signature change seen in systems governed by Turing reaction-diffusion, Headon says. This mathematical theory, proposed in the 1950s by British mathematician Alan Turing, describes how chemicals interact and spread to create patterns seen in nature (SN: 7/2/10). When tested, however, the theory explains only some of those patterns (SN: 1/21/14).
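The stripes-versus-spots switch can be reproduced with a toy reaction-diffusion model. Below is a minimal sketch using the classic Gray-Scott system, a standard two-chemical Turing model. It is not the WNT/EDAR/BMP network itself; the feed and kill parameters f and k stand in, loosely, for the molecule levels the researchers dialed up and down.

```python
import numpy as np

def laplacian(Z):
    # Discrete 5-point Laplacian with periodic (wrap-around) boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(n=96, steps=3000, f=0.035, k=0.060, Du=0.16, Dv=0.08, seed=0):
    """Evolve a Gray-Scott reaction-diffusion system on an n x n grid.

    U is consumed to make V (the "activator"); different (f, k) pairs
    settle into spots or stripe-like labyrinths -- the same qualitative
    switch the mouse experiments produced by adjusting EDAR and BMP.
    """
    rng = np.random.default_rng(seed)
    U = np.ones((n, n))
    V = np.zeros((n, n))
    m = n // 2                       # seed a perturbed patch in the center
    U[m - 5:m + 5, m - 5:m + 5] = 0.50
    V[m - 5:m + 5, m - 5:m + 5] = 0.25
    U += 0.02 * rng.standard_normal((n, n))
    for _ in range(steps):           # explicit Euler updates, dt = 1
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + f * (1 - U)
        V += Dv * laplacian(V) + uvv - (f + k) * V
    return U, V
```

Plotting V after a few thousand steps reveals the pattern; sweeping f and k moves the system between spotted and striped regimes, analogous to the thicker-ridges-to-spots transition described above.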

Mouse digits, however, are too tiny to give rise to the elaborate shapes seen in human fingerprints. So, the researchers used computer models to simulate a Turing pattern spreading from the three previously known ridge initiation sites on the fingertip: the center of the finger pad, under the nail and at the joint’s crease nearest the fingertip.

By altering the relative timing, location and angle of these starting points, the team could create each of the three most common fingerprint patterns — arches, loops and whorls — and even rarer ones. Arches, for instance, can form when finger pad ridges get a slow start, allowing ridges originating from the crease and under the nail to occupy more space.

“It’s a very well-done study,” says developmental and stem cell biologist Sarah Millar, director of the Black Family Stem Cell Institute at the Icahn School of Medicine at Mount Sinai in New York City.

Controlled competition between molecules also determines hair follicle distribution, says Millar, who was not involved in the work. The new study, she says, “shows that the formation of fingerprints follows along some basic themes that have already been worked out for other types of patterns that we see in the skin.”

Millar notes that people with gene mutations that affect WNT and EDAR have skin abnormalities. “The idea that those molecules might be involved in fingerprint formation was floating around,” she says.

Overall, Headon says, the team aims to aid formation of skin structures, like sweat glands, when they’re not developing properly in the womb, and maybe even after birth.

“What we want to do, in broader terms, is understand how the skin matures.”

The deadly VEXAS syndrome is more common than doctors thought

A mysterious new disease may be to blame for severe, unexplained inflammation in older men. Now, researchers have their first good look at who the disease strikes, and how often.

VEXAS syndrome, an illness discovered just two years ago, affects nearly 1 in 4,000 men over 50 years old, scientists estimate January 24 in JAMA. The disease also occurs in older women, though less frequently. Altogether, more than 15,000 people in the United States may be suffering from the syndrome, says study coauthor David Beck, a clinical geneticist at NYU Langone Health in New York City. Those numbers indicate that physicians should be on the lookout for VEXAS, Beck says. “It’s underrecognized and underdiagnosed. A lot of physicians aren’t yet aware of it.”

Beck’s team reported discovering VEXAS syndrome in 2020, linking mutations in a gene called UBA1 to a suite of symptoms including fever, low blood cell count and inflammation. His team’s new study is the first to estimate how often VEXAS occurs in the general population — and the results are surprising. “It’s more prevalent than we suspected,” says Emma Groarke, a hematologist at the National Institutes of Health in Bethesda, Md., who was not involved with the study.

VEXAS tends to show up later in life — after people somehow acquire UBA1 mutations in their blood cells. Patients may experience overwhelming fatigue, lethargy and skin rashes, Beck says. “The disease is progressive, and it’s severe.” VEXAS can also be deadly. Once a person’s symptoms begin, the median survival time is about 10 years, his team has found.

Until late 2020, no one knew that there was a genetic thread connecting VEXAS syndrome’s otherwise unexplained symptoms. In fact, individuals may be diagnosed with other conditions, including polyarteritis nodosa, an inflammatory blood disease, and relapsing polychondritis, a connective tissue disorder, before being diagnosed with VEXAS.

To ballpark the number of VEXAS-affected individuals, Beck’s team combed through electronic health records of more than 160,000 people in Pennsylvania, in a collaboration with the NIH and Geisinger Health. In people over 50, the disease-causing UBA1 mutations showed up in roughly 1 in 4,000 men. Among women in that age bracket, about 1 in 26,000 had the mutations.
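Turning those rates into a national headcount is simple arithmetic. A rough sketch follows; the over-50 population figures are round-number assumptions of mine, not values from the study:

```python
# Back-of-envelope VEXAS headcount from the reported rates.
# U.S. over-50 population sizes below are rough assumptions, not study data.
men_over_50 = 55_000_000
women_over_50 = 60_000_000

affected_men = men_over_50 / 4_000       # ~1 in 4,000 men over 50
affected_women = women_over_50 / 26_000  # ~1 in 26,000 women over 50

total = affected_men + affected_women
print(round(total))  # on the order of 15,000-16,000 people
```

Even with generous uncertainty in the population figures, the estimate lands in the ballpark of the study’s “more than 15,000” figure.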

A genetic test of the blood can help doctors diagnose VEXAS, and treatments like steroids and other immunosuppressive drugs, which tamp down inflammation, can ease symptoms. Groarke and her NIH colleagues have also started a small phase II clinical trial testing bone marrow transplants as a way to swap patients’ diseased blood cells for healthy ones.

Beck says he hopes to raise awareness about the disease, though he recognizes that there’s much more work to do. In his team’s study, for instance, the vast majority of participants were white Pennsylvanians, so scientists don’t know how the disease affects other populations. Researchers also don’t know what spurs the blood cell mutations, nor how they spark an inflammatory frenzy in the body.

“The more patients that are diagnosed, the more we’ll learn about the disease,” Beck says. “This is just one step in the process of finding more effective therapies.”

Too much of this bacteria in the nose may worsen allergy symptoms

A type of bacteria that’s overabundant in the nasal passages of people with hay fever may worsen symptoms. Targeting that bacteria may provide a way to rein in ever-running noses.

Hay fever occurs when allergens, such as pollen or mold, trigger an inflammatory reaction in the nasal passages, leading to itchiness, sneezing and overflowing mucus. Researchers analyzed the composition of the microbial population in the noses of 55 people who have hay fever and those of 105 people who don’t. There was less diversity in the nasal microbiome of people who have hay fever and a whole lot more of a bacterial species called Streptococcus salivarius, the team reports online January 12 in Nature Microbiology.

S. salivarius was 17 times more abundant in the noses of allergy sufferers than the noses of those without allergies, says Michael Otto, a molecular microbiologist at the National Institute of Allergy and Infectious Diseases in Bethesda, Md. That imbalance appears to play a part in further provoking allergy symptoms. In laboratory experiments with allergen-exposed cells that line the airways, S. salivarius boosted the cells’ production of proteins that promote inflammation.

And it turns out that S. salivarius really likes runny noses. One prominent, unpleasant symptom of hay fever is the overproduction of nasal discharge. The researchers found that S. salivarius binds very well to airway-lining cells exposed to an allergen and slathered in mucus — better than a comparison bacterium that also resides in the nose.

The close contact appears to be what makes the difference. It means that substances on S. salivarius’ surface that can drive inflammation — common among many bacteria — are close enough to exert their effect on cells, Otto says.

Hay fever, which disrupts daily activities and disturbs sleep, is estimated to affect as many as 30 percent of adults in the United States. The new research opens the door “to future studies targeting this bacteria” as a potential treatment for hay fever, says Mahboobeh Mahdavinia, a physician scientist who studies immunology and allergies at Rush University Medical Center in Chicago.

But any treatment would need to avoid harming the “good” bacteria that live in the nose, says Mahdavinia, who was not involved in the research.

The proteins on S. salivarius’ surface that are important to its ability to attach to mucus-covered cells might provide a target, says Otto. The bacteria bind to proteins called mucins found in the slimy, runny mucus. By learning more about S. salivarius’ surface proteins, Otto says, it may be possible to come up with “specific methods to block that adhesion.”

Lots of Tatooine-like planets around binary stars may be habitable

SEATTLE — Luke Skywalker’s home planet in Star Wars is the stuff of science fiction. But Tatooine-like planets in orbit around pairs of stars might be our best bet in the search for habitable planets beyond our solar system.

Many stars in the universe come in pairs. And lots of those should have planets orbiting them (SN: 10/25/21). That means there could be many more planets orbiting around binaries than around solitary stars like ours. But until now, no one had a clear idea about whether those planets’ environments could be conducive to life. New computer simulations suggest that, in many cases, life could imitate art.

Earthlike planets orbiting some configurations of binary stars can stay in stable orbits for at least a billion years, researchers reported January 11 at the American Astronomical Society meeting. That sort of stability, the researchers propose, would be enough to potentially allow life to develop, provided the planets aren’t too hot or cold.

Of the planets that stuck around, about 15 percent stayed in their habitable zone — a temperate region around their stars where water could stay liquid — most or even all of the time.
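The researchers’ own models track the full orbital dynamics, but the “temperate” criterion itself comes down to how much combined starlight a planet receives. As a simpler illustration, here is a sketch of the energy-balance temperature of an airless planet lit by two stars; the luminosities, distances and albedo below are illustrative values, not ones from the study:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def equilibrium_temp(l1_lsun, l2_lsun, d1_au, d2_au, albedo=0.3):
    """Energy-balance temperature of an airless planet lit by two stars.

    Fluxes from the two stars simply add; the planet re-radiates the
    absorbed energy from its whole surface (hence the factor of 4).
    """
    flux = (l1_lsun * L_SUN / (4 * math.pi * (d1_au * AU) ** 2) +
            l2_lsun * L_SUN / (4 * math.pi * (d2_au * AU) ** 2))
    return ((1 - albedo) * flux / (4 * SIGMA)) ** 0.25

# One sun-like star at 1 AU: ~255 K, Earth's classic equilibrium temperature
print(equilibrium_temp(1.0, 0.0, 1.0, 1.0))
```

Adding a second star raises the flux and pushes the habitable zone outward, which is part of why its boundaries shift as the planet moves around the pair.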

The researchers ran simulations of 4,000 configurations of binary stars, each with an Earthlike planet in orbit around them. The team varied things like the relative masses of the stars, the sizes and shapes of the stars’ orbits around each other, and the size of the planet’s orbit around the binary pair.

The scientists then tracked the motion of the planets for up to a billion years of simulated time to see if the planets would stay in orbit over the sorts of timescales that might allow life to emerge.

A planet orbiting binary stars can get kicked out of the star system due to complicated interactions between the planet and stars. In the new study, the researchers found that, for planets with large orbits around star pairs, only about 1 in 8 was kicked out of the system. The rest were stable enough to continue to orbit for the full billion years. About 1 in 10 settled in their habitable zones and stayed there.

Of the 4,000 planets that the team simulated, roughly 500 maintained stable orbits that kept them in their habitable zones at least 80 percent of the time.

“The habitable zone . . . as I’ve characterized it so far, spans from freezing to boiling,” said Michael Pedowitz, an undergraduate student at the College of New Jersey in Ewing who presented the research. Their definition is overly strict, he said, because they chose to model Earthlike planets without atmospheres or oceans. That’s simpler to simulate, but it also allows temperatures to fluctuate wildly on a planet as it orbits.

“An atmosphere and oceans would smooth over temperature variations fairly well,” says study coauthor Mariah MacDonald, an astrobiologist also at the College of New Jersey. An abundance of air and water would potentially allow a planet to maintain habitable conditions, even if it spent more of its time outside of the nominal habitable zone around a binary star system.

The number of potentially habitable planets “will increase once we add atmospheres,” MacDonald says, “but I can’t yet say by how much.”

She and Pedowitz hope to build more sophisticated models in the coming months, as well as extend their simulations beyond a billion years and include changes in the stars that can affect conditions in a solar system as it ages.

The possibility of stable and habitable planets in binary star systems is a timely issue, says Penn State astrophysicist Jason Wright, who was not involved in the study.

“At the time Star Wars came out,” he says, “we didn’t know of any planets outside the solar system, and wouldn’t for 15 years. Now we know that there are many and that they orbit these binary stars.”

These simulations of planets orbiting binaries could serve as a guide for future experiments, Wright says. “This is an under-explored population of planets. There’s no reason we can’t go after them, and studies like this are presumably showing us that it’s worthwhile to try.”

Why pandemic fatigue and COVID-19 burnout took over in 2022

2022 was the year many people decided the coronavirus pandemic had ended.

President Joe Biden said as much in an interview with 60 Minutes in September. “The pandemic is over,” he said while strolling around the Detroit Auto Show. “We still have a problem with COVID. We’re still doing a lot of work on it. But the pandemic is over.”

His evidence? “No one’s wearing masks. Everybody seems to be in pretty good shape.”

But the week Biden’s remarks aired, about 360 people were still dying each day from COVID-19 in the United States. Globally, about 10,000 deaths were recorded every week. That’s “10,000 too many, when most of these deaths could be prevented,” the World Health Organization Director-General Tedros Adhanom Ghebreyesus said in a news briefing at the time. Then, of course, there are the millions who are still dealing with lingering symptoms long after an infection.

Those staggering numbers have stopped alarming people, maybe because those stats came on the heels of two years of mind-boggling death counts (SN Online: 5/18/22). Indifference to the mounting death toll may reflect pandemic fatigue that settled deep within the public psyche, leaving many feeling over and done with safety precautions.

“We didn’t warn people about fatigue,” says Theresa Chapple-McGruder, an epidemiologist in the Chicago area. “We didn’t warn people about the fact that pandemics can last long and that we still need people to be willing to care about yourselves, your neighbors, your community.”

Public health agencies around the world, including in Singapore and the United Kingdom, reinforced the idea that we could “return to normal” by learning to “live with COVID.” The U.S. Centers for Disease Control and Prevention’s guidelines raised the threshold for case counts that would trigger masking (SN Online: 3/3/22). The agency also shortened suggested isolation times for infected people to five days, even though most people still test positive for the virus and are potentially infectious to others for several days longer (SN Online: 8/19/22).

The shifting guidelines bred confusion and put the onus for deciding when to mask, test and stay home on individuals. In essence, the strategy shifted from public health — protecting your community — to individual health — protecting yourself.

Doing your part can be exhausting, says Eric Kennedy, a sociologist specializing in disaster management at York University in Toronto. “Public health is saying, ‘Hey, you have to make the right choices every single moment of your life.’ Of course, people are going to get tired with that.”

Doing the right thing — from getting vaccinated to wearing masks indoors — didn’t always feel like it paid off on a personal level. As good as the vaccines are at keeping people from becoming severely ill or dying of COVID-19, they were not as effective at protecting against infection. This year, many people who tried hard to make safe choices and had avoided COVID-19 got infected by wily omicron variants (SN Online: 4/22/22). People sometimes got reinfected — some more than once (SN: 7/16/22 & 7/30/22, p. 8).

Those infections may have contributed to a sense of futility. “Like, ‘I did my best. And even with all of that work, I still got it. So why should I try?’ ” says Kennedy, head of a Canadian project monitoring the sociological effects of the COVID-19 pandemic.

Getting vaccinated, masking and getting drugs or antibody treatments can reduce the severity of infection and may cut the chances of infecting others. “We should have been talking about this as a community health issue and not a personal health issue,” Chapple-McGruder says. “We also don’t talk about the fact that our uptake [of these tools] is nowhere near what we need” to avoid the hundreds of daily deaths.

A lack of data about how widely the coronavirus is still circulating makes it difficult to say whether the pandemic is ending. In the United States, the influx of home tests was “a blessing and a curse,” says Beth Blauer, data lead for the Johns Hopkins University Coronavirus Resource Center. The tests gave an instant readout that told people whether they were infected and should isolate. But because those results were rarely reported to public health officials, true numbers of cases became difficult to gauge, creating a big data gap (SN Online: 5/27/22).

The flow of COVID-19 data from many state and local agencies also slowed to a trickle. In October, even the CDC began reporting cases and deaths weekly instead of daily. Altogether, undercounting of the coronavirus’s reach became worse than ever.

“We’re being told, ‘it’s up to you now to decide what to do,’ ” Blauer says, “but the data is not in place to be able to inform real-time decision making.”

With COVID-19 fatigue so widespread, businesses, governments and other institutions have to find ways to step up and do their part, Kennedy says. For instance, requiring better ventilation and filtration in public buildings could clean up indoor air and reduce the chance of spreading many respiratory infections, along with COVID-19. That’s a behind-the-scenes intervention that individuals don’t have to waste mental energy worrying about, he says.

The bottom line: People may have stopped worrying about COVID-19, but the virus isn’t done with us yet. “We have spent two-and-a-half years in a long, dark tunnel, and we are just beginning to glimpse the light at the end of that tunnel. But it is still a long way off,” WHO’s Tedros said. “The tunnel is still dark, with many obstacles that could trip us up if we don’t take care.” If the virus makes a resurgence, will we see it coming and will we have the energy to combat it again?

‘The Dawn of Everything’ rewrites 40,000 years of human history

Concerns abound about what’s gone wrong in modern societies. Many scholars explain growing gaps between the haves and the have-nots as partly a by-product of living in dense, urban populations. The bigger the crowd, from this perspective, the more we need power brokers to run the show. Societies have scaled up for thousands of years, which has magnified the distance between the wealthy and those left wanting.

In The Dawn of Everything, anthropologist David Graeber and archaeologist David Wengrow challenge the assumption that bigger societies inevitably produce a range of inequalities. Using examples from past societies, the pair also rejects the popular idea that social evolution occurred in stages.

Such stages, according to conventional wisdom, began with humans living in small hunter-gatherer bands where everyone was on equal footing. Then an agricultural revolution about 12,000 years ago fueled population growth and the emergence of tribes, then chiefdoms and eventually bureaucratic states. Or perhaps murderous alpha males dominated ancient hunter-gatherer groups. If so, early states may have represented attempts to corral our selfish, violent natures.

Neither scenario makes sense to Graeber and Wengrow. Their research synthesis — which extends for 526 pages — paints a more hopeful picture of social life over the last 30,000 to 40,000 years. For most of that time, the authors argue, humans have tactically alternated between small and large social setups. Some social systems featured ruling elites, working stiffs and enslaved people. Others emphasized decentralized, collective decision making. Some were run by men, others by women. The big question — one the authors can’t yet answer — is why, after tens of thousands of years of social flexibility, many people today can’t conceive of how society might effectively be reorganized.

Hunter-gatherers have a long history of revamping social systems from one season to the next, the authors write. About a century ago, researchers observed that Indigenous populations in North America and elsewhere often operated in small, mobile groups for part of the year and crystallized into large, sedentary communities the rest of the year. For example, each winter, Canada’s Northwest Coast Kwakiutl hunter-gatherers built wooden structures where nobles ruled over designated commoners and enslaved people, and held banquets called potlatch. In summers, aristocratic courts disbanded, and clans with less formal social ranks fished along the coast.

Many Late Stone Age hunter-gatherers similarly assembled and dismantled social systems on a seasonal basis, evidence gathered over the last few decades suggests. Scattered discoveries of elaborate graves for apparently esteemed individuals (SN: 10/5/17) and huge structures made of stone (SN: 2/11/21), mammoth bones and other material dot Eurasian landscapes. The graves may hold individuals who were accorded special status, at least at times of the year when mobile groups formed large communities and built large structures, the authors speculate. Seasonal gatherings to conduct rituals and feasts probably occurred at the monumental sites. No signs of centralized power, such as palaces or storehouses, accompany those sites.

Social flexibility and experimentation, rather than a revolutionary shift, also characterized ancient transitions to agriculture, Graeber and Wengrow write. Middle Eastern village excavations now indicate that the domestication of cereals and other crops occurred in fits and starts from around 12,000 to 9,000 years ago. Ancient Fertile Crescent communities periodically gave farming a go while still hunting, foraging, fishing and trading. Early cultivators were in no rush to treat tracts of land as private property or to form political systems headed by kings, the authors conclude.

Even in early cities of Mesopotamia and Eurasia around 6,000 years ago (SN: 2/19/20), absolute rule by monarchs did not exist. Collective decisions were made by district councils and citizen assemblies, archaeological evidence suggests. In contrast, authoritarian, violent political systems appeared in the region’s mobile, nonagricultural populations at that time.

Early states formed in piecemeal fashion, the authors argue. These political systems incorporated one or more of three basic elements of domination: violent control of the masses by authorities, bureaucratic management of special knowledge and information, and public demonstrations of rulers’ power and charisma. Egypt’s early rulers more than 4,000 years ago fused violent coercion of their subjects with extensive bureaucratic controls over daily affairs. Classic Maya rulers in Central America 1,100 years ago or more relied on administrators to monitor cosmic events while grounding earthly power in violent control and alliances with other kings.

States can take many forms, though. Graeber and Wengrow point to Bronze Age Minoan society on Crete as an example of a political system run by priestesses who called on citizens to transcend individuality via ecstatic experiences that bound the population together.

What seems to have changed today is that basic social liberties have receded, the authors contend. The freedom to relocate to new kinds of communities, to disobey commands issued by others and to create new social systems or alternate between different ones has become a scarce commodity. Finding ways to reclaim that freedom is a major challenge.

These examples give just a taste of the geographic and historical ground covered by the authors. Shortly after finishing writing the book, Graeber, who died in 2020, tweeted: “My brain feels bruised with numb surprise.” That sense of revelation animates this provocative take on humankind’s social journey.

When James Webb launches, it will have a bigger to-do list than 1980s researchers suspected

The James Webb Space Telescope has been a long time coming. When it launches later this year, the observatory will be the largest and most complex telescope ever sent into orbit. Scientists have been drafting and redrafting their dreams and plans for this unique tool since 1989.

The mission was originally scheduled to launch between 2007 and 2011, but a series of budget and technical issues pushed its start date back more than a decade. Remarkably, the core design of the telescope hasn’t changed much. But the science that it can dig into has. In the years of waiting for Webb to be ready, big scientific questions have emerged. When Webb was an early glimmer in astronomers’ eyes, cosmological revolutions like the discoveries of dark energy and planets orbiting stars outside our solar system hadn’t yet happened.

“It’s been over 25 years,” says cosmologist Wendy Freedman of the University of Chicago. “But I think it was really worth the wait.”

An audacious plan
Webb has a distinctive design. Most space telescopes house a single lens or mirror within a tube that blocks sunlight from swamping the dim lights of the cosmos. But Webb’s massive 6.5-meter-wide mirror and its scientific instruments are exposed to the vacuum of space. A multilayered shield the size of a tennis court will block light from the sun, Earth and moon.

For the awkward shape to fit on a rocket, Webb will launch folded up, then unfurl itself in space (see below, What could go wrong?).

“They call this the origami satellite,” says astronomer Scott Friedman of the Space Telescope Science Institute, or STScI, in Baltimore. Friedman is in charge of Webb’s postlaunch choreography. “Webb is different from any other telescope that’s flown.”

Its basic design hasn’t changed in more than 25 years. The telescope was first proposed in September 1989 at a workshop held at STScI, which also runs the Hubble Space Telescope.

At the time, Hubble was less than a year from launching, and was expected to function for only 15 years. Thirty-one years after its launch, the telescope is still going strong, despite a series of computer glitches and gyroscope failures (SN Online: 10/10/18).

The institute director at the time, Riccardo Giacconi, was concerned that the next major mission would take longer than 15 years to get off the ground. So he and others proposed that NASA investigate a possible successor to Hubble: a space telescope with a 10-meter-wide primary mirror that was sensitive to light in infrared wavelengths to complement Hubble’s range of ultraviolet, visible and near-infrared.

Infrared light has a longer wavelength than light that is visible to human eyes. But it’s perfect for a telescope to look back in time. Because light travels at a fixed speed, looking at distant objects in the universe means seeing them as they looked in the past. The universe is expanding, so that light is stretched before it reaches our telescopes. For the most distant objects in the universe — the first galaxies to clump together, or the first stars to burn in those galaxies — light that was originally emitted in shorter wavelengths is stretched all the way to the infrared.
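The stretch is easy to quantify: the observed wavelength equals the emitted wavelength times (1 + z), where z is the object’s redshift. A small illustration, with a hypothetical galaxy at z = 10:

```python
def observed_wavelength_nm(rest_nm, z):
    # Cosmological redshift stretches wavelength by a factor of (1 + z)
    return rest_nm * (1 + z)

# Lyman-alpha light (121.6 nm, far-ultraviolet) from a galaxy at z = 10
# arrives at 121.6 * 11 = 1337.6 nm, squarely in the near-infrared --
# which is why a telescope hunting the first galaxies must observe there.
print(observed_wavelength_nm(121.6, 10))
```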

Giacconi and his collaborators dreamed of a telescope that would detect that stretched light from the earliest galaxies. When Hubble started sharing its views of the early universe, the dream solidified into a science plan. The galaxies Hubble saw at great distances “looked different from what people were expecting,” says astronomer Massimo Stiavelli, a leader of the James Webb Space Telescope project who has been at STScI since 1995. “People started thinking that there is interesting science here.”

In 1995, STScI and NASA commissioned a report to design Hubble’s successor. The report, led by astronomer Alan Dressler of the Carnegie Observatories in Pasadena, Calif., suggested an infrared space observatory with a 4-meter-wide mirror.

The bigger a telescope’s mirror, the more light it can collect, and the farther it can see. Four meters wasn’t that much larger than Hubble’s 2.4-meter-wide mirror, but anything bigger would be difficult to launch.
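Collecting area grows with the square of a mirror’s diameter, which is why each extra meter matters so much. A quick comparison, using the diameters mentioned in the article:

```python
def area_ratio(d_new_m, d_old_m):
    # A circular mirror's collecting area scales as diameter squared,
    # so the ratio of areas is the ratio of diameters, squared.
    return (d_new_m / d_old_m) ** 2

print(area_ratio(4.0, 2.4))  # Dressler report's 4 m vs Hubble's 2.4 m: ~2.8x
print(area_ratio(6.5, 2.4))  # Webb's eventual 6.5 m vs Hubble: ~7.3x
```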

Dressler briefed then-NASA Administrator Dan Goldin in late 1995. In January 1996 at the American Astronomical Society’s annual meeting, Goldin challenged the scientists to be more ambitious. He called out Dressler by name, saying, “Why do you ask for such a modest thing? Why not go after six or seven meters?” (Still nowhere near Giacconi’s pie-in-the-sky 10-meter wish.) The speech received a standing ovation.

Six meters was a larger mirror than had ever flown in space, and larger than would fit in available launch vehicles. Scientists would have to design a telescope mirror that could fold, then deploy once it reached space.

The telescope would also need to cool itself passively by radiating heat into space. It needed a sun shield — a big one. The origami telescope was born. It was dubbed James Webb in 2002 for NASA’s administrator from 1961 to 1968, who fought to support research to boost understanding of the universe in the increasingly human-focused space program. (In response to a May petition to change the name, NASA investigated allegations that James Webb persecuted gay and lesbian people during his government career. The agency announced on September 27 that it found no evidence warranting a name change.)

Goldin’s motto at NASA was “Faster, better, cheaper.” Bigger was better for Webb, but it sure wasn’t faster — or cheaper. By late 2010, the project was more than $1.4 billion over its $5.1 billion budget (SN: 4/9/11, p. 22). And it was going to take another five years to be ready. Today, the cost is estimated at almost $10 billion.

The telescope survived a near-cancellation by Congress, and its timeline was reset for an October 2018 launch. But in 2017, the launch was pushed to June 2019. Two more delays in 2018 pushed the takeoff to May 2020, then to March 2021. Some of those delays were because assembling and testing the spacecraft took longer than NASA expected.

Other slowdowns were because of human errors, like using the wrong cleaning solvent, which damaged valves in the propulsion system. Recent shutdowns due to the coronavirus pandemic pushed the launch back a few more months.

“I don’t think we ever imagined it would be this long,” says University of Chicago’s Freedman, who worked on the Dressler report. But there’s one silver lining: Science marched on.

The age conflict
The first science goal listed in the Dressler report was “the detailed study of the birth and evolution of normal galaxies such as the Milky Way.” That is still the dream, partly because it’s such an ambitious goal, Stiavelli says.

“We wanted a science rationale that would resist the test of time,” he says. “We didn’t want to build a mission that would do something that gets done in some other way before you’re done.”

Webb will peek at galaxies and stars as they were just 400 million years after the Big Bang, which astronomers think is the epoch when the first tiny galaxies began making the universe transparent to light by stripping electrons from cosmic hydrogen.

But in the 1990s, astronomers had a problem: There didn’t seem to be enough time in the universe to make galaxies much earlier than the ones astronomers had already seen. The standard cosmology at the time suggested the universe was 8 billion or 9 billion years old, but there were stars in the Milky Way that seemed to be about 14 billion years old.

“There was this age conflict that reared its head,” Freedman says. “You can’t have a universe that’s younger than the oldest stars. The way people put it was, ‘You can’t be older than your grandmother!’”

In 1998, two teams of cosmologists showed that the universe is expanding at an ever-increasing rate. A mysterious substance dubbed dark energy may be pushing the universe to expand faster and faster. That accelerated expansion means the universe is older than astronomers previously thought — the current estimate is about 13.8 billion years old.

“That resolved the age conflict,” Freedman says. “The discovery of dark energy changed everything.” And it expanded Webb’s to-do list.

Dark energy
Top of the list is getting to the bottom of a mismatch in cosmic measurements. Since at least 2014, different methods for measuring the universe’s rate of expansion — called the Hubble constant — have been giving different answers. Freedman calls the issue “the most important problem in cosmology today.”

The question, Freedman says, is whether the mismatch is real. A real mismatch could indicate something profound about the nature of dark energy and the history of the universe. But the discrepancy could just be due to measurement errors.

Webb can help settle the debate. One common way to determine the Hubble constant is by measuring the distances and speeds of far-off galaxies. Measuring cosmic distances is difficult, but astronomers can estimate them using objects of known brightness, called standard candles. If you know the object’s actual brightness, you can calculate its distance based on how bright it seems from Earth.
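The arithmetic behind standard candles is the distance-modulus relation from the inverse-square law. A minimal sketch in Python, using hypothetical magnitudes rather than values from the article:

```python
def candle_distance_pc(absolute_mag, apparent_mag):
    """Distance in parsecs from the distance modulus:
    apparent - absolute = 5 * log10(distance / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical candle: known absolute magnitude -4.0,
# observed from Earth at apparent magnitude 21.0
d = candle_distance_pc(-4.0, 21.0)
print(f"{d / 1e6:.1f} Mpc")  # prints "1.0 Mpc"
```

The dimmer the candle appears relative to its known brightness, the larger the computed distance.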

Studies using supernovas and variable stars called Cepheids as candles have found an expansion rate of 74.0 kilometers per second for every megaparsec, or about 3.26 million light-years, of distance between objects. But using red giant stars, Freedman and colleagues have gotten a smaller answer: 69.8 km/s/Mpc.

Other studies have measured the Hubble constant by looking at the dim glow of light emitted just 380,000 years after the Big Bang, called the cosmic microwave background. Calculations based on that glow give a smaller rate still: 67.4 km/s/Mpc. Although these numbers may seem close, the fact that they disagree at all could alter our understanding of the contents of the universe and how it evolves over time. The discrepancy has been called a crisis in cosmology (SN: 9/14/19, p. 22).
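One way to see why a few km/s/Mpc matters: under Hubble’s law, v = H0 × d, so each proposed constant implies a different distance for a galaxy receding at the same speed. A quick sketch (the recession speed is an arbitrary illustrative value, not from the article):

```python
# Hubble's law: v = H0 * d, so the inferred distance is d = v / H0.
v = 7000.0  # recession speed in km/s (illustrative)

for label, H0 in [("supernova/Cepheid value", 74.0),
                  ("red giant value", 69.8),
                  ("microwave background value", 67.4)]:
    print(f"{label}: {v / H0:6.1f} Mpc")
```

The three values put the same galaxy at roughly 95, 100 and 104 megaparsecs, a spread of nearly 10 percent.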

In its first year, Webb will observe some of the same galaxies used in the supernova studies, using three different types of objects as candles: Cepheids, red giants and peculiar stars called carbon stars.

The telescope will also try to measure the Hubble constant using a distant gravitationally lensed galaxy. Comparing those measurements with each other and with similar ones from Hubble will show if earlier measurements were just wrong, or if the tension between measurements is real, Freedman says.

Without these new observations, “we were just going to argue about the same things forever,” she says. “We just need better data. And [Webb] is poised to deliver it.”

Exoplanets
Perhaps the biggest change for Webb science has been the rise of the field of exoplanet explorations.

“When this was proposed, exoplanets were scarcely a thing,” says STScI’s Friedman. “And now, of course, it’s one of the hottest topics in all of science, especially all of astronomy.”

The Dressler report’s second major goal for Hubble’s successor was “the detection of Earthlike planets around other stars and the search for evidence of life on them.” But back in 1995, only a handful of planets orbiting other sunlike stars were even known, and all of them were scorching-hot gas giants — nothing like Earth at all.

Since then, astronomers have discovered thousands of exoplanets orbiting distant stars. Scientists now estimate that, on average, there is at least one planet for every star we see in the sky. And some of the planets are small and rocky, with the right temperatures to support liquid water, and maybe life.

Most of the known planets were discovered as they crossed, or transited, in front of their parent stars, blocking a little bit of the parent star’s light. Astronomers soon realized that, if those planets have atmospheres, a sensitive telescope could effectively sniff the air by examining the starlight that filters through the atmosphere.

The infrared Spitzer Space Telescope, which launched in 2003, and Hubble have started this work. But Spitzer ran out of coolant in 2009, keeping it too warm to measure important molecules in exoplanet atmospheres. And Hubble is not sensitive to some of the most interesting wavelengths of light — the ones that could reveal alien life-forms.

That’s where Webb is going to shine. If Hubble is peeking through a crack in a door, Webb will throw the door wide open, says exoplanet scientist Nikole Lewis of Cornell University. Crucially, Webb, unlike Hubble, will be particularly sensitive to several carbon-bearing molecules in exoplanet atmospheres that might be signs of life.

“Hubble can’t tell us anything really about carbon, carbon monoxide, carbon dioxide, methane,” she says.

If Webb had launched in 2007, it could have missed this whole field. Even though the first transiting exoplanet was discovered in 1999, their numbers were low for the next decade.

Lewis remembers thinking, when she started grad school in 2007, that she could make a computer model of all the transiting exoplanets. “Because there were literally only 25,” she says.

Between 2009 and 2018, NASA’s Kepler space telescope raked in transiting planets by the thousands. But those planets were too dim and distant for Webb to probe their atmospheres.

So the down-to-the-wire delays of the last few years have actually been good for exoplanet research, Lewis says. “The launch delays were one of the best things that’s happened for exoplanet science with Webb,” she says. “Full stop.”

That’s mainly thanks to NASA’s Transiting Exoplanet Survey Satellite, or TESS, which launched in April 2018. TESS’ job is to find planets orbiting the brightest, nearest stars, which will give Webb the best shot at detecting interesting molecules in planetary atmospheres.

If it had launched in 2018, Webb would have had to wait a few years for TESS to pick out the best targets. Now, it can get started on those worlds right away. Webb’s first year of observations will include probing several known exoplanets that have been hailed as possible places to find life. Scientists will survey planets orbiting small, cool stars called M dwarfs to make sure such planets even have atmospheres, a question that has been hotly debated.

If a sign of life does show up on any of these planets, that result will be fiercely debated, too, Lewis says. “There will be a huge kerfuffle in the literature when that comes up.” It will be hard to compare planets orbiting M dwarfs with Earth, because these planets and their stars are so different from ours. Still, “let’s look and see what we find,” she says.

A limited lifetime
With its components assembled, tested and folded at Northrop Grumman’s facilities in California, Webb is on its way by boat through the Panama Canal, ready to launch on an Ariane 5 rocket from French Guiana. The most recent launch date is set for December 18.

For the scientists who have been working on Webb for decades, this is a nostalgic moment.

“You start to relate to the folks who built the pyramids,” Stiavelli says.

Other scientists, who grew up in a world where Webb was always on the horizon, are already thinking about the next big thing.

“I’m pretty sure, barring epic disaster, that [Webb] will carry my career through the next decade,” Lewis says. “But I have to think about what I’ll do in the next decade” after that.

Unlike Hubble, which has lasted decades thanks to fixes by astronauts and upgrade missions, Webb has a strictly limited lifetime. Orbiting the sun near a gravitational balance point called L2, Webb will be too far from Earth to repair, and will need to burn small amounts of fuel to stay in position. The fuel will last for at least five years, and hopefully as much as 10. But when the fuel runs out, Webb is finished. The telescope operators will move it into retirement in an out-of-the-way orbit around the sun, and bid it farewell.

Space rocks may have bounced off baby Earth, but slammed into Venus

Squabbling sibling planets may have hurled space rocks when they were young.

Simulations suggest that space rocks the size of baby planets struck both the newborn Earth and Venus, but many of the rocks that only grazed Earth went on to hit — and stick to — Venus. That difference in early impacts could help explain why Earth and Venus are such different worlds today, researchers report September 23 in the Planetary Science Journal.

“The pronounced differences between Earth and Venus, in spite of their similar orbits and masses, have been one of the biggest puzzles in our solar system,” says planetary scientist Shigeru Ida of the Tokyo Institute of Technology, who was not involved in the new work. This study introduces “a new point that has not been raised before.”

Scientists have typically thought that there are two ways that collisions between baby planets can go. The objects could graze each other and each continue on its way, in a hit-and-run collision. Or two protoplanets could stick together, or accrete, making one larger planet. Planetary scientists often assume that every hit-and-run collision eventually leads to accretion. Objects that collide must have orbits that cross each other’s, so they’re bound to collide again and again, and eventually should stick.

But previous work from planetary scientist Erik Asphaug of the University of Arizona in Tucson and others suggests that isn’t so. It takes special conditions for two planets to merge, Asphaug says, like relatively slow impact speeds, so hit-and-runs were probably much more common in the young solar system.

Asphaug and colleagues wondered what that might have meant for Earth and Venus, two apparently similar planets with vastly different climates. Both worlds are about the same size and mass, but Earth is wet and clement while Venus is a searing, acidic hellscape (SN: 2/13/18).

“If they started out on similar pathways, somehow Venus took a wrong turn,” Asphaug says.

The team ran about 4,000 computer simulations in which Mars-sized protoplanets crashed into a young Earth or Venus, assuming the two planets were at their current distances from the sun. The researchers found that about half of the time, incoming protoplanets grazed Earth in hit-and-run collisions rather than sticking. Of those, about half went on to collide with Venus.

Unlike Earth, Venus ended up accreting most of the objects that hit it in the simulations. Hitting Earth first slowed incoming objects down enough to let them stick to Venus later, the study suggests. “You have this imbalance where things that hit the Earth, but don’t stick, tend to end up on Venus,” Asphaug says. “We have a fundamental explanation for why Venus ended up accreting differently from the Earth.”

If that’s really what happened, it would have had a significant effect on the composition of the two worlds. Earth would have ended up with more of the outer mantle and crust material from the incoming protoplanets, while Venus would have gotten more of their iron-rich cores.

The imbalance in impacts could even explain some major Venusian mysteries, like why the planet doesn’t have a moon, why it spins so slowly and why it lacks a magnetic field — though “these are hand-waving kind of conjectures,” Asphaug says.

Ida says he hopes that future work will look into those questions more deeply. “I’m looking forward to follow-up studies to examine if the new result actually explains the Earth-Venus difference,” he says.

The idea fits into a growing debate among planetary scientists about how the solar system grew up, says planetary scientist Seth Jacobson of Michigan State University in East Lansing. Was it built violently, with lots of giant collisions, or calmly, with planets growing smoothly via pebbles sticking together?

“This paper falls on the end of lots of giant impacts,” Jacobson says.

Each rocky planet in the solar system should have very different chemistry and structure depending on which scenario is true. But scientists know the chemistry and structure of only one planet with any confidence: Earth. And Earth’s early history has been overwritten by plate tectonics and other geologic activity. “Venus is the missing link,” Jacobson says. “Learning more about Venus’ chemistry and interior structure is going to tell us more about whether it had a giant impact or not.”

Three missions to Venus are expected to launch in the late 2020s and 2030s (SN: 6/2/21). Those should help, but none are expected to take the kind of detailed composition measurements that could definitively solve the mystery. That would take a long-lived lander, or a sample return mission, both of which would be extremely difficult on hot, hostile Venus.

“I wish there was an easier way to test it,” Jacobson says. “I think that’s where we should concentrate our energy as terrestrial planet formation scientists going forward.”

Satellite swarms may outshine the night sky’s natural constellations

Fleets of private satellites orbiting Earth will be visible to the naked eye in the next few years, sometimes all night long.

Companies like SpaceX and Amazon have launched hundreds of satellites into low orbits since 2019, with plans to launch thousands more in the works — a trend that’s alarming astronomers. The goal of these satellite “mega-constellations” is to bring high-speed internet around the globe, but these bright objects threaten to disrupt astronomers’ ability to observe the cosmos (SN: 3/12/20). “For astronomers, this is kind of a pants-on-fire situation,” says radio astronomer Harvey Liszt of the National Radio Astronomy Observatory in Charlottesville, Va.

Now, a new simulation of the potential positions and brightness of these satellites shows that, contrary to earlier predictions, casual sky watchers will have their view disrupted, too. And parts of the world will be affected more than others, astronomer Samantha Lawler of the University of Regina in Canada and her colleagues report in a paper posted September 9 at arXiv.org.

“How will this affect the way the sky looks to your eyeballs?” Lawler asks. “We humans have been looking up at the night sky and analyzing patterns there for as long as we’ve been human. It’s part of what makes us human.” These mega-constellations could mean “we’ll see a human-made pattern more than we can see the stars, for the first time in human history.”

Flat, smooth surfaces on satellites can reflect sunlight depending on their position in the sky. Earlier research had suggested that most of the new satellites would not be visible with the naked eye.

Lawler, along with Aaron Boley of the University of British Columbia and Hanno Rein of the University of Toronto Scarborough in Canada, started building their simulation with public data about the launch plans of four companies — SpaceX’s Starlink, Amazon’s Kuiper, OneWeb and StarNet/GW — that had been filed with the U.S. Federal Communications Commission and the International Telecommunications Union. The filings detailed the expected orbital heights and angles of 65,000 satellites that could be launched over the next few years.

“It’s impossible to predict the future, but this is realistic,” says astronomer Meredith Rawls of the University of Washington in Seattle, who was not involved in the new study. “A lot of times when people make these simulations, they pick a number out of a hat. This really justifies the numbers that they pick.”

There are currently about 7,890 objects in Earth orbit, about half of which are operational satellites, according to the U.N. Office for Outer Space Affairs. But that number is increasing fast as companies launch more and more satellites (SN: 12/28/20). In August 2020, there were only about 2,890 operational satellites.

Next, the researchers computed how many satellites will be in the sky at different times of year, at different hours of the night and from different positions on Earth’s surface. They also estimated how bright the satellites were likely to be at different hours of the day and times of the year.

That calculation required a lot of assumptions because companies aren’t required to publish details about their satellites like the materials they’re made of or their precise shapes, both of which can affect reflectivity. But there are enough satellites in orbit that Lawler and colleagues could compare their simulated satellites to the light reflected down to Earth by the real ones.

The simulations showed that “the way the night sky is going to change will not affect all places equally,” Lawler says. The places where naked-eye stargazing will be most affected are at latitudes 50° N and 50° S, regions that cross lower Canada, much of Europe, Kazakhstan and Mongolia, and the southern tips of Chile and Argentina, the researchers found.

“The geometry of sunlight in the summer means there will be hundreds of visible satellites all night long,” Lawler says. “It’s bad everywhere, but it’s worse there.” For her, this is personal: She lives at 50° N.

Closer to the equator, where many research observatories are located, there is a period of about three hours in the winter and near the time of the spring and fall equinoxes with few or no sunlit satellites visible. But there are still hundreds of sunlit satellites all night at these locations in the summer.

A few visible satellites can be a fun spectacle, Lawler concedes. “I think we really are at a transition point here where right now, seeing a satellite, or even a Starlink train, is cool and different and wow, that’s amazing,” she says. “I used to look up when the [International Space Station] was overhead.” But she compares the coming change to watching one car go down the road 100 years ago, versus living next to a busy freeway now.

“Every sixteenth star will actually be moving,” she says. “I hope I’m wrong. I’ve never wanted to be wrong about a simulation more than this. But without mitigation, this is what the sky will look like in a few years.”

Astronomers have been meeting with representatives from private companies, as well as space lawyers and government officials, to work out compromises and mitigation strategies. Companies have been testing ways to reduce reflectivity, like shading the satellites with a “visor.” Other proposed strategies include limiting the satellites to lower orbits, where they move faster across the sky and leave a fainter streak in telescope images. Counterintuitively, lower satellites may be better for some astronomy research, Rawls says. “They move out of the way quick.”

But that lower altitude strategy will mean more visible satellites for other parts of the world, and more that are visible to the naked eye. “There’s not some magical orbital altitude that solves all our problems,” Rawls says. “There are some latitudes on Earth where no matter what altitude you put your satellites at, they’re going to be all over the darn place. The only way out of this is fewer satellites.”

There are currently no regulations concerning how bright a satellite can be or how many satellites a private company can launch. Scientists are grateful that companies are willing to work with them, but nervous that their cooperation is voluntary.

“A lot of the people who work on satellites care about space. They’re in this industry because they think space is awesome,” Rawls says. “We share that, which helps. But it doesn’t fix it. I think we need to get some kind of regulation as soon as possible.” (Representatives from Starlink, Kuiper and OneWeb did not respond to requests for comment.)

Efforts are under way to bring the issue to the attention of the United Nations and to try to use existing environmental regulations to place limits on satellite launches, says study coauthor Boley (who also lives near 50° N).

Analogies to other global pollution problems, like space junk, can provide inspiration and precedents, he says. “There are a number of ways forward. We shouldn’t just lose hope. We can do things about this.”

A supernova’s delayed reappearance could pin down how fast the universe expands

A meandering trek taken by light from a remote supernova in the constellation Cetus may help researchers pin down how fast the universe expands — in another couple of decades.

About 10 billion years ago, a star exploded in a far-off galaxy named MRG-M0138. Some of the light from that explosion later encountered a gravitational lens, a cluster of galaxies whose gravity bent the light so that we see multiple images. In 2016, the supernova appeared in Earth’s sky as three distinct points of light, each marking a different path the light took to get here.

Now, researchers predict that the supernova will appear again in the late 2030s. The time delay — the longest ever seen from a gravitationally lensed supernova — could provide a more precise estimate for the distance to the supernova’s host galaxy, the team reports September 13 in Nature Astronomy. And that, in turn, may let astronomers refine estimates of the Hubble constant, the parameter that describes how fast the universe expands.

The original three points of light appeared in images from the Hubble Space Telescope. “It was purely an accident,” says astronomer Steve Rodney of the University of South Carolina in Columbia. Three years later, when Hubble reobserved the galaxy, astronomer Gabriel Brammer at the University of Copenhagen discovered that all three points of light had vanished, indicating a supernova.

By calculating how the intervening cluster’s gravity alters the path the supernova’s light rays take, Rodney and his colleagues predict that the supernova will appear again in 2037, give or take a couple of years. Around that time, Hubble may burn up in the atmosphere, so Rodney’s team dubs the supernova “SN Requiem.”

“It’s a requiem for a dying star and a sort of elegy to the Hubble Space Telescope itself,” Rodney says. A fifth point of light, too faint to be seen, may also arrive around 2042, the team calculates.

The predicted 21-year time delay — from 2016 to 2037 — is a record for a supernova. In contrast, the first gravitational lens ever found — twin images of a quasar spotted in 1979 — has a time delay of only 1.1 years (SN: 11/10/1979).

Not everyone agrees with Rodney’s forecast. “It is very difficult to predict what the time delay will be,” says Rudolph Schild, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., who was the first to measure the double quasar’s time delay. The distribution of dark matter in the galaxy hosting the supernova and the cluster splitting the supernova’s light is so uncertain, Schild says, that the next image of SN Requiem could come outside the years Rodney’s team has specified.

In any case, when the supernova image does appear, “that would be a phenomenally precise measurement” of the time delay, says Patrick Kelly, an astronomer at the University of Minnesota in Minneapolis who was not involved with the new work. That’s because the uncertainty in the time delay will be tiny compared with the tremendous length of the time delay itself.

That delay, coupled with an accurate description of how light rays weave through the galaxy cluster, could affect the debate over the Hubble constant. Numerically, the Hubble constant is the speed a distant galaxy recedes from us divided by the distance to that galaxy. For a given galaxy with a known speed, a larger estimated distance therefore leads to a lower number for the Hubble constant.

This number was once in dispute by a factor of two. Today the range is much tighter, from 67 to 73 kilometers per second per megaparsec. But that spread still leaves the universe’s age uncertain. The frequently quoted age of 13.8 billion years corresponds to a Hubble constant of 67.4. But if the Hubble constant is higher, then the universe could be about a billion years younger.
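The link between the Hubble constant and the universe’s age can be roughed out from the Hubble time, 1/H0. This is only a back-of-the-envelope sketch: the quoted 13.8-billion-year age comes from integrating the full expansion history with matter and dark energy, which shaves the naive number down somewhat.

```python
KM_PER_MPC = 3.0857e19  # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16  # seconds in a billion years

def hubble_time_gyr(H0_km_s_mpc):
    """Naive age scale 1/H0, in billions of years,
    for H0 given in km/s/Mpc."""
    return KM_PER_MPC / H0_km_s_mpc / SEC_PER_GYR

print(f"{hubble_time_gyr(67.4):.1f} Gyr")  # about 14.5
print(f"{hubble_time_gyr(73.0):.1f} Gyr")  # about 13.4
```

The sketch reproduces the trend in the text: a higher Hubble constant shortens the Hubble time, implying a younger universe.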

The longer it takes for SN Requiem to reappear, the farther from Earth the host galaxy is — which means a lower Hubble constant and an older universe. So if the debate over the Hubble constant persists into the 2030s, the exact date the supernova springs back to life could help resolve the dispute and nail down a fundamental cosmological parameter.