Over the past decade, two very different ways of calculating the rate at which the universe is expanding have come to be at odds, a disagreement dubbed the Hubble tension, after 20th-century astronomer Edwin Hubble. Experts have speculated that this dispute might be temporary, stemming from subtle shortcomings in observations or analyses that will eventually be corrected rather than from some flawed understanding of the physics of the cosmos. Now, however, a new study that relies on an independent measure of the properties of galaxies has strengthened the case for the tension. Quite possibly, it’s here to stay.
For some researchers, the word “tension” fails to convey the problem’s increasing severity.
“We’ve been at this ‘Hubble tension’ level for a long time. At some point the community needs to say, ‘This is more serious,’” says physicist Dan Scolnic of Duke University, who was not associated with the new study. “And the step up from ‘tension’ is ‘crisis.’”
Worsening these woes are the latest results based on observations of the large-scale structure of the universe: dark energy, which is thought to be causing the expansion of the universe to accelerate, may be changing with time. This only serves to aggravate the Hubble tension—or Hubble crisis, if you prefer.
The tension’s roots lie in the two differing values calculated for the Hubble constant, or H0—the expansion rate of today’s universe. One comes from measurements of the cosmic microwave background (CMB), the leftover radiation from when the universe was about 380,000 years old.
The European Space Agency’s Planck satellite mapped the CMB from 2009 to 2013, and cosmologists used that map to nail down the standard model of cosmology, also called LCDM. (L is for lambda, representing dark energy; CDM is for a hypothetical, slow-moving “cold” form of dark matter strongly supported by observations.) In LCDM, dark energy makes up 68 percent of the universe, dark matter 27 percent and normal matter the rest. The Planck team then used features in the CMB to calculate the expansion rate of the early universe; extrapolating that to present times using LCDM, the researchers arrived at an H0 of about 67.5 kilometers per second per megaparsec. (One megaparsec equates to about 3.26 million light-years.)
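A quick way to get a feel for these numbers: the reciprocal of H0, after unit conversion, is the "Hubble time," a characteristic expansion timescale close to (though not exactly equal to) the universe's age. The sketch below is our own illustration, not the Planck team's calculation; the function name and constants are ours.

```python
# Estimate the Hubble time 1/H0 in billions of years from an H0
# quoted in km/s/Mpc. This is only a characteristic timescale; the
# true age of the universe also depends on the expansion history.

KM_PER_MPC = 3.0857e19       # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7   # seconds in one year

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Return 1/H0 in gigayears for H0 given in km/s/Mpc."""
    seconds = KM_PER_MPC / h0_km_s_mpc       # 1/H0 in seconds
    return seconds / SECONDS_PER_YEAR / 1e9  # convert to Gyr

print(f"Planck-like H0 = 67.5 -> {hubble_time_gyr(67.5):.1f} Gyr")
print(f"Local H0 = 73.5      -> {hubble_time_gyr(73.5):.1f} Gyr")
```

A higher H0 means a shorter Hubble time — roughly 14.5 billion years for the Planck-like value versus about 13.3 billion for the local one — which is part of why the discrepancy matters.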
Last month the Atacama Cosmology Telescope (ACT) collaboration, which created a more precise map of the CMB using a ground-based radio telescope in the Chilean Andes, released its latest findings. By combining the CMB measurements with the observed clustering of galaxies and measurements of the ages of stars and other aspects of the universe, the team got a value of about 68.22 km/s/Mpc for H0. While slightly higher than the Planck estimate, it’s “very consistent with it,” says astrophysicist and ACT team member David Spergel of Princeton University and the Simons Foundation.
The other, more direct way of calculating H0 involves using the so-called cosmic distance ladder to make measurements in our local neighborhood rather than at the outer limits of the observable universe.
Climbing the ladder is a laborious process that befits its name. Astronomers step onto the first rung using geometric measurements of distances to nearby stars called Cepheid variables. These stars are “standard candles” that vary in brightness with a periodicity that’s correlated with their absolute luminosity. The distance and periodicity measurements are used to calibrate the intrinsic characteristics of Cepheids.
The next rung of the ladder involves finding distant Cepheids and comparing their intrinsic luminosity (obtained using their periodicity) to their observed luminosity to estimate distances to their host galaxies. Astronomers then determine the velocities at which these galaxies are receding by looking at how much the universe’s expansion has stretched—or “redshifted”—their light toward the red part of the electromagnetic spectrum. Gauge the distances and velocities for a statistically significant sample of galaxies and you’ve arrived at an observed value of H0.
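The last step described above amounts to fitting the Hubble law, v = H0 × d, through the origin of a velocity-versus-distance plot. Here is a toy sketch of that fit with made-up, noise-free galaxies generated from an assumed H0 of 73 km/s/Mpc — an illustration of the arithmetic, not real survey data or any team's pipeline.

```python
# Fit the Hubble law v = H0 * d to (distance, velocity) pairs by
# least squares through the origin: H0 = sum(v*d) / sum(d*d).
# The "galaxies" below are synthetic, generated from an assumed
# H0 of 73 km/s/Mpc purely for illustration.

def fit_hubble_constant(distances_mpc, velocities_km_s):
    """Least-squares slope through the origin, in km/s/Mpc."""
    num = sum(v * d for v, d in zip(velocities_km_s, distances_mpc))
    den = sum(d * d for d in distances_mpc)
    return num / den

TRUE_H0 = 73.0
distances = [20.0, 45.0, 60.0, 85.0, 100.0]    # megaparsecs
velocities = [TRUE_H0 * d for d in distances]  # km/s, noise-free

print(f"Recovered H0 = {fit_hubble_constant(distances, velocities):.1f}")
```

With noise-free synthetic data the fit recovers the input value exactly; real analyses must also contend with measurement errors and galaxies' "peculiar" motions on top of the expansion.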
But Cepheids can only take you so far.
So astronomers also look for extremely bright exploding stars called type Ia supernovae in galaxies that contain Cepheids. Such supernovae also function as standard candles: their absolute luminosity is correlated with how their brightness rises and fades over time, and the Cepheids, whose distances can be calculated, are used to calibrate that absolute luminosity. Astronomers then find type Ia supernovae in other faraway galaxies to estimate their distances. The Supernovae, H0, for the Equation of State of Dark Energy (SH0ES) project, led by Nobel laureate Adam Riess of Johns Hopkins University, has used such techniques to come up with an H0 value of about 73.5 km/s/Mpc.
Using supernovae as standard candles comes with inherent difficulties, however, says astronomer Brent Tully of the University of Hawaii. For one, multiple ground-based telescopes might be used to observe the same supernova, which introduces an element of instrumental uncertainty. Also, “we still don’t know really how supernovae explode,” he says. “There are probably variations [relevant to] its use as a standard candle—and people are aware of this.”
So, to reach even farther-flung galaxies, Tully and his colleagues opted to scale a different cosmic distance ladder that eschews supernovae. It involves starting with yet another standard candle: the tip of the red-giant-branch (TRGB) star. Such stars, with masses ranging from a large fraction of our sun’s to a few times that, are at the very end of their life and have grown ruddy and swollen—thus the “red giant” name. More specifically, they have burned off almost all of their hydrogen, leaving behind a helium core. When the core crosses a precise mass threshold, the helium ignites, giving such stars the same intrinsic luminosity. To accurately calibrate the absolute brightness of such stars, astronomers needed an accurate estimate of the distance to them without using Cepheids. That’s where a galaxy called NGC 4258 became important.
NGC 4258 hosts water-rich clouds called megamasers. (A maser is the microwave equivalent of a laser; “mega” refers to their copious, coherent emission of microwaves, which makes them appear conspicuously bright even across enormous cosmic distances.) Other teams had already measured the velocity of these clouds as they orbit the galaxy’s central supermassive black hole and worked out the geometric distance to NGC 4258. Tully and colleagues used this distance and observations made by the James Webb Space Telescope (JWST) to calibrate the absolute brightness of TRGB stars in NGC 4258. Armed with this information, they then used the JWST to observe and calculate the distances to 14 other galaxies that host TRGB stars.
These galaxies, however, are still relatively nearby, and their velocities are dominated not by the universe’s expansion but by the push and pull of other galaxies in their host clusters. “To measure the Hubble constant, we have to measure distances to galaxies that are several hundred million light-years away, far enough that the influences of gravitational interactions between different galaxies don’t get in the way of our measurement,” says team member Gagandeep Anand of the Space Telescope Science Institute.
This meant climbing still another rung of this new, supernovae-free distance ladder. The team used the previously derived TRGB distances to discern a property of aging galaxies full of TRGB stars known as surface brightness fluctuations (SBF). Because SBF is a statistical property that relies on measurements of ensembles of stars rather than individual ones (which are much harder to distinguish from farther away), it’s well suited for deeper gazes into the cosmos. Anchoring measures of SBF to the TRGB technique allowed Tully and his colleagues to extract distances for galaxies from SBF observations previously made by the Hubble Space Telescope, out to a distance of about 100 megaparsecs. Finally, using those distances to calculate H0, they got a value of about 73.8 km/s/Mpc. The researchers posted their results to the preprint server arXiv.org in February.
“It’s pretty clear there is a very strong tension” between the local estimates of H0 and the CMB-and-LCDM route’s estimates, Riess says.
LCDM assumes that dark energy manifests in the form of the so-called cosmological constant, a sort of repulsive counterforce to gravity for which the energy density would not change over time. And the ACT team’s CMB-based results suggest that LCDM is on very firm footing. “Using the ACT data, we have tested many of the models that have been proposed that could make the Hubble constant larger by changing the physics,” Spergel says. “We constrain all of them and find no evidence for new physics or a higher Hubble constant.”
This contrasts with the latest result from the Dark Energy Spectroscopic Instrument (DESI) team, which collected data about the motion of about 15 million galaxies and combined this with other data to reconstruct the universe’s expansion history. The DESI result suggests that dark energy has a density that evolves with time, which may be evidence for important new physics beyond the confines of LCDM. Also, the DESI analysis shows that allowing dark energy to vary over time—as may be required to explain the team’s data—ends up increasing the Hubble tension rather than easing it. This means physicists must get back to the drawing board, Riess says. “With the DESI results, I imagine many folks will be looking for an idea that can explain both late-time evolution in dark energy and the Hubble tension,” he says.
Scolnic thinks that these odd results—first the renewed Hubble tension, nay, crisis and now the worry about dark energy’s true nature—are powerful hints that something is missing from our best models of the cosmos. “When there’s one thing, you could kind of rule it out as people making a mistake,” he says. “When there’s a second thing, you’re like, ‘Okay, maybe something weird is going on.’”