You Can’t Trust ‘Climate Economics’

Governments, banks and other institutions have based policies on models unconnected to reality.


WSJ, Roger Pielke Jr., April 29, 2026

The scientific journal Nature in December retracted one of the most influential climate economics papers of the past decade. The paper, by Maximilian Kotz, Anders Levermann and Leonie Wenz, claimed that unmitigated climate change would cost the global economy $38 trillion a year (in 2005 international dollars) by midcentury. It was the second-most-mentioned climate paper by the media in 2024, according to Carbon Brief. The paper was cited by central banks and governments to justify aggressive climate policies.

Then it collapsed. The authors acknowledged that its errors were “too substantial” for a correction. Nature retracted the paper more than 18 months after first learning of its problems.

Most media coverage treated this as an unfortunate aberration in otherwise settled science. The retraction, however, isn’t a one-off. It exposed a crack that runs deep into the foundation of climate research.

Economists Finbar Curtin and Matthew Burgess at the University of Wyoming released a preprint on April 20 that points out broader flaws in current climate-change research, making the Kotz et al. retraction look like small potatoes. Their paper, “The Empirically Inscrutable Climate-Economy Relationship,” starts from the most basic question in climate economics: Can researchers actually measure how climate affects the economy from the historical record?

Their answer is no. That matters enormously, because over the past decade, the field of climate economics has generated some of the most consequential numbers in global finance and governance. Central banks around the world have restructured their risk frameworks around these findings. The Network for Greening the Financial System—a coalition of more than 130 central banks and supervisors, including the European Central Bank and the Bank of England—built its climate scenario guidance on climate economics research. Federal agencies in the U.S., especially under the Obama and Biden presidencies, estimate the “social cost of carbon” when assessing the costs and benefits of proposed environmental policies. This framework has shaped regulations governing appliance standards, pipeline permitting and vehicle emissions. Financial-disclosure frameworks at the Securities and Exchange Commission, and parallel regimes across the European Union and U.K., treated these damage projections as credible scientific findings deserving regulatory weight.

Messrs. Curtin and Burgess show that the method underlying this subfield of economics can’t do what researchers claim it can. The problem, they argue, is that the statistical procedure strips out nearly everything that would allow researchers to identify a climate signal, then mistakes the residual noise for that signal. Lumping together countries with similar average temperatures but entirely different institutions, histories and natural resources, then calculating a single damage relationship for all of them, doesn’t work: the result describes the average while describing no single real place on earth accurately. Such studies use sophisticated math to generate numbers, but those numbers don’t describe anything real.
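The pooling problem is easy to demonstrate with a toy simulation (my illustration, not taken from the Curtin-Burgess paper): two hypothetical countries whose economies respond to temperature in opposite directions yield a pooled “damage relationship” near zero, a number that is true of neither place.

```python
# Illustration (hypothetical data, not from Curtin & Burgess): pooling two
# countries whose economies respond oppositely to temperature produces an
# "average" relationship that describes neither country.
import random

random.seed(0)

def simulate(slope, n=200):
    """Generate (temperature anomaly, growth effect) pairs with noise."""
    data = []
    for _ in range(n):
        t = random.uniform(-2, 2)             # temperature anomaly (deg C)
        g = slope * t + random.gauss(0, 0.5)  # growth response plus noise
        data.append((t, g))
    return data

def fitted_slope(data):
    """Ordinary least-squares slope of growth on temperature."""
    n = len(data)
    mt = sum(t for t, _ in data) / n
    mg = sum(g for _, g in data) / n
    cov = sum((t - mt) * (g - mg) for t, g in data)
    var = sum((t - mt) ** 2 for t, _ in data)
    return cov / var

country_a = simulate(+1.0)   # warming helps this hypothetical economy
country_b = simulate(-1.0)   # warming hurts this one
pooled = country_a + country_b

print(round(fitted_slope(country_a), 2))  # near +1.0
print(round(fitted_slope(country_b), 2))  # near -1.0
print(round(fitted_slope(pooled), 2))     # near 0.0: true of no real place
```

The pooled slope is a mathematically valid average, but acting on it would misjudge the stakes in both countries at once, which is the critique in miniature.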

Messrs. Curtin and Burgess implicate an entire influential field of literature projecting future climate damages. They argue that there’s no way out of this methodological predicament; the future effects of climate change are irreducibly uncertain, and could be small or large. Climate economist Noah Kaufman, who worked under Presidents Obama and Biden in the White House, tweeted that the “implication of this paper is that a lot of policy guidance from climate economists over the last 30 years was built on sand.”

The problem runs upstream too. For more than a decade, researchers built many of their climate projections on a hypothetical standardized scenario called Representative Concentration Pathway 8.5, a vision of the future whose assumptions about energy use required coal consumption to quintuple by 2100. Those assumptions have already diverged sharply from actual energy trends, and we know today that the scenario is implausibly extreme. That conclusion isn’t fringe or even controversial. Yet many scientists continue to emphasize RCP8.5 in climate research, with new studies published daily. The outdated scenario likely persists because of the slow schedule for updating scenario assumptions, the incentive researchers face to publish headline-grabbing results, and a climate advocacy ecosystem built on apocalyptic warnings.

Thousands of studies use it. Projections of flood damage, heat mortality, agricultural disruption and wildfire risk have rested on an implausible baseline that describes an imaginary, modeled future. Governments and financial institutions have treated these projections as the accurate scientific picture of the climate future.

Isn’t science self-correcting? Well, it’s supposed to be, but the reality is more complicated.

In a 2025 paper in the Journal of Applied Meteorology and Climatology, I documented one of the clearest examples of self-correction failure in climate research that I’ve encountered in nearly three decades of research.

An insurance company took my team’s carefully collected hurricane loss data and modified it. Many of those modifications have no documentation and no basis in research. The company also appended data from a different tabulation of losses, oranges to our apples. It posted the flawed “data set” online, where researchers found it and, remarkably, used it as the basis for peer-reviewed papers whose conclusions ran in the opposite direction of the vast majority of the literature on trends in hurricane losses. It wasn’t science that led to those conclusions; it was bad data.

The Intergovernmental Panel on Climate Change and the U.S. National Climate Assessment prominently featured one of these papers in their assessment reports even after peer-reviewed research had pointed out the flawed data set.

When I notified PNAS, which published one of the papers relying on the data set, of my concerns, the journal stood behind the paper. The papers that used the corrupted data set remain in the literature today. Self-correction failed.

There’s no legitimate scientific ambiguity in this case. Either a data set reflects the data it claims to represent or it doesn’t. I documented exactly where it didn’t and published that finding in a peer-reviewed journal. Apparently, no one cared. If the scientific community can’t act on obviously false data—when the problems are carefully documented in the peer-reviewed literature—the prospects for soon correcting course on tens of thousands of flawed studies don’t look promising.

None of this means that climate change isn’t real. Human activity warms the planet. The uncertain risks merit serious discussion and responses. But so-called settled science that is built on flawed data and shielded from correction fails both policymakers and the public. By defending flawed data, scientific institutions erode the public trust they need to solve the world’s most challenging problems.

The cracks in the foundation of policy-relevant climate research are now too big to ignore. It is time for a course correction.

Mr. Pielke is a senior fellow at the American Enterprise Institute and author of The Honest Broker newsletter on Substack.