How Climate Science Lost Its Way on Scenarios
A summary of our 2021 deep dive on what went wrong
THB, Roger Pielke Jr., 13.5.2026
Suppose you ask a meteorologist for a weather forecast to help with planning a picnic for tomorrow. She gives you a detailed, technically sophisticated weather forecast for Mars.
Every equation underlying the forecast is correct. The modeling is state of the art. But the forecast is irrelevant to your picnic planning, because the forecast describes the weather on the wrong planet.
For well over a decade, a large portion of climate research and the use of that research has had a real-world version of this problem. The scenarios driving climate projections — the foundational assumptions about our collective future — described a world so removed from plausible reality that the projections built on them tell us more about a hypothetical future than about the one we are actually navigating.
The news that the most extreme climate scenarios have been officially put out to pasture has now begun to spread far and wide. The scenarios — specifically, RCP8.5, SSP5-8.5, and SSP3-7.0 — were quietly retired last month by the international committee responsible for developing a new basket of official scenarios. It cannot be overstated how significant this change is — the now-obsolete extreme scenarios underpin the work of the Intergovernmental Panel on Climate Change (IPCC), tens of thousands of research papers, government policy and regulation around the world, financial standards for the world’s banks, and much of the media coverage of climate change, from which most people learn about climate science and policy.
Longtime readers of THB, and the outlets where I published before, will know that my colleagues and I have called for the retirement of the extreme scenarios for almost a decade. Now it has happened and the fallout inevitably will be significant.
Back in 2019, my collaborator Justin Ritchie and I, building on his foundational work, set about documenting and making sense of how the climate research community became locked in on scenarios that were fundamentally flawed and implausible — scenarios that distorted our view of the climate future.
The result was a magnum opus paper, coming in at more than 21,000 words, published in 2021:
Pielke Jr, R., & Ritchie, J. (2021). Distorting the view of our climate future: The misuse and abuse of climate pathways and scenarios. Energy Research & Social Science, 72, 101890.
Our paper explains the many mistakes made in the development of climate scenarios that have come to have profound global impact, far beyond the science of earth system modeling.
As we quote one participant in the scenario development process:
“You simply do not realize that the RCPs can start a life of their own.”
I have received dozens of requests to help people better understand the significance of the extreme scenarios, why their retirement is such a big deal, and how we got into this mess in the first place. Today, I summarize Pielke and Ritchie 2021 with the remainder of this post, and for THB paid subscribers at the bottom I offer a PDF of the full text of the paper.
It is a remarkable story.
Scenarios Are the Foundation of Everything
Long-term projections of the climate future depend on much more than just math and physics — they are built upon a foundation of internally consistent stories about plausible futures, called scenarios. Scenarios project answers to questions such as: How many people will there be? How wealthy will they be, and in what occupations? What will power the economy? What technologies will we have for agriculture, industry, transportation and so on?
These scenarios result in projections of greenhouse gas emissions, land use and land cover, aerosols, and many other influences on the climate system that feed into earth system models, which produce projections of variables such as temperature, sea level, drought frequency, and storm intensity. The results of these projections are typically fed into still more models that project climate impacts, economic costs and benefits, and the possible consequences of alternative policy options.
Those projections do not stay in academic journals. Financial regulators in Europe, the United Kingdom, and the United States now require banks and insurers to conduct climate stress tests built directly on this work. Infrastructure engineers consult scenarios when setting design standards for roads, bridges, ports, and water systems. Urban planners use them to decide where to permit development. The insurance and reinsurance industry prices risk from them. Credit rating agencies assess sovereign and corporate debt against the futures that scenarios envision. And much, much more.
When the scenarios are wrong, everything downstream is wrong too. They propagate into trillion-dollar investment decisions, regulatory frameworks, engineering specifications, and government policy.
Climate Modeling and Decision-Making Need Different Things
Our paper documents a central tension: climate modelers and real-world decision makers need fundamentally different kinds of scenarios, and the scenario development process has consistently prioritized the needs of researchers over the needs of decision makers.
Climate modelers want to employ a wide range of inputs to their earth system models — from very low to very high atmospheric greenhouse gas concentrations — so they can map the climate system’s response across a wide range of inputs. An extreme high-end scenario is very useful for research seeking to detect forced signals against the noise of natural variability, and for comparing model outputs across different research groups. Extreme scenarios generate large, clear changes that are easier to identify and analyze.
Whether that extreme scenario is plausible in the real world is, for the climate modeler’s specific technical purpose, largely beside the point.
Decision makers need something completely different. A city engineer designing a flood barrier, a central bank stress-testing a loan portfolio, an insurer pricing hurricane risk — all of them need scenarios grounded in the real world because that is where their decision making takes place.
They often need a credible baseline or “current policy” scenario that offers a defensible account of where the world — as it is today — is likely heading without major new policy interventions. Such a reference scenario should be connected to actual trends in energy technology, economic development, and demographics. For decision makers, an implausible scenario is worse than useless — it produces misleading numbers that can easily lead to misdirected investment, distorted regulation, and flawed planning.
For most of the history of climate science, researchers managed this tension by developing socioeconomic scenarios first, as illustrated in the figure below. Demographers, technologists, economists and others together constructed internally consistent stories about future human society, those stories drove emissions projections, and those projections then fed into climate models as inputs.
The socioeconomic foundation came first; the physical climate science followed. That sequence mattered because it anchored the entire chain in a recognizable account of the human world — one that could at least be interrogated and debated by the engineers, planners, economists, and regulators who ultimately had to act on the results.
The Plausibility Vacuum
The creation of the Representative Concentration Pathway (RCP) scenarios, developed starting in 2005, broke that sequence. The design intent was a “parallel approach”: rather than waiting for socioeconomic scenarios to be built first, climate modelers would receive radiative forcing pathways immediately — atmospheric greenhouse gas concentrations specified over time — so they could begin their long, computationally expensive model runs without delay. The socioeconomic scenarios would follow later and — hopefully — plausibly lead to the radiative forcing trajectories that had already been adopted without consideration of their plausibility.
This approach created what Ritchie and I call a “plausibility vacuum.” Once radiative forcing was severed from its socioeconomic foundation, there was no longer any mechanism to ask whether a given forcing pathway was consistent with a plausible description of human society.
The RCPs floated free. Researchers using them to project impacts on agriculture, public health, ecosystems, and infrastructure could no longer be certain that the human world embedded in their scenarios bore any coherent relationship to the physical climate outcomes they were projecting. For research focused on better understanding the physical sciences of climate and climate change, the plausibility vacuum didn’t matter. But for everyone else, it did.
The parallel approach that created the plausibility vacuum was originally sold as a temporary measure. In practice it has been permanent — consider that since the start of 2025 alone, about 7,500 research articles using RCP8.5 have been published.
When socioeconomic analysis finally caught up years later, it revealed that the most commonly used scenario — the one its creators labeled “business as usual” — required implausible levels of coal consumption and population growth. In fact, the integrated assessment models that produced the other three RCPs — RCP2.6, RCP4.5, and RCP6.0 — could not even reach a radiative forcing of 8.5 watts per square meter under any set of input assumptions.
The selection of RCP8.5 as the highest priority scenario for climate research, and the only one designated as a reference or baseline scenario, was a fateful choice. Remarkably, climate scenario development in 2026 still takes place in a plausibility vacuum.
The RCPs Were Never Comparable with Each Other
The problems run deeper still, because the original four RCPs were not derived from a common framework. Each came from a different integrated assessment model — IMAGE, MiniCAM, AIM, and MESSAGE — developed by a different research group working from different assumptions about population, economics, technology, and land use. Each model had its own internal baseline, against which its own policy interventions produced lower forcing outcomes.
The four scenarios were never apples-to-apples. They were four different fruits from four different trees. Yet, over more than a decade and across tens of thousands of papers, RCP8.5 was treated as where the world was headed, and the other three scenarios — especially RCP4.5 and RCP2.6 — as worlds with climate policy interventions. The 2018 U.S. National Climate Assessment treated RCP8.5 as a reference and RCP4.5 as policy success; both assumptions were wrong.
The RCP designers warned the community about this explicitly when the scenarios were being developed — the scenarios “cannot be treated as a set with consistent internal logic” and the high scenario “cannot be used as a no-climate-policy reference scenario for the other RCPs.” That warning has been comprehensively ignored.
The Scenario That Took Over
One of the four RCPs — RCP8.5, the highest — came to dominate the literature to a degree that is impossible to overstate. RCP8.5 accounted for more than half of all RCP references in the 2018 U.S. Fourth National Climate Assessment, nearly 60 percent in the IPCC’s Special Report on the Ocean and Cryosphere, and about a third of all RCP references in the IPCC Fifth Assessment Report.
By early 2020, researchers were publishing studies invoking RCP8.5 at a rate of roughly 20 per day. So far in 2026, studies using RCP8.5 (or its even more extreme successor, SSP5-8.5) are being published at a rate of ~30 new studies per day.
The dominance of RCP8.5 happened for reasons that are explainable and understandable, even if deeply pathological. Here are some of those factors — certainly not an exhaustive accounting:
- When the four RCPs were published, only RCP8.5 was structured as a baseline — the other three were all constructed as policy intervention scenarios. Researchers who needed a no-policy reference had one option available.
- The IPCC, which assesses published literature, consequently emphasized RCP8.5 in its reports.
- Media coverage amplified the click-friendly, alarming projections from the RCP8.5 studies.
- Researchers may or may not have cared about the plausibility of the scenarios underlying their work, but for those whose careers depend on publication and visibility, it was no doubt a feature rather than a flaw that RCP8.5 generated the most striking results — results attractive to journal editors, climate beat reporters, and university press offices.
Justin and I explain that no one need invoke bad faith or a conspiracy:
The bottom line is that scenario misuse involving the RCPs resulted from myriad factors coming together and reinforcing each other. They range from the ridiculously simple – the common naming scheme for the RCPs, to the incredibly complicated – the collapsing of complexity involved with the notion of baseline scenarios in methodologies of scenario planning, abuse of the scenario probability vacuum, to institutional dynamics – the IPCC assuming the role of orchestrating the very literature that its main function was simply to assess. As such the objective of understanding scenario misuse is not to apportion or assign blame, but to understand how such a pervasive and consequential failure of scientific integrity came to be on such an important topic, how it can be corrected and how it can be avoided in the future.
The Successor Scenarios Did Not Fix the Problem
The Shared Socioeconomic Pathways (SSPs), published in 2017, were supposed to restore the socioeconomic foundation that the RCP process had severed. They did so only partially and too late. The SSP development process actually confirmed that a forcing level of 8.5 W/m² “can only emerge under a relatively narrow range of circumstances” — language that should have triggered a fundamental reorientation of the research agenda. Instead, the desire for continuity with a decade of prior modeling work meant that SSP5-8.5 was designated the highest-priority scenario for the climate model experiments informing the IPCC Sixth Assessment.
The modeling community’s need for continuity — for results that can be compared to earlier model runs — trumped the policy community’s need for scenarios grounded in the real world. That is the dynamic the scenario development process has consistently reproduced, across multiple generations of scenarios.
The SSPs also introduced a new pathology. Researchers began mixing elements from incompatible scenarios — combining the grim, impoverished-world narrative of SSP3 with the extreme forcing level of RCP8.5 to construct a “chimera” scenario, SSP3-8.5, that the SSP developers themselves had flagged as implausible. Dozens of published studies now use this combination to explore worst-case impacts, generating projections built on a future world that no serious socioeconomic analysis supports.
Our paper goes into some detail on the SSPs. Those interested can see more there.
What Needs to Change
Course correction will be difficult. The institutional momentum is enormous — thousands of published papers, active grants, ongoing IPCC cycles, regulatory frameworks already built on RCP8.5-derived projections. But the direction of change is clear; the only question is how long it will take to get back on course.
Over the week or so since I announced that the extreme scenarios have been retired, my social media feeds have been filled with many people, experts among them, offering some version of “nothing to see here.” In a future post I’ll chronicle and correct the many false claims being spun about the retirement of the extreme scenarios.
There is much that needs to be done to correct course in climate science and policy. For the research community focused on meeting the needs of decision makers, near the top of the list: scenario development needs to be more frequent, more anchored to near-term policy-relevant time horizons, and more accurate about what real-world trends actually imply — something closer to how the International Energy Agency updates its scenarios annually in light of current conditions.
Most fundamentally, the needs of decision makers must be given equal weight to the needs of climate modelers in the scenario development process. Those two audiences require different things, and consistently privileging one over the other has produced a decade of science that is technically sophisticated and often policy-irrelevant, if not flat-out misleading.
Even better, exploratory climate research should be spun off from that focused on informing decision makers. We simply cannot kill two birds with one stone.
Why This Matters
Climate change is real. The risks are serious. The case for strong policy action does not depend on whether RCP8.5 is a plausible baseline — all of the arguments I made back in 2010 in The Climate Fix survive the retirement of the extreme scenarios.
Ultimately, successful climate policies require broad public confidence in the integrity of research and a demonstration that science is self-correcting. How the community responds now will go a long way toward determining whether that trust is deserved.
Our 2021 paper is a careful account of how the climate science community ended up in this mess — and what it would take to get out. We are not there yet.