Why Staying Under 2°C Was Never Physically Plausible

Abstract

For the last two decades, global climate policy has revolved around a simple headline number: keep warming “well below 2 °C” and preferably 1.5 °C. That sounds like a clear plan. The awkward reality is that staying under 2 °C was never physically plausible on the trajectory we were actually on, once you look at the basic physics and the modelling assumptions underneath the glossy scenarios.

This essay uses James Hansen’s recent “Colorful Chart” – a plot of greenhouse‑gas forcing over time – as a starting point. The chart is not just a picture of how much we’ve warmed; it’s a picture of how quickly we have been turning up the heating, and how that rate of change is now incompatible with the way “2 °C” was sold to the public and to policymakers [ @Hansen2025Colorful ].

The key question here is not only “why is 2 °C out of reach?” but a more civic one: “how did we end up with billions of dollars of policy, activism, and even radical politics oriented around a target that was structurally impossible under its own stated constraints?”

The short version:

  • The physics (radiative forcing and cumulative emissions) put hard limits on what is achievable.
  • The 2 °C target emerged as a convenient political‑scientific compromise, not as a robust scientific feasibility statement [ @Randalls2010 ].
  • The models (integrated assessment models, IAMs) that made 2 °C look possible did so by leaning heavily on speculative “negative emissions” later in the century [ @AndersonPeters2016; @Fuss2016; @Gambhir2019 ].
  • Once you take seriously the limits of those negative emissions, and the actual observed forcing trajectory, staying below 2 °C with any decent probability was never in the cards [ @Larkin2018; @VaughanGough2016 ].

That doesn’t tell us what we should do. But it matters for how we understand where we are: we are not simply “failing to follow the plan”. We were working from a plan that couldn’t happen as advertised.


1. The physical baseline: how fast are we turning up the heating?

The starting point is very simple physics.

Greenhouse gases trap additional heat in the climate system. We usually describe this in terms of radiative forcing – extra watts of energy per square metre of Earth’s surface compared to pre‑industrial conditions.

NOAA’s Annual Greenhouse Gas Index (AGGI) compresses this into a single number: the ratio of today’s long‑lived greenhouse‑gas forcing to its value in 1990 (the Kyoto / first IPCC assessment baseline). An AGGI of 1.0 means “same forcing as 1990”; an AGGI of 1.5 means “50 % more warming influence than in 1990” [ @NOAA_AGGI ].

  • In 2023, the AGGI reached 1.51 – a 51 % increase in effective radiative forcing from long‑lived greenhouse gases since 1990 [ @NOAA_AGGI ].
  • It took roughly 240 years (from the Industrial Revolution to 1990) to go from an AGGI of 0 to 1.
  • It took only 33 years to go from 1 to 1.51.
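
To make the AGGI arithmetic concrete, here is a minimal sketch in Python. The forcing values are approximate numbers read off NOAA’s published series (roughly 2.25 W/m² in 1990 and 3.4 W/m² in 2023 for long‑lived greenhouse gases); treat them, and the helper name, as illustrative assumptions rather than official figures [ @NOAA_AGGI ].

```python
# Minimal sketch of the AGGI arithmetic (illustrative values only).
# Approximate long-lived greenhouse-gas forcing in W/m^2 relative to 1750,
# read off NOAA's published series; the authoritative numbers live on
# the AGGI page itself.
FORCING_1990 = 2.25   # approximate 1990 baseline forcing
FORCING_2023 = 3.40   # approximate 2023 forcing

def aggi(forcing_now: float, forcing_1990: float = FORCING_1990) -> float:
    """AGGI = current LLGHG forcing divided by its 1990 value."""
    return forcing_now / forcing_1990

print(f"AGGI 2023 ≈ {aggi(FORCING_2023):.2f}")  # ≈ 1.51, i.e. +51% since 1990
```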

So the modern story is not just “there’s a lot of CO₂ in the air”. It is: the rate at which we are adding heat to the system has itself gone up sharply, and we did that precisely during the era when we were supposedly organising global policy around staying below 2 °C.

The World Meteorological Organization’s latest greenhouse‑gas bulletin gives another angle on the same reality. By 2024:

  • Globally averaged CO₂ reached 423.9 ± 0.2 ppm, about 152 % of the pre‑industrial level [ @WMO2025GHG ].
  • Methane and nitrous oxide also reached record highs, further increasing radiative forcing [ @WMO2025GHG ].

On the temperature side, the Indicators of Global Climate Change initiative (IGCC) compiles multiple datasets to estimate human‑caused warming:

  • For the 2014–2023 decade, human‑induced warming is about 1.19 °C [1.0–1.4 °C] above 1850–1900 [ @Forster2024IGCC ].
  • The 2024 update finds that the 2015–2024 decade is already around 1.24 °C warmer than pre‑industrial, with individual recent years flirting with or exceeding the 1.5 °C level for short periods [ @Forster2025IGCC ].

Put that together:

We have already used up most of the “distance” between pre‑industrial temperatures and 2 °C, while the forcing itself is still rising quickly.

Any story in which we glide smoothly to a stop just under a hard 2 °C ceiling from here has to assume something very non‑obvious about the physics, the emissions trajectory, or both.


2. What the “Colorful Chart” actually shows

Hansen’s “Colorful Chart” is a compact way of seeing exactly that problem [ @Hansen2025Colorful ].

Very loosely, it shows:

  • Horizontal axis: time, from the late 19th century to the present.
  • Vertical axis: total greenhouse‑gas forcing relative to 1750.
  • Coloured bands: contributions from CO₂, methane, nitrous oxide, and other gases.
  • Key feature: the recent slope – the rate at which forcing is increasing.

In their note, Hansen and co‑authors emphasise that over the last 15 years or so the growth rate of greenhouse‑gas forcing has jumped to about 0.5 W/m² per decade [ @Hansen2025Colorful ]. That might not sound like much, but in climate terms it is extremely rapid:

  • In the mid‑20th century, adding 0.5 W/m² of forcing took many decades.
  • We are now doing that per decade while already sitting at over 420 ppm CO₂ [ @WMO2025GHG ].

This matters for 2 °C because global temperature responds roughly linearly to cumulative CO₂ emissions on policy‑relevant timescales. Higher forcing for longer implies more cumulative emissions and more committed warming.
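
The standard name for that linearity is the transient climate response to cumulative emissions (TCRE). As a rough sketch – using the IPCC AR6 best estimate of about 0.45 °C per 1000 GtCO₂, a figure drawn from outside this essay’s own sources – the relationship is:

$$
\Delta T \;\approx\; \kappa \, E_{\mathrm{cum}},
\qquad \kappa \approx 0.45\ ^{\circ}\mathrm{C}\ \text{per}\ 1000\ \mathrm{GtCO_2}
$$

On that arithmetic, every further 1000 GtCO₂ of cumulative emissions commits us to very roughly another 0.45 °C, regardless of the year‑to‑year schedule.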

The colourful chart is, in effect, a visual reminder that:

  • We did not stabilise forcing and then ask “can we stay below 2 °C?”
  • We accelerated forcing while promising ourselves that 2 °C was still “on the table”.

If your target is to cap warming below 2 °C, and your forcing curve is still bending upwards, then any model that says “this is fine” must be doing something aggressive with future emissions.

That “something” turns out to be large‑scale negative emissions and temporary overshoot.


3. Where did the 2 °C target actually come from?

Politically, 2 °C is often presented as if it dropped out of a scientific optimisation – scientists calculated a “safe” level of warming; policymakers dutifully adopted it. The real history is messier.

Samuel Randalls’ review of the history of the 2 °C climate target traces it back to a mix of:

  • Early heuristics in the 1970s and 1980s, where scientists and economists tried to turn fuzzy ideas of “dangerous interference” into numbers [ @Randalls2010 ].
  • The German WBGU advisory council in the 1990s, which popularised 2 °C as a “guardrail” beyond which impacts would become unacceptably large.
  • The European Union’s 1996 decision to adopt 2 °C as a policy benchmark, well before there was detailed global emissions modelling to back up its feasibility [ @Randalls2010 ].
  • The Copenhagen Accord (2009) and later the Paris Agreement (2015), which elevated 2 °C (and then 1.5 °C) to the central language of global climate governance.

Randalls’ key point is that 2 °C functioned as a political‑scientific compromise:

  • Scientists wanted something that captured the idea of a threshold of “dangerous” climate change.
  • Policymakers wanted a simple, communicable number that could anchor negotiations.
  • Neither side really wanted to own the question “is this practically achievable under current trajectories?” [ @Randalls2010 ].

In social‑science language, 2 °C became a “boundary object”: flexible enough to mean slightly different things to different communities, but stable enough to coordinate activity.

Crucially:

2 °C was never a promise that there existed real‑world, timely mitigation pathways that could keep us below that line under plausible politics and technology. It was a symbol that later got retrofitted with model scenarios.

Those scenarios are where the story gets interesting.


4. How integrated assessment models made 2 °C look feasible

When the IPCC and others want to know “what would it take to meet 2 °C?”, they turn to integrated assessment models (IAMs). These are large optimisation models that combine simple climate physics with economic and energy‑system representations, solving for least‑cost pathways under various constraints [ @Gambhir2019 ].

Very schematically, an IAM does something like this:

  1. Assume trajectories for population, GDP, technology costs, etc.
  2. Define a temperature or concentration target (e.g. “keep warming below 2 °C with 66 % probability”).
  3. Let the model choose an optimal mix of technologies, investments, and sometimes behaviour changes to meet that constraint at minimum cost.

If you ask such a model “please meet 2 °C, starting from the late 2000s, but don’t force emissions cuts that are too abrupt and don’t crash the assumed economic growth path”, it has a straightforward way to make the maths work:

  • Delay strong mitigation,
  • Overshoot the temperature or concentration target,
  • Then deploy very large amounts of negative emissions technologies (NETs) in the second half of the century to pull CO₂ back out.

The most important of these NETs in the models is bioenergy with carbon capture and storage (BECCS) – burning biomass for energy, capturing the CO₂, and storing it underground.
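
A toy calculation makes it clear why the optimiser lands on this structure. This is emphatically not how a real IAM is built; the discount rate, the unit costs, and the helper function below are all invented for illustration.

```python
# Toy illustration -- NOT a real integrated assessment model.
# Shows why discounted least-cost optimisation favours delayed mitigation
# plus negative emissions. Every number here is invented.

DISCOUNT_RATE = 0.05    # per year; typical of cost-optimising model runs
ABATE_COST = 50.0       # assumed $ per tCO2 to cut emissions today
REMOVAL_COST = 100.0    # assumed $ per tCO2 of future CO2 removal (NETs)
TONNES_PER_GT = 1e9     # one GtCO2, expressed in tonnes

def present_value(cost: float, years_from_now: int) -> float:
    """Discount a future cost back to today at DISCOUNT_RATE."""
    return cost / (1.0 + DISCOUNT_RATE) ** years_from_now

# Strategy A: abate 1 GtCO2 now.
cost_abate_now = present_value(ABATE_COST * TONNES_PER_GT, years_from_now=0)

# Strategy B: emit the 1 GtCO2 now, remove it with NETs 50 years later.
cost_remove_later = present_value(REMOVAL_COST * TONNES_PER_GT, years_from_now=50)

print(f"Abate now:          ${cost_abate_now / 1e9:6.1f} bn (present value)")
print(f"Remove in 50 years: ${cost_remove_later / 1e9:6.1f} bn (present value)")
# ~ $50.0 bn vs ~ $8.7 bn: removal "wins" despite costing twice as much
# per tonne, so the optimiser back-loads mitigation onto NETs.
```

Run inside a full model with many periods and technologies, the same logic is what pushes cost‑optimal 2 °C pathways toward overshoot followed by large‑scale removal.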

4.1 The scale of negative emissions in 2 °C scenarios

Sabine Fuss and colleagues reviewed negative‑emissions assumptions in AR5‑era 2 °C scenarios. In those scenarios, by 2100:

  • BECCS alone typically delivers 3.7–12.1 GtCO₂ per year of net removals [ @Fuss2016 ].
  • That is comparable in scale to today’s global CO₂ emissions from the entire power sector.
  • Other NETs (afforestation, soil carbon, direct air capture etc.) add further potential but are much less represented in the models [ @Fuss2016 ].

So when a summary figure in an IPCC report shows a cluster of smooth 2 °C pathways, a huge hidden assumption is:

“By late century we will be running an industrial negative‑emissions sector roughly the size of the current fossil‑fuel system, on top of fully decarbonising everything else.”

Kevin Anderson and Glen Peters’ Science commentary, The trouble with negative emissions, argued in 2016 that this was not a harmless modelling detail but a profound ethical and risk problem [ @AndersonPeters2016 ]. If you build policy around those scenarios and the negative emissions don’t materialise, you have:

  • Spent decades emitting more CO₂ than you otherwise would,
  • Locked in higher committed warming,
  • And shifted a larger mitigation burden onto future generations.

Subsequent work has only hardened that critique:

  • Expert elicitation studies find that the assumed scale and speed of BECCS deployment in IAMs are widely seen as unrealistic, given land, water, governance, and social‑licence constraints [ @VaughanGough2016; @Fuss2016 ].
  • Alice Larkin and colleagues ask directly: what if negative emission technologies fail at scale? They show that, for major emitters, 2 °C rapidly becomes infeasible under even weak equity assumptions, absent massive early mitigation [ @Larkin2018 ].
  • Ajay Gambhir and co‑authors review criticisms of IAMs, highlighting over‑reliance on particular technologies, opaque assumptions, and poor representation of political and behavioural realities [ @Gambhir2019 ].

In other words: the models made 2 °C look doable by building in a technology and governance miracle mid‑century.

4.2 The structural biases inside the models

This is not (mostly) about bad faith. It’s about the structure and incentives baked into IAMs:

  • They typically use discounted economic cost as the objective function. High discount rates make it “optimal” to do less mitigation now and more later; negative emissions are the perfect tool for that.
  • They have relatively coarse representations of politics, institutions, and social conflict. So if the model wants 500 million hectares of land for bioenergy, there is no built‑in “global land rights uprising” module to push back.
  • Scenario‑comparison protocols (e.g. for IPCC assessments) historically rewarded producing many numerically consistent pathways, not stress‑testing their physical or political plausibility [ @Gambhir2019 ].

Once 2 °C became the headline number, and once IAMs became the canonical way to interrogate it, it was almost unavoidable that we would end up with a library of “2 °C pathways” that worked on paper but depended on an extraordinary degree of future negative emissions.


5. The divergence between scenarios and the real world

Now put the pieces together:

  • We have observed forcing and temperatures, which tell us how much of the 2 °C “space” we’ve already used.
  • We have scenario libraries that assume huge negative emissions later to keep 2 °C nominally alive.
  • We have growing evidence that those negative emissions cannot be deployed at the required scale without running into hard physical, ecological, and social limits [ @Fuss2016; @VaughanGough2016 ].

On the physical side:

  • Human‑induced warming is already around 1.2–1.3 °C over the last decade [ @Forster2024IGCC; @Forster2025IGCC ].
  • The AGGI has climbed to 1.51, meaning the warming influence of long‑lived greenhouse gases is now 51 % higher than in 1990 [ @NOAA_AGGI ].
  • CO₂ alone is at 423.9 ppm, 152 % of pre‑industrial levels, with the largest year‑on‑year increase in the instrumental record between 2023 and 2024 [ @WMO2025GHG ].

This is not the profile of a system gently bending towards stabilisation. It is the profile of a system where:

  • The budget framing (“we have X GtCO₂ left for 2 °C”) becomes increasingly academic, because keeping that budget nominally alive on the path we have actually followed requires future removals that are not credible.
  • Even if we instantly adopted the most ambitious mitigation policies ever attempted, we would be doing so from a starting point that is already very close to 1.5 °C and on a trajectory towards 2 °C.

That is what Hansen’s “Colorful Chart” makes visually uncomfortable: the slope of the forcing curve in recent decades is simply not compatible with the easy, non‑disruptive 2 °C narratives that people were sold [ @Hansen2025Colorful ].


6. Why the impossibility was opaque to most people

If all of this is, in principle, transparent – AGGI values are public, IGCC updates are open access, the BECCS numbers are in the literature – why did so many smart, engaged people operate for years as if 2 °C was a realistic guardrail?

A few structural reasons:

6.1 Division of labour and information loss

The climate knowledge system is stratified:

  • Physical climate scientists track forcing, temperatures, and feedbacks.
  • Integrated assessment modellers turn socio‑economic and climate assumptions into emission pathways.
  • Policy communities and NGOs consume the high‑level summaries and graphics.
  • The broader public mostly sees single numbers and stylised charts.

At each interface, detail is stripped away:

  • The fact that AGGI jumped from 1 to 1.5 in just three decades is obvious if you stare at the NOAA time series [ @NOAA_AGGI ], but it rarely appears in policy briefs.
  • The fact that “2 °C pathways” in AR5 involve 3–12 GtCO₂/yr of BECCS in 2100 [ @Fuss2016 ] is clear in the underlying tables, but invisible if all you see is a single blue line labelled “2 °C pathway”.

People were not directly lied to; they were shielded from the assumptions in ways that made over‑optimistic interpretations easy.

6.2 The “central scenario” trap

Humans (and institutions) have a strong habit:

  1. Choose a target (2 °C).
  2. Build models constrained to hit that target.
  3. Treat the resulting trajectories as evidence that the target is feasible.

That pattern is backwards. If you tell a model “you must meet 2 °C” it will, by construction, find some way to do that – even if the way involves clearly heroic assumptions about future technologies, or socio‑political stability that has no precedent.

Critiques like Anderson & Peters’ Science piece were warning about exactly this inversion: a credible real‑world pathway would be the proof of feasibility; the mere existence of a modelled line on a graph is not [ @AndersonPeters2016 ].

6.3 Economic framings that favour delay

Discounted‑cost optimisation has a systematic bias: it makes future action cheap and present action expensive. In that frame:

  • Large negative emissions “later” look like a cost‑effective solution.
  • Very rapid emissions cuts “now” look like economic self‑harm.

So the scenarios that hit 2 °C with minimal near‑term disruption naturally float to the top of the ensemble. But that is a statement about the model’s objective function, not about the real world’s technological or institutional capacity.
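
The arithmetic behind that bias is plain compound discounting. A cost C incurred t years from now, at discount rate r, has present value:

$$
\mathrm{PV}(C, t) \;=\; \frac{C}{(1+r)^{t}},
\qquad \frac{1}{(1.05)^{50}} \approx 0.09
$$

At a 5 % rate, a dollar of mitigation spending fifty years out counts for roughly nine cents today – which is why back‑loaded negative emissions dominate the “optimal” pathways.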


7. What this means in civic terms

There are two easy, but unhelpful, ways to read all this:

  • “We were lied to, it’s all a scam.”
  • “The models were wrong, so nothing is knowable.”

Neither is the point.

A more grounded reading is:

  • The 2 °C target was always a compromise object – politically useful, but only loosely connected to a physically grounded feasibility assessment [ @Randalls2010 ].
  • The modelling frameworks that were then used to back‑fill feasibility were structurally incentivised to rely on future negative emissions [ @AndersonPeters2016; @Fuss2016; @Gambhir2019 ].
  • The actual trajectory of greenhouse‑gas forcing and temperature over the last 30 years has diverged far enough from what would have been needed that “staying below 2 °C” in the intuitive sense is no longer on the table [ @Hansen2025Colorful; @Forster2025IGCC; @NOAA_AGGI; @WMO2025GHG ].

From a civic‑education perspective, the key lesson is not that “we are doomed”. It is that:

When targets, models, and physical reality drift apart, public debate can spend a decade arguing over plans that were never internally coherent.

That matters for how we approach the next round of targets:

  • If we talk about “1.5 °C” or “climate neutrality by 2050” without asking which models say that is possible, with which assumptions about overshoot, negative emissions, and equity, we risk repeating the 2 °C story at a higher temperature.
  • If we want better decisions, we need transparent modelling where key assumptions (about NETs, discounting, political feasibility) are visible and contestable to non‑specialists.

The uncomfortable but necessary move is to accept:

  • We will almost certainly overshoot the earlier guardrails in a sustained way.
  • There is no physically plausible path back to a pre‑crisis climate.
  • Yet the range of futures is still very wide – in terms of total warming, regional impacts, and the distribution of harm and adaptation.

Living with that reality requires a different kind of politics than the story in which “we just have to hit 2 °C”. It means admitting, openly, that what we were told was possible under that banner was never actually consistent with the physical and technological constraints that were already visible at the time.

That admission is not an endpoint; it’s the minimum condition for doing honest climate politics from here.
