An article from The Economist reports on ferment within the climate-modeling community over how to account for the failure of rising global temperatures to keep pace with increasing carbon emissions.
“Over the past 15 years air temperatures at the Earth’s surface have been flat while greenhouse-gas emissions have continued to soar,” the article states.
The world added roughly 100 billion tonnes of carbon to the atmosphere between 2000 and 2010. That is about a quarter of all the CO₂ put there by humanity since 1750. And yet, as James Hansen, the head of NASA’s Goddard Institute for Space Studies, observes, “the five-year mean global temperature has been flat for a decade.”
“[S]urface temperatures since 2005 are already at the low end of the range of projections derived from 20 climate models,” the article continues. “If they remain flat, they will fall outside the models’ range within a few years.”
Naturally, “the mismatch between rising greenhouse-gas emissions and not-rising temperatures is among the biggest puzzles in climate science just now.” Professional discourse among climate scientists is abuzz with competing conjectures: that the existing models uniformly underestimate natural temperature variability, that the oceans are absorbing more heat than anticipated, that the heat-reflective properties of clouds remain poorly understood.
There are lots of things one could say. But here are three.
First, this kind of collective reassessment is not a sign that there’s any sort of defect or flaw in mainstream climate science. What the article is describing is not a crisis of any sort; it is “normal” — as in perfectly consistent with the sort of backing and filling that characterizes the “normal science” mission of identifying, interrogating, and resolving anomalies on terms that conserve the prevailing best understanding of how the world works.
It is perfectly unremarkable, in particular, for the project of statistically modeling dynamic processes to encounter forecasting shortfalls of this kind and magnitude. Model building is inherently iterative. Modelers greet incorrect predictions not as “disconfirming” evidence of their basic theory, as might, say, an experimenter testing competing conjectures about how the world works, but as informative feedback that enables progressively more accurate discernment and calibration of the parameters of an equation (in effect) that can be used to make the implications of that theory discernible and controllable.
Or in any case, this is how things work when things are working. One expects, tolerates, and indeed exploits erroneous forecasts so long as one is making progress, and one doesn’t “give up” unless and until the persistence or nature of such errors furnishes a basis for questioning the fundamental tenets of the model (the basic picture or theory of reality it presupposes), at which point the business of modeling must be largely suspended pending discernment, by empirical means, of a more satisfactory account of the basic mechanisms of the process being modeled.
Which gets me to my second point: the sorts of difficulties climate modelers are encountering aren’t anywhere close to the kinds that would warrant concluding that their underlying sense of how the climate works is unsound. Indeed, nothing in the discrepancy between model forecasts and the temperature record of the last decade furnishes reason to revise the assessment that human carbon emissions pose a serious danger to human well-being, one it is essential to address, a point the Economist article (an outstanding piece of science journalism, in my estimation) is perfectly clear about.
Indeed, if anything, one might view the apparent need to revise slightly downward the range of likely global temperature increases associated with past and anticipated CO₂ emissions as reason to believe there is more profit to be had in investing in mitigation; recent work, based on existing models of the expected warming impact of previous and likely emissions, had suggested that mitigation would be unlikely to avert catastrophic impacts in any case.
Yet here is the third and most troubling point: communicating the significance of these unremarkable shortcomings will pose a tremendous political challenge.
The Economist article doesn’t address this particular issue. But Nate Silver insightfully does in his book The Signal and the Noise.
Like much about Bayesian inference, the idea that being wrong can be as informative as (and often even more informative than) being right doesn’t jibe well with normal intuitions.
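That counterintuitive point can be made concrete with a toy calculation. The sketch below uses Bayes’ rule with purely illustrative numbers (none drawn from any climate dataset or from Silver’s book): when a model assigns low probability to what is actually observed, belief in the model shifts far more than when the model’s confident prediction comes true. The surprise is precisely what carries the information.

```python
# Illustrative Bayesian updating: a surprising observation moves belief
# more than an expected one. All numbers are hypothetical.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule for a binary hypothesis ("the model is sound")."""
    evidence = (prior * likelihood_if_true
                + (1 - prior) * likelihood_if_false)
    return prior * likelihood_if_true / evidence

prior = 0.9  # strong initial credence in the model

# The model confidently predicts an outcome, and it occurs:
# belief barely changes.
after_hit = posterior(prior, likelihood_if_true=0.8, likelihood_if_false=0.5)

# The model assigns the observed outcome low probability:
# belief shifts sharply, i.e., the "miss" is highly informative.
after_miss = posterior(prior, likelihood_if_true=0.1, likelihood_if_false=0.5)

print(round(after_hit, 3))
print(round(after_miss, 3))
```

The asymmetry is general: an outcome a theory deems nearly certain can only weakly confirm it, while an outcome it deems unlikely forces a substantial revision, which is why modelers treat erroneous forecasts as valuable feedback rather than mere failure.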
But for climate change in particular, this difficulty is aggravated by a communication strategy that makes the admission of erroneous prediction extremely perilous. Climate change poses urgent risks. But as Silver points out, the urgent attention it warrants has been purchased in significant part with the currency of emphatic denunciation and ridicule of those who have questioned the existing generation of forecasting models.
No doubt this element of the climate-risk communication strategy was adopted in part out of perceived political necessity. By no means all who have raised questions about those models have done so in bad faith; indeed, because it is only through the interplay of competing conjectures that anything is ever learned in science, those who doubt successful theories make a necessary contribution to their vindication.
But still, many of the actors (mainly nonscientists) who have been most conspicuous in questioning the past generation of models clearly were intent on sowing confusion and division; they were acting in bad faith. To discredit them, climate-risk communicators have repeatedly pointed out that the models these actors were attacking were supported by scientific consensus.
Yet now these critics stand to reap a huge political, rhetorical windfall as climate scientists appropriately take stock of the shortcomings in the last generation of models.
Again, such reappraisal doesn’t mean that the theory underlying those models was incorrect or that there isn’t an urgent need to act to protect ourselves from climate change risks. Modeling errors are inevitable, foreseeable, and indeed informative.
But because the case for crediting that theory and taking account of those risks was staked on the inappropriateness of challenging the accuracy of scientific consensus, climate advocates will find themselves on the defensive.
What to do?
The answer is not simple, of course.
But at least part of it is to avoid unjustified simplification.
Members of the public, it’s true, aren’t scientists; that’s what makes science communication so challenging.
But they aren’t stupid, either. That’s what makes resorting to “simplified” claims that aren’t scientifically defensible or realistic a bad strategy for science communication.