I posted something a few days ago about the political sensitivity of communicating scientists' critical assessments of the performance of climate models.
In fact, such assessments are unremarkable. The development of forecasting models for complex dynamics (as Nate Silver explains in his wonderful book The Signal and the Noise) is an iterative process in which modelers fully expect predictions to be off, but also to become progressively better as competing specifications of the relevant parameters are identified and calibrated in response to observation.
In this sort of process, at least, models are not understood to be a test of the validity of the scientific theories or evidence on the basic mechanisms involved. They are a tool for trying to improve the ability to predict with greater precision how such mechanisms will interact, a process the complexity of which cannot be reduced to a tractable, deterministic algorithm or formula. The use of modeling (which involves statistical techniques for simulating "stochastic" processes) can generate tremendous advances in knowledge in such circumstances, as Silver documents.
But such advances take time -- or, in any case, repeated trials, in which model predictions are made, results observed, and models recalibrated. In this recursive process, erroneous predictions are not failures; they are a fully expected and welcome form of information that enables modelers to pinpoint the respects in which the models can be improved.
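The predict-observe-recalibrate loop can be sketched in a few lines of Python. This is a toy of my own devising, not anything from Silver's book or from climate science: the "model" is just y = coef * x, the "world" secretly follows y = 2x, and each trial's erroneous predictions are the information used to nudge the coefficient closer.

```python
# Toy illustration of the recursive loop described above: predict,
# observe the error, recalibrate, repeat. Entirely hypothetical --
# the real models discussed in the post are vastly more complex.

def run_trial(coef, data, lr=0.1):
    """One trial: make predictions, record mean error, adjust coef."""
    mean_abs_err = sum(abs(y - coef * x) for x, y in data) / len(data)
    # Gradient step on squared error: the mispredictions drive the update.
    grad = sum(-2 * x * (y - coef * x) for x, y in data) / len(data)
    return coef - lr * grad, mean_abs_err

# The true process (unknown to our "modeler") is y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
coef, errs = 0.5, []           # start with a badly calibrated model
for _ in range(10):            # ten rounds of prediction and recalibration
    coef, e = run_trial(coef, data)
    errs.append(e)

print(round(coef, 2))          # approaches the true coefficient, 2.0
print(errs[0] > errs[-1])      # True: errors shrink across trials
```

The point the toy makes is the one in the paragraph above: the early trials' large errors are not a refutation of the underlying relationship; they are exactly what lets each successive trial do better.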
Of course, if improvement fails to occur despite repeated trials and recalibrations, that's a serious problem. It might mean the underlying theory about the relevant mechanisms is wrong, although that's not the only possibility. There are phenomena that in their nature cannot be "forecast" even when their basic mechanisms are understood; earthquakes are probably an example--our best understanding of why they happen suggests we'll likely never be able to say when.
Usually, none of this causes anyone any concern. The manifest errors and persistent imprecision of earlier generations of models didn't stop meteorologists from developing what are now weather-forecasting simulations that are a thing of wonder (but that are still being improved!). Our inability to say when earthquakes will occur doesn't lead us to conclude that they must be caused by sodomy rather than shifting tectonic plates after all--or stop us from using the scientific knowledge we do have about earthquakes to improve our ability to protect ourselves from the risks they pose.
Nevertheless, on a culturally polarized issue like climate change, this iterative, progressive aspect of modeling does create an opportunity to generate public confusion. If one's goal is to furnish members of the public with reason to wonder whether the mechanisms of climate change are adequately understood--and to discount the need to engage in constructive action to minimize the risks that climate change poses or the extent of the adverse impacts it could have for human beings--then one can obscure the difference between the sort of experimental "prediction" used to identify mechanisms and the sort of modeling "prediction" used to improve forecasting of the complex ("stochastic") interplay of such mechanisms. Then, when the latter sort of models generate their inevitable--indeed, their expected and even welcome--failures, one can pounce and say, "See? Even the scientists themselves are now having to admit they were wrong!"
Silver highlights this point in the chapter of The Signal and the Noise devoted to climate forecasting, and discusses (with sympathy as well as discernment) the difficult spot that this puts climate scientists and climate-risk communicators in.
As I discussed in my post, this dilemma was posed by an article in the Economist last week that reported on the state of scientific engagement with the performance of climate model predictions on the relationship between CO2 emissions and surface temperatures. Such engagement takes the form of debate -- or, as Popper elegantly characterized it, "conjecture and refutation" -- in which alternative explanations are competitively interrogated against observation in a way calculated to help isolate the more-likely-true from the vast sea of the plausible.
In fact, there was nothing in the article that suggested that the scientists engaged in this form of inquiry disagreed about the fundamentals of climate science. Or that any one of them dissents from the propositions that
(1) climate change (including, principally, global warming) is and has been occurring for decades as a result of human CO2 emissions;
(2) such change has already and will (irreversibly) continue to have various adverse impacts for many human populations; and
(3) the impacts will only be bigger and more adverse if CO2 emissions continue.
(These propositions, btw, don't come close to dictating what policy responses -- one or another form of "mitigation" via carbon taxes or the like; "adaptation" measures; or even geoengineering-- makes sense for any nation or collection of them.)
Maybe (1)-(3) are wrong?
I happen to think they are correct, a conclusion arrived at through my exercise of the faculties one uses to recognize what is known to science. My recognition faculties, of course, are imperfect, as are everyone else's, and, like everyone else's, are less reliable in a polluted science communication environment such as the one that engulfs the climate change issue.
But the point is, whether those propositions are right or wrong isn't something that the debate reported on in the Economist article bears on one way or the other. The scientists involved in that debate agree on that. Any scientist or anyone else who disagrees about these propositions has to stake his or her case on things other than the performance of the latest generation of models in predicting surface temperatures.
Well, what to add to all of this?
Surveying responses to the Economist article, one will observe that some skeptics (though in fact not all; I can easily find internet comments from skeptics who recognize that the debate described in the article doesn't go to fundamentals) are nevertheless trying to cite the debate it describes as evidence that climate change does not pose risks that merit a significant policy response. They are trying to foster confusion, in other words, about the nature of the models that the scientists are recalibrating. Unsurprising.
But it is also clear that some climate-change policy advocates are responding by crediting that same misunderstanding of the models. These responses are denigrating the Economist article (which did not get the point I'm making about models wrong!) as a deliberate effort to mislead, and are defending the predictions of the previous generation of models as if the credibility of the case for adopting policies in response to climate change really does turn on whether the predictions of those models "are too!" correct.
I guess that's not surprising either, but it is depressing.
The truth is, most citizens on both sides of the climate debate are not forming their sense of whether and how our democracy should respond to climate change by following scientific debates over the precision of climate models.
What ordinary citizens do base their view of the climate change issue on is how others who share basic moral & cultural outlooks seem to regard it. The reason there is so much confusion about climate change in our society is that what ordinary citizens see when they take note of the climate change issue is those with whom they share an affinity locked in a bitter, recriminatory exchange with those who don't.
But all the same, it is still a huge mistake for climate-change risk communicators to address these perfectly intelligent and perfectly ordinary citizens with a version of the scientific process that evades, equivocates on, or outright denies that climate scientists are engaged in model recalibration.
In an open society--the only sort in which science can actually take place!--this form of normal science is plain to see. Indignantly denouncing those who accurately report that it's taking place, as if they themselves were liars, embroils those who are trying to communicate risk in a huge, disturbing spectacle rife with all the information about "us vs. them" that makes communicating science here so difficult.
I admit that I believe it is wrong, in itself, to offer any argument in democratic debate that denies the premise that the person whom one is trying to persuade or inform merits respect as a self-governing individual who is entitled to use his or her reason to figure out what the facts are and what to do in response.
But I think it is not merely motivated reasoning on my part to think that the best strategy for countering those who would distort how science works is to offer a reasoned critique of those doing the distorting--not to engage in countervailing distortion.
One reason I believe that is that I have in fact seen evidence of it being done effectively.
Check out Zeke Hausfather's very nice discussion of the issue at the Yale Forum on Climate Change and the Media. It was written before the publication of the Economist article, but my attention was drawn to it by Skeptical Science, which discerningly noted that its thoughtful discussion of the recent debate furnishes a much more constructive response to the Economist news report than an attempt to deny that scientists are doing what scientists do.