"Messaging" scientific consensus: ruminations on the external validity of climate-science-communication studies, part 2
Tuesday, June 17, 2014 at 9:28AM
Dan Kahan

This is the second installment of a set on "external validity" problems in climate-science communication studies.

"Internal validity" refers to qualities of the design that support drawing inferences about what is happening in the study. "External vality" refers to qualities of the design that support drawing inferences from the study to the real-world dynamics it is supposed to be modeling.

The external validity problems I want to highlight don't affect only the quality of studies. They affect the quality of the practice of climate-science communication, too, because communicators are relying on externally invalid studies for guidance.

The last entry concerned the use of surveys to measure public opinion on climate change.

This one addresses experimental and other evidence used to ground "social marketing campaigns" that feature scientific consensus. It is the first of two on "messaging" scientific consensus; the next, which I'll post "tomorrow," will examine real-world "messaging" that purports to implement these study findings.

This post, like the last, is from a paper that I'm working on and will post soon (one with some interesting new data, of course!).

* * *

5. “Messaging” scientific consensus

a. The “external validity” question. On May 16, 2013, the journal Environmental Research Letters published an article entitled “Quantifying the consensus on anthropogenic global warming in the scientific literature.” In it, the authors reported that they had reviewed the abstracts of 12,000 articles published in peer-reviewed science journals between 1991 and 2011 and found that “among abstracts expressing a position on AGW, 97.1% endorsed the consensus position that humans are causing global warming” (Cook et al. 2013).

“This is significant,” the lead author was quoted as saying in a press statement issued by his university, “because when people understand that scientists agree on global warming, they’re more likely to support policies that take action on it.” “Making the results of our paper more widely-known,” he continued, “is an important step toward closing the consensus gap”—between scientists who agree with one another about global warming and ordinary citizens who don’t—“and increasing public support for meaningful climate action” (Univ. Queensland 2013).

The proposition that disseminating the results of the ERL study would reduce public conflict over climate change was an empirical claim not itself tested by the authors of the ERL paper.  What sorts of evidence might one use (or have used) to assess it?

Opinion surveys are certainly relevant.  They show, to start, that members of the U.S. general public—Republican and Democrat, religious and nonreligious, white and black, rich and poor—express strongly pro-science attitudes and hold scientists in high regard (National Science Foundation 2014, ch. 7; Pew 2009). In addition, no recognizable cultural or political group of consequence in American political life professes to disagree with, or otherwise dismiss the significance of, what scientists have to say about policy-relevant facts. On the contrary, on myriad disputed policy issues—from the safety of nuclear power to the effectiveness of gun control—members of the public in the U.S. (and other liberal democratic nations, too) indicate that the position that predominates in their political or cultural group is the one consistent with scientific consensus (Kahan, Jenkins-Smith & Braman 2011; Lewandowsky, Gignac & Vaughan 2012).

Same thing for climate change. As the ERL authors noted, surveys show a substantial proportion of the U.S. general public rejects the proposition that there is "scientific consensus" on the existence and causes of climate change. Indeed, the proportion that believes there is no such consensus is effectively the same as the proportion that says it does not "believe in" human-caused global warming (Kahan et al. 2011).

So, the logic goes, all one has to do is correct the misimpression of that portion of the public. Members of the public very sensibly treat as the best available evidence what science understands to be the best available evidence on facts of policy significance. Thus, “when people understand that scientists agree on global warming, they’re more likely to support policies that take action on it” (Univ. Queensland 2013).

But there is still more evidence, of a type that any conscientious adviser to climate-science communicators would want them to consider carefully. That evidence bears directly on the public-opinion impact of “[m]aking the results” of studies like the ERL one “more widely-known” (Univ. Queensland 2013).

The ERL study was not the first one to "[q]uantify[] the consensus on anthropogenic global warming"; it was at least the sixth, the first of which was published in Science in 2004 (Oreskes 2004; Lichter 2008; Doran & Zimmerman 2009; Anderegg et al. 2010; Powell 2012).  Appearing on average once every 18 months thereafter, these studies, using a variety of methodologies, all reached conclusions equivalent to the one reported in the ERL paper.

Like the ERL paper, moreover, each of these earlier studies was accompanied by a high degree of media attention. 

Indeed, the "scientific consensus" message figured prominently in the $300 million social marketing campaign by the Alliance for Climate Protection, the advocacy group headed by former Vice President Al Gore. Gore's "Inconvenient Truth" documentary film and book both prominently featured the 2004 "97% consensus" study published in Science, which Gore characterized as finding that "0%" of peer-reviewed climate science articles disputed the human contribution to global warming.

An electronic search of major news sources finds over 6,000 references to "scientific consensus" and "global warming" or "climate change" in the period from 2005 to May 1, 2013.

There is thus a straightforward way to assess the prediction that "[m]aking the results" of the ERL study "more widely-known" can be expected to influence public opinion.  It is to examine how opinion varied in relation to efforts to publicize these earlier "scientific consensus" studies.

Figure 9 plots the proportion of the U.S. general public who selected "human activities" as opposed to "natural changes in the environment" as the main cause of "increases in the Earth's temperature over the last century" over the period 2003 to 2013 (in this Gallup item, there is no option to indicate rejection of the premise that the earth's temperature has increased, a position a majority or near majority of Republicans tend to select when it is available). The year in which each "scientific consensus" study appeared is indicated on the x-axis, as is the year in which "Inconvenient Truth" was released.


Nothing happened.

Or, in truth, a lot happened.  Many additional important scientific studies corroborating human-caused global warming were published during this time.  Many syntheses of the data were issued by high-profile institutions in the scientific community, including the U.S. National Academy of Sciences, the Royal Society, and the IPCC, all of which concluded that human activity is heating the planet. High-profile, massively funded campaigns to dispute and discredit these sources were conducted too.  People endured devastating heat waves, wild fires, and hurricanes, punctuated by long periods of weather normality.  The Boston Red Sox won their first World Series title in over eight decades.

It would surely be impossible to disentangle all of these and myriad other potential influences on U.S. public opinion on global warming.  But one doesn't need to do that to see that whatever the earlier scientific-consensus "messaging" campaigns contributed, they did not "clos[e] the consensus gap" (Univ. Queensland 2013).

Why, then, would any reflective, realistic person counsel communicators to spend millions of dollars to repeat exactly that sort of “messaging” campaign? 

The answer could be laboratory studies. One (Lewandowsky et al. 2012), published in Nature Climate Change, reported that the mean level of agreement with the proposition "CO2 emissions cause climate change" was higher among subjects exposed to a "97% scientific consensus" message than among subjects in a control condition (4.4 vs. 4.0 on a 5-point Likert scale).  After being advised that "97% of scientists" accept that CO2 emissions increase global temperatures, those subjects also formed a higher estimate of the proportion of scientists who believe that (88% vs. 67%).

Is it possible to reconcile this result with the real-world data on the failure of previous “scientific consensus” messaging campaigns to influence U.S. public opinion?  The most straightforward explanation would be that the NCC experiment was not externally valid—i.e., it didn’t realistically model the real-world dynamics of opinion-formation relevant to the climate change dispute. 

The problem is not the sample (90 individuals interviewed face-to-face in Perth, Australia). If researchers were to replicate this result using a U.S. general population sample, the inference of external invalidity would be exactly the same. 

For “97% consensus” messaging experiments to justify a social marketing campaign featuring studies like the ERL one, it would have to be reasonable to believe that what investigators are observing in laboratory conditions—ones created specifically for the purpose of measuring opinion—tell us what is likely to happen when communicators emphasize the “97% consensus” message in the real world. 

Such a strategy has already been tried in the real world.  It didn’t work.

There are, to be sure, many more things going on in the world, including counter-messaging, than are going on in a "97% consensus" messaging experiment.  But if those additional things account for the difference in the results, then that is exactly why that form of experiment must be regarded as externally invalid: it is omitting real-world dynamics that we have reason to believe, based on real-world evidence, actually matter in the real world.

On this account, the question to be investigated is not whether a "97% consensus" messaging campaign will influence public opinion but why it hasn't over a 10-year trial.  The answer, presumably, is not that members of the public are divided on whether they should give weight to the conclusions scientists have reached in studying risks and other policy-relevant facts. Those on both sides of the climate change debate believe that the other side's position is the one inconsistent with scientific consensus.

The ERL authors’ own recommendation to publicize their study results presupposes public consensus in the U.S. in support of using the best available scientific evidence in policymaking.  The advice of those who continue to champion “97% consensus” social marketing campaigns does, too. 

So why have all the previous highly funded efforts to make “people understand that scientists agree on global warming” so manifestly failed to “close the consensus gap” (Univ. Queensland 2013)?

There are studies that seek to answer exactly that question as well.  They find that culturally biased assimilation—the tendency of people to fit their perceptions of disputed facts to ones that predominate in their cultural group—applies to their assessment of evidence of scientific consensus just as it does to their assessment of all other manner of evidence relating to climate change (Corner, Whitmarsh & Xenias 2012; Kahan et al. 2011).

When people are shown evidence relating to what scientists believe about a culturally disputed policy-relevant fact (e.g., is the earth heating up? is it safe to store nuclear wastes deep underground? does allowing people to carry hand guns in public increase the risk of crime—or decrease it?), they selectively credit or dismiss that evidence depending on whether it is consistent with or inconsistent with their cultural group’s position. As a result, they form polarized perceptions of scientific consensus even when they rely on the same sources of evidence.

These studies imply misinformation is not a decisive source of public controversy over climate change.  People in these studies are misinforming themselves by opportunistically adjusting the weight they give to evidence based on what they are already committed to believing.  This form of motivated reasoning occurs, this work suggests, not just in the climate change debate but in numerous others in which these same cultural groups trade places being out of line with the National Academy of Sciences’ assessments of what “expert consensus” is.

To accept that this dynamic explains persistent public disagreement over scientific consensus on climate change, one has to be confident that these experimental studies are externally valid.  Real-world communicators should definitely think carefully about that.  But because these experiments are testing alternative explanations for something we clearly observe in the real world (deep public division on climate change), they don't suffer from the obvious defects of studies that predict we should already live in a world we don't see.

Part 3

References

Anderegg, W.R., Prall, J.W., Harold, J. & Schneider, S.H. Expert credibility in climate change. Proceedings of the National Academy of Sciences 107, 12107-12109 (2010).

Cook, J., Nuccitelli, D., Green, S.A., Richardson, M., Winkler, B., Painting, R., Way, R., Jacobs, P. & Skuce, A. Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters 8, 024024 (2013).

Corner, A., Whitmarsh, L. & Xenias, D. Uncertainty, scepticism and attitudes towards climate change: biased assimilation and attitude polarisation. Climatic Change 114, 463-478 (2012).

Doran, P.T. & Zimmerman, M.K. Examining the Scientific Consensus on Climate Change. Eos, Transactions American Geophysical Union 90, 22-23 (2009).

Farnsworth, S.J. & Lichter, S.R. Scientific assessments of climate change information in news and entertainment media. Science Communication 34, 435-459 (2012).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Lewandowsky, S., Gignac, G.E. & Vaughan, S. The pivotal role of perceived scientific consensus in acceptance of science. Nature Climate Change 3, 399-404 (2012).

Lichter, S.R. Climate Scientists Agree on Warming, Disagree on Dangers, and Don't Trust the Media's Coverage of Climate Change. Statistical Assessment Service, George Mason University (2008).

National Science Foundation. Science and Engineering Indicators (Wash. D.C. 2014), available at http://www.nsf.gov/statistics/seind14/index.cfm/chapter-7/c7s3.htm.

Oreskes, N. The scientific consensus on climate change. Science 306, 1686-1686 (2004).

Pew Research Center for the People & the Press. Public praises science; scientists fault public, media (Pew Research Center, Washington D.C., 2009).

Powell, J. Why Climate Deniers Have No Scientific Credibility - In One Pie Chart. DESMOGBLOG.com (2012).

Univ. Queensland. Study shows scientists agree humans cause global-warming (2013). Available at http://www.uq.edu.au/news/article/2013/05/study-shows-scientists-agree-humans-cause-global-warming.

Update on Tuesday, July 1, 2014 at 3:20PM by Dan Kahan

If you were stumped in trying to find the secret message encoded in this post, you'll want to read Climate Science Communication & the Measurement Problem (it's encoded there--just do the obvious thing: number every paragraph consecutively & then take the first letter of each one whose number appears in the Fibonacci sequence!)
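For the curious, the decoding rule described above can be sketched in a few lines of Python. This is just an illustration of the rule (number the paragraphs, keep the first letter of each paragraph whose position is a Fibonacci number); the toy input below is made up, not the actual post:

```python
def fib_indices(n):
    """Yield the Fibonacci numbers 1, 2, 3, 5, 8, ... up to n."""
    a, b = 1, 2
    while a <= n:
        yield a
        a, b = b, a + b

def decode(paragraphs):
    """Return the first letters of the paragraphs whose 1-based
    position is a Fibonacci number."""
    keep = set(fib_indices(len(paragraphs)))
    return "".join(p.lstrip()[0] for i, p in enumerate(paragraphs, 1) if i in keep)

# Toy input (not the actual post): positions 1, 2, 3, 5, and 8 are kept.
print(decode(["a", "b", "c", "d", "e", "f", "g", "h"]))  # -> abceh
```

Running `decode` on the consecutively numbered paragraphs of the linked post should spell out the hidden message.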

Article originally appeared on cultural cognition project (http://www.culturalcognition.net/).
See website for complete article licensing information.