This semester I'm teaching a course entitled the Science of Science Communication. I have posted general information and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the fourth such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general.
0. What are we talking about now and why?
"Democratic self-government" consists in one or another set of procedures for translating collective preferences into public policy. Such a system presupposes that citizens’ preferences are diverse—or else there’d be no need for this elaborate mechanism for aggregating them. But such a system also presupposes that citizens have a common interest in making government decisionmaking responsive to the best available evidence on how the world works—or else there’d be no reliable link between the policies enacted and the popular preferences that democratic processes aggregate.
On the basis of this logically unassailable argument, we may take as a given that one aim of science communication is to promote the reliable apprehension of the best available evidence by democratic institutions. This session and the next use the political conflict over climate change to motivate examination of this particular aim of science communication. This week we consider how the science of science communication has been used to understand the influences that have frustrated democratic convergence on the best available evidence on climate change. Next week we look at how the science of science communication has been used to try to formulate strategies for counteracting these influences.
The materials read this week can be understood to present evidence relevant to four hypothesized causes for conflict over climate change: (1) the public’s ignorance of the key scientific facts; (2) the public’s unfamiliarity with scientific consensus; (3) dynamics of risk perception that result in under-estimation of affectively remote (far off, boring, abstract) risks relative to ones that generate compelling, immediate apprehension of danger; and (4) motivated reasoning rooted in the tendency of people to form and persist in perceptions of risk that predominate within cultural or similar types of affinity groups.
The empirical support for these hypotheses ranges from "less than zero" to "respectable but incomplete." Trying to remedy this problem by combining the mechanisms they posit, however, is the least satisfying approach of all.
1. Public ignorance of the key scientific facts

Attributing dissensus over climate change to the public’s “lack of knowledge” of the facts borders on tautology. But one way to treat this proposition as a causal claim rather than a definition is to examine whether changes in the level of public comprehension of the basic mechanisms of climate change are correlated with the level of public agreement that climate change is occurring.
By far the best (i.e., informative, scholarly) studies of “what the public knows” about climate change are two surveys performed by Ann Bostrom and colleagues, the first in 1992 and the second in 2009. In the first, they found the public’s understanding to be riddled with “a variety of misunderstandings and confusions about the causes and mechanisms of climate change”—most notably that a depletion of the ozone layer was responsible for global warming.
Respondents in the follow-up survey did not score an “A,” either, but Bostrom et al. did find that the "2009 respondents were more familiar with a broader range of causes and potential effects of climate change.” In particular, they were more likely to appreciate what Bostrom et al. described as the “two facts essential to understanding the climate change issue”: that “an increase in the concentration of carbon dioxide in the earth’s atmosphere” is the “primary” cause of “global warming,” and that the “single most important source of carbon dioxide in the earth’s atmosphere is the combustion of fossil fuels.”
Nevertheless the 2009 respondents were not more likely than the 1992 respondents to believe that “anthropogenic climate change is occurring” or “likely” to occur. On the contrary, the proportion convinced that climate change was unlikely to occur was higher in 2009. These findings are in line, too, with the basic trends reported by professional polling firms, which have found that the overall proportion of the U.S. population that “believes” in climate change or views it as a serious risk has not changed in the last two decades.
It might seem puzzling that there could be an increase in the proportion of the population that reports being aware that rising atmospheric CO2 levels cause global warming without a corresponding increase in the proportion that perceives warming is occurring or likely to occur.
But in fact there’s a perfectly logical explanation: those who believe climate change is occurring (or will occur) were more likely in 2009 than in 1992 to attribute it to rising CO2 emissions, along with various other things.
The only causal inference one could draw from these correlations would be that the “belief” that climate change is occurring motivates people to learn the “two facts essential to understanding the climate change issue”—not vice versa.
In fact, it is more plausible to think the correlation is spurious: that is, that there is some third influence that causes people both to believe in climate change and to know (or indicate in a survey) that the cause of climate change is the release of CO2 from consumption of fossil fuels.
The Bostrom et al. study supplies a pretty strong clue about what the third variable is. In both 1992 and 2009, respondents who indicated they believed climate change was occurring were more likely to misidentify as potential “causes” of it activities that harm the environment generally (e.g., “aerosol spray cans” and “toxic wastes”). They also were more likely to misidentify as effective climate change “abatement strategies” policies that are otherwise simply “good” for the environment (e.g., “converting to electric cars” and “recycling most consumer goods”).
This pattern suggests that what “caused” belief in climate change at both periods of time was a generic pro-environment sensibility, which also likely caused those who had it to “learn” that CO2 emissions from fossil fuels are also environmentally undesirable and therefore a cause of climate change. Bostrom et al. report regression analyses consistent with this interpretation.
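For readers who like to see the third-variable logic made concrete, here is a minimal simulation sketch. It is my own illustration with invented numbers, not Bostrom et al.’s data: a single latent pro-environment sensibility generates a correlation between “believing” in climate change and “knowing” that fossil-fuel CO2 causes it, even though neither causes the other.

```python
# Toy simulation of a spurious correlation produced by a confounder.
# All quantities are invented for illustration; this is not Bostrom et al.'s data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

pro_environment = rng.normal(size=n)                     # latent sensibility (the confounder)
belief = (pro_environment + rng.normal(size=n)) > 0      # "climate change is occurring"
knowledge = (pro_environment + rng.normal(size=n)) > 0   # "fossil-fuel CO2 is the primary cause"

# The raw correlation makes belief and knowledge look linked...
print(np.corrcoef(belief, knowledge)[0, 1])              # clearly positive (~0.3)

# ...but it shrinks once we condition, even coarsely, on the confounder.
for label, mask in [("low sensibility", pro_environment < 0),
                    ("high sensibility", pro_environment >= 0)]:
    print(label, np.corrcoef(belief[mask], knowledge[mask])[0, 1])
```

The numbers themselves mean nothing; the point is that a correlation between belief and knowledge is exactly what the third-variable account predicts, and that conditioning on a measure of the confounder (as a regression analysis would do more completely) should make it shrink.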
This is really solid social science-- likely the best we've encountered in this course. But what surprises me a lot more than Bostrom et al.’s findings is that so many thoughtful people between 1992 and 2009 were willing to bet (and still are willing to bet) that conflict over climate change is attributable to lack of public understanding.
To be sure, it was obvious in 1992, and continues to be obvious today, that the public doesn’t have a good grasp of much of the basic science relating to climate science. But it seems pretty obvious that it doesn’t have a good grasp of the science relating to zillions of other issues—from pasteurization of milk to administering of dental x-rays—on which there isn’t any political conflict.
Basically, if one observes x & y together and wants to know whether x -> y, then instances of x & ~y count as disconfirming evidence. Here the instances of x & ~y (lack of public understanding of science, but absence of public conflict over science-informed policy) are sufficiently obvious that I would have guessed few people would expect "lack of knowledge" to explain public controversy over climate change.
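To make the x & ~y point concrete, here is a toy tabulation. The counts are invented purely for illustration; they just encode the observation that poorly understood but uncontested issues vastly outnumber poorly understood and contested ones.

```python
# Hypothetical counts of science-informed policy issues, cross-classified by
# whether public understanding is poor (x) and whether there is political conflict (y).
issues = {
    # (poor_understanding, political_conflict): count -- all numbers invented
    (True, True): 3,      # e.g., climate change
    (True, False): 300,   # e.g., pasteurization of milk, dental x-rays, ...
    (False, True): 0,
    (False, False): 50,
}

poorly_understood = sum(count for (x, _), count in issues.items() if x)
p_conflict_given_poor = issues[(True, True)] / poorly_understood
print(f"P(conflict | poor understanding) ~ {p_conflict_given_poor:.2f}")  # ~0.01
```

If poor understanding by itself produced conflict, conflict would be the rule rather than the rare exception among poorly understood issues.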
People have to accept as known by science many more things than they could possibly understand—both as individuals making choices about how to live well and as citizens forming positions on the public good. They can pull that off without a problem for the most part because they are experts in figuring out who the experts are.
If they aren’t converging on the best evidence on climate change, then the problem is much more likely to be some influence that is interfering with their capacity to figure out who knows what about what than their inability to understand what experts know.
2. Public controversy -> Uncertainty over scientific consensus
That’s what makes it plausible to think that the public’s unfamiliarity with scientific consensus might be the real cause of the conflict. Of course, one difficulty with this view is that it, too, must negotiate a narrow passageway between tautology (the logical line between “disagreeing about climate science” and “disagreeing about what climate scientists know” is thin) and begging the question (if the public is unfamiliar with scientific consensus here but not elsewhere, what explains that?). I think the claim can’t squeeze through.
The public is divided over scientific consensus on climate change. But is that the cause of conflict over climate change or a consequence of it?
We read one excellent observational study (McCright, Dunlap & Xiao 2013), but simple correlations are inescapably inconclusive on this issue. Shifting variables from one side to the other of the equals sign can't break a tie between causal inferences of equal strength.
Experimental evidence is not entirely one-sided, but in my view it suggests that dissensus causes public uncertainty over scientific consensus rather than the other way around. Corner, Whitmarsh & Xenias (2012), e.g., found (with UK subjects) that individuals display confirmation bias when assessing news reports asserting or disputing scientific consensus on climate change.
In another study, CCP researchers found that subjects were highly likely to identify a particular scientist as an expert on climate change only when that scientist was depicted as reaching a conclusion that matched the one that predominates in the subjects’ cultural group. If this is how people in the world process information about what “experts” believe, then we can expect them to be culturally polarized on scientific consensus—as they in fact are.
3. Bounded rationality -- "believing it when you feel it" or "feeling it when you believe it"?
The idea that the public is insufficiently concerned about climate change because it relies on heuristic-driven forms of reasoning (what Kahneman calls “system 1”) to assess risk is super familiar. But it is not supported by evidence. In fact, people who are most inclined to use conscious and deliberate (“system 2”) forms of reasoning are not more concerned but rather more culturally polarized over climate change.
Was the “bounded rationality” account ever truly plausible? Sure!
But it was also subject to serious doubt right from the start, because it was clear very early on that the public was divided over climate change along ideological and cultural lines. The bounded rationality story predicts that people in general will fail to worry as much as they should about a "remote, unfelt" risk like climate change -- not that egalitarian communitarians will react with intense alarm and hierarchical individualists with indifference.
From the beginning, commentators who have advanced the bounded-rationality conjecture have forecast that more people could be expected to “believe” in climate change once they started to “feel” it. This is actually a very odd claim. Once one reflects a bit, it should be clear that one can’t actually know that what one is feeling is climate change unless one already believes in it.
- Alice says she knows antibiotics can treat bacterial infections because she “felt better" after the doctor prescribed them for strep throat. Bob says he knows vitamin C cures a cold because he took some and “felt” better soon thereafter.
- Alice says that she has “seen with my own eyes” that cigarettes kill people: her great uncle smoked 5 packs a day and died of lung cancer. Bob reports that he has “seen” with his that vaccines cause autism: his niece was diagnosed as autistic after she got inoculated for whooping cough.
- Alice says that she “personally” has “felt” climate change happening: Sandy destroyed her home. Bob says that he “personally” has “felt” the wrath of God against the people of the US for allowing gay marriage: Sandy destroyed his home. (Cecilia, meanwhile, reports that her house was destroyed by Sandy, too, but she is just not sure whether climate change "caused" her misfortune.)
Bob’s inferences are as good as Alice’s--which is to say, neither of them is making good ones. Neither of them felt or otherwise experienced anything that enabled him or her to identify the cause of what he or she was observing. They first had to believe, on some other basis, that the identified cause was responsible for what they were observing; otherwise they'd have had no idea what was going on.
Maybe on some other basis—like a valid scientific study, say—Alice but not Bob, or vice versa, could be shown to have good grounds for crediting his or her respective attributions of causation. But then it would be the study, and not their or anyone else’s “feeling” of something that supplies those grounds.
Realize, too, that I'm not talking about what it would be rational for Bob or Alice to believe here. I'm talking about the basis for forming plausible hypotheses about the causes of their disagreement about climate change. Because they can't reliably "feel" the answer to the question whether human activity is causing rising sea levels, melting ice caps, increased extreme weather, etc., it is not particularly plausible to think that variance in their perceptions is what is causing them to disagree.
Not surprisingly, empirical studies do not support the “believe it when they feel it” corollary of the bounded rationality hypothesis. In one very good study, e.g., the researchers reported that people who lived in an area that had been palpably affected by climate change were as likely to say “no” or “unsure” as “yes” when asked whether they had “personally experienced” climate change impacts.
People might start in the near future to report that they are “feeling” climate change. But if so, that will be evidence that something other than their sense perceptions convinced them that they should identify climate change as the cause of what they are experiencing. If those who now “don’t believe” in climate change don’t change their minds, they’ll never “personally” experience or feel climate change, even if it kills them.
4. Motivated reasoning
There is strong evidence that culturally or ideologically motivated reasoning accounts for public controversy over climate change. As I’ve mentioned, cultural cognition, a species of motivated reasoning, has been shown to drive perceptions of scientific consensus and to be magnified by higher science literacy and a greater disposition to use system 2 reasoning.
It is true that people’s perceptions of whether it has been “hotter” or “colder” in their region strongly predict whether they think climate change is occurring. But their perception of whether the temperatures have been above or below average is not predicted by whether it actually was hotter or colder in their locale. Instead it is predicted by their ideology and cultural worldviews.
The only thing unsatisfying about the motivated reasoning explanation is that it starts in medias res. One can observe the (disturbing, frightening) effects of motivated reasoning now; but what caused climate change risk perceptions, in particular, to become so vulnerable to this influence to begin with?
I’m not sure whether one needs to know the answer to that question in order to start to use the knowledge associated with such studies to design communication strategies that dissipate confusion and conflict over climate change. But I am sure that without a good answer, the risk that such conflicts will recur will be unacceptably high.
5. “All of the above”

The worst of all explanations for political conflict over climate change is “all of the above”: “the phenomenon is complex; there’s lots going on!” etc.
I think people who make this sort of claim say it because they observe (a) that there are genuinely lots of plausible hypotheses for climate change conflict, (b) genuinely lots of confirming evidence for each of these theories, and (c) indisputably disconfirming evidence, too, for most (I'd be quite willing to believe all) of them. They take the conjunction of (b) and (c) as evidence of “multiple causes,” and “complexity.”
This would be fallacious reasoning, of course. One can nearly always find confirming evidence of any hypothesis; to figure out whether to credit the hypothesis, one has to construct & carry out a test that one has good reason to expect to generate disconfirming evidence in the event the hypothesis is false. Thus, the conjunction of (b) and (c) in regard to any particular plausible hypothesis is simply evidence that the hypothesis in question is false—not that “lots of things are going on.”
In fact, “all of the above” is worse than confused. When one adopts a "theory" that allows one to freely adjust multiple, offsetting mechanisms as necessary to fit observations, one can explain anything one sees. That’s not science; it's pseudoscience.
Commentators have expressed understandable confusion over point 5. To determine how much of the confusion is due to imperfect, confusing expression of the point and how much to imperfect, confused thought on my part, I offer the following addendum:
How to Learn About Climate-Change Risk Perceptions--or Anything Else
1. The (or at least 1) right way (RW)
(a) Form a hypothesis. (b) Construct & carry out an empirical test, the design of which will yield results that support an inference that the hypothesis is either more or less likely to be true than one would otherwise have had reason to believe. (c) Revise one’s assessment of the likelihood of the hypothesis in light of the results. (d) Repeat--forever & ever.
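One way to picture steps (a)-(d) is as Bayesian updating by likelihood ratios. The little sketch below uses assumed numbers, not results from any actual study; it just shows the odds on a hypothesis being revised up or down as successive test results come in.

```python
# Minimal sketch of RW as Bayesian updating: each well-designed test yields a
# likelihood ratio, and one revises the odds on the hypothesis accordingly.
# The likelihood ratios are assumptions chosen purely for illustration.
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds x P(result | H true) / P(result | H false)."""
    return prior_odds * likelihood_ratio

odds = 1.0                    # start agnostic: 1:1 odds on the hypothesis
for lr in [3.0, 0.5, 4.0]:    # results of three successive tests (assumed)
    odds = update_odds(odds, lr)
    print(f"odds = {odds:.2f}  ->  P(H) = {odds / (1 + odds):.2f}")
```

A test whose result was guaranteed to fit the hypothesis no matter what would have a likelihood ratio of 1 and move the odds nowhere, which is the problem with the WWs below.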
2. 2 (of the infinity of) wrong ways (WWs)
(a) Goldilocks. The signature of this WW is the coexistence of mechanisms that generate opposing hypotheses with respect to the phenomenon under study. A theory that posits multiple, opposing mechanisms will be consistent with any and every observation. Prescriptions associated with Goldilocks will always take the form of “do this rather than that up to the point at which that rather than this becomes the thing to do.” E.g., “use vivid, emotionally engaging imagery because abstract, dull information about risk is ignored . . . but don’t over-rely on vivid, emotional imagery because excessively alarming information triggers denial. . . .”
(b) Permanent complexity. This WW sees disconfirming results not as the occasion for revising downward the likelihood of a hypothesis but as confirming a metahypothesis that every true hypothesis has some limited domain. It says, “Fine, fine, your evidence just shows that y is part of what’s going on, but it’s not the only thing—the situation here is very, very complex—there’s a lot going on. . . .” It doesn’t look like Goldilocks at the start. But it inevitably begets it. Indeed, its mission is to become a Goldilocks Leviathan, consisting of the concatenation of an infinite variety of mechanisms each of which has been confirmed at least once and disconfirmed at least once by a test involving another mechanism. Inside the Leviathan there might be a mechanism that has never itself been refuted and that therefore would be judged the “most likely true” under RW. But it will be drowned in a sea of noise.
3. Complex hypotheses
RW doesn’t imply that there cannot be multiple, additive or (even more challengingly) interacting mechanisms. It just insists on a theoretical specification that (a) generates a hypothesis that (b) admits of being shown to be more or less likely conditional on empirical observation upon which (c) one is prepared to revise upward or downward one’s assessment of the likelihood of the hypothesis.
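By way of illustration only: an interaction hypothesis of the sort discussed in section 4 (e.g., that science literacy magnifies the effect of cultural worldviews on climate-change risk perceptions) can be specified so that it stands or falls on a single coefficient. The variable names and simulated data below are assumptions for the sketch, not anyone's actual measures.

```python
# Hedged sketch of a "complex but testable" specification: interacting
# mechanisms expressed as a single interaction term whose sign and size the
# data can confirm or refute. Data are simulated; nothing here is a real result.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000
worldview = rng.normal(size=n)   # e.g., a hierarchy-egalitarianism score (assumed)
literacy = rng.normal(size=n)    # e.g., a science-literacy score (assumed)
risk = (0.2 * worldview + 0.4 * worldview * literacy
        + rng.normal(size=n))    # simulated outcome built to contain an interaction

df = pd.DataFrame({"risk": risk, "worldview": worldview, "literacy": literacy})
fit = smf.ols("risk ~ worldview * literacy", data=df).fit()
print(fit.params[["worldview", "literacy", "worldview:literacy"]])
```

The specification is "complex" in the sense of positing interacting mechanisms, but it still makes a definite prediction -- the sign of the worldview:literacy coefficient -- on which one is prepared to revise one's assessment up or down. That is the difference between a complex hypothesis and a Goldilocks theory.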