Yanking me from the jaws of entropy just before they snapped permanently shut on my understanding of the continuing empirical investigation of "consensus messaging," a friend directed my attention to a couple of cool recent studies I’d missed.
For the 2 members of this blog's list of 14 billion regular subscribers who don't know, "consensus messaging" refers to a social-marketing device that involves telling people over & over & over that “97% of scientists” accept human-caused global warming. The proponents of this "strategy" believe that it's the public's unawareness of the existence of such consensus that accounts for persistent political polarization on this issue.
The first new study that critically examines this position is Cook, J. & Lewandowsky, S., Rational Irrationality: Modeling Climate Change Belief Polarization Using Bayesian Networks, Topics in Cognitive Science 8, 160-179 (2016).
Lewandowsky was one of the authors of an important early study (Lewandowsky, S., Gignac, G.E. & Vaughan, S., The pivotal role of perceived scientific consensus in acceptance of science, Nature Climate Change 3, 399-404 (2012)), which found that exposing people to a “97% consensus” message increased their level of acceptance of human-caused climate change.
It was a very decent study, but it relied on a convenience sample of Australians, even the most skeptical members of which were already largely convinced that human activity was responsible for global warming.
Cook & Lewandowsky use representative samples of Australians and Americans. Because climate change is a culturally polarizing issue, their focus, appropriately, was on how consensus messaging affects individuals of opposing cultural predispositions toward global warming.
They report (p. 172) that “while consensus information partially neutralized worldview [effects] in Australia, in replication of Lewandowsky, Gignac, et al. (2013), it had a polarizing effect in the United States.”
“Consensus information,” they show, “activated further distrust of scientists among Americans with high free-market support” (p. 172).
There was a similar “worldview backfire effect” (p. 161) on belief that global warming is happening and caused by humans among Americans “with strong conservative (free-market) values,” although not among Australians (pp. 173-75).
The second new study is Deryugina, T. & Shurchkov, O. (2016), published in PLOS ONE. D&S did two really cool things.
First, they did an experiment to assess how a large (N = 1300) sample of subjects responded to a “consensus message.”
They found that exposure to such a message increased subjects’ estimate of the percentage of scientists who accept human-caused global warming.
However, they also found that [the vast majority of] subjects did not view the information as credible. [see follow up below]
“Almost two-thirds (65%) of the treated group did not think the information from the scientist survey was accurately representing the views of all scientists who were knowledgeable about climate change,” they report.
This finding matches one from a CCP/Annenberg Public Policy Center experiment, results of which I featured a while back, that shows that the willingness of individuals to believe "97% consensus" messages is highly correlated with their existing beliefs about climate change.
In addition, D&S find that relative to a control group, the message-exposed subjects did not increase their level of support for climate mitigation policies.
Innovatively, D&S measured this effect not only attitudinally, but behaviorally: subjects in the study were able to indicate whether they were willing to donate whatever money they were eligible to win in a lottery to an environmental group dedicated to “prevent[ing] the onset of climate change through promoting energy efficiency.”
Subjects exposed to the study’s consensus message were not significantly more likely—in a statistical or practical sense—to revise their support for mitigation policies, as measured by either the attitudinal or behavioral measures featured in the D&S design.
“This is consistent with a model where people look to climate scientists for objective scientific information but not public policy recommendations, which also require economic (i.e. cost-benefit) and ethical considerations,” D&S report (p. 7).
Second, D&S did a follow-up survey: in this part of the study, they re-surveyed the subjects who had received the consensus message six months after the initial message exposure.
Still no impact on the willingness of message-exposed subjects to support mitigation policies (indeed, all the results were negative, Tbl. 7, albeit “ns”).
In addition, whereas immediately after message exposure, subjects had reported higher responses on 0-100 measures of their perceptions of the likelihood of temperature increases by 2050, D&S report that they “no longer f[ound] a significant effect of information”—at least for the most part.
Actually, there was a significant increase in responses to items soliciting belief that temperatures would increase by more than 2.5 degrees Celsius by that time -- and also that they would decrease by that amount.
D&S state they are “unable to make definitive conclusions about the long-run persistence of informational effects” (p. 12). But to the extent that there weren’t any “immediate” effects on support for mitigation policies, I’d say that the absence of any in the six-month follow-up as well rules out the possibility that the effect of the message just sort of percolates in subjects' psyches, blossoming at some point down the road into full-blown support for aggressive policy actions on climate change.
In my view, none of this implies that nothing can be done to promote support for collective action on climate change. Only that one has to do something other -- something much more meaningful -- than march around incanting "97% of scientists!"
But the point is, these are really nice studies, with commendably clear and complete reporting of their results. The scholars who carried them out offer their own interpretations of their data-- as they should-- but demonstrate genuine commitment to making it possible for readers to see their data and draw their own inferences. (One can download the D&S data, too, since they followed PLOS ONE policy to make them available upon publication.)
Do these studies supply what is now the “strongest evidence to date” on the impact of consensus-messaging?
Sure, I’d say so-- although in fact I think there's nothing in the previous "strongest evidence to date" that would have made these findings at all unexpected.
What do you think?
I've "updated" my understanding of Deryugina & Shurchkov -- based on what it actually says & not what I (embarrassingly) thought it did when I read it less carefully than I now have!
Unlike Van der Linden et al (2015), D&S didn't ask their subjects to "estimate" the percentage of climate scientists who believe in human-caused climate change immediately after telling them that the answer is X% (94% in the D&S case, 97% in the case of Van der Linden et al.).
Who knows -- maybe they figured that the responses one extracts by these means add nothing valid to the experiment given the obvious demand-effect confound. So one should simply measure what the impact of message exposure is -- as opposed to the subjects' resulting estimates of the percentage of climate scientists who believe in human-caused climate change -- on subjects' own beliefs & attitudes on climate change (cf. Lewandowsky, Gignac & Vaughan 2012; Cook & Lewandowsky 2016). But one would have to hear the explanation from D&S to know for sure!
I've amended/emended the original blog entry accordingly.
Also, here is a "revised" explanation of what D&S found, & C&L too, from something I'm working on:
The results of studies that examine the impact of “consensus messaging” are mixed. In an important early study, Lewandowsky, Gignac & Vaughan (2012) reported that members of an Australian convenience sample were more likely to accept that human activity is causing climate change after being exposed to a “97% consensus” message. But when Cook and Lewandowsky (2016) conducted a similar cross-cultural study, they found that “consensus information” had a “worldview backfire effect” among U.S. study subjects: individuals “with strong conservative (free-market) values,” they reported, expressed greater “distrust of scientists” and reduced willingness to accept human-caused climate change after being shown a consensus message (pp. 172, 175).
In a recent large-sample study (N = 1300), Deryugina & Shurchkov (2016) found that immediately after being exposed to “consensus messaging,” U.S. study subjects revised upward their assessment of the probability that human activity was causing climate change. Those same subjects, however, did not evince increased support for climate-change mitigation policies. In a follow-up survey six months later (N = 747), those subjects still had not changed their willingness to support mitigation policies. In addition, their assessment of the probability that human activity is causing climate change no longer differed significantly from their pre-message assessment.