
Recent blog entries
Wednesday
Apr 02, 2014

MAPKIA! Episode 49: Where is Ludwick?! Or what *type* of person is worried about climate change but not about nuclear power or GM foods?

Time for another episode of Macau's favorite game show...: "Make a prediction, know it all!," or "MAPKIA!"!

By now all 14 billion regular readers of this blog can recite the rules of "MAPKIA!" by heart, but here they are for new subscribers (welcome, btw!):

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data.  Then, you, the players, will make predictions and explain the basis for them.  The answer will be posted "tomorrow."  The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation.  (Cogency will be judged, of course, by a panel of experts.) 

Okay—we have a real treat for everybody: a really really really fun and really really really hard "MAPKIA!" challenge (much harder than the last one)!

The idea for it came from the convergence of a few seemingly unrelated influences.

One was an exchange I had with some curious folks about the relationship between perceptions of the risks of climate change, nuclear power, & GM foods.

Actually, that exchange already generated one post, in which I presented evidence (for about the umpteenth time) that GM food risk perceptions are not politically or culturally polarized in the U.S., and indeed, not even part of the same “risk perception family” (that was the new part of that post) as climate and nuclear.

Responding to this person’s (reasonable & common, although in fact incorrect) surmise that GM food risk perceptions cohere with climate and nuclear ones, I had replied that it would be more interesting to see if it were possible to “profile” individuals who are simultaneously (a) climate-change risk sensitive, and (b) nuclear-risk and (c) GM food risk skeptical.

Right away, Rachel Ludwick (aka @r3431) said, “That would be me.”

So I’m going to call this combination of risk perceptions the “Ludwick” profile.

Why should we be intrigued by a Ludwick?

Well, anyone who is simultaneously (a) and (b) is already unusual. That’s because climate change risks and nuclear ones do tend to cohere, and signify membership in one or another cultural group.

In addition, the co-occurrence of those positions with (c)—GM food risk skepticism—strikes me as indicating a fairly discerning and reflective orientation toward scientific evidence on risk.

Indeed, one doesn’t usually see discerning, reflective orientations that go against the grain, culturally speaking.

On the contrary, higher degrees of reflection—as featured in various critical reasoning measures—usually are associated with even greater cultural coherence in perceptions of politically contested risks and hence with even greater political polarization.

A Ludwick seems to be thoughtfully ordering a la carte in a world in which most people (including the most intelligent ones) are consistently making the same selection from the prix fixe menu.

That is the second thing that made me think this would be an interesting challenge.  I am interested in (obsessed with) trying to identify dispositional indicators that suggest a person is likely to be a reflective cultural nonconformist.

Unreflective nonconformists aren’t hard to find. Indeed, being nonconformist is associated with being bumbling and clueless.

As I’ve explained 43 times before, it’s rational for people to fit their perceptions of risk to their cultural commitments, since their stake in fitting in with their group tends to dominate their stake in forming “correct” perceptions of societal risk on matters like climate change, where one’s personal views have no material effect on anyone’s exposure to the risk in question.

Accordingly, failing to display this pattern of information processing could be a sign that one is socially inept or obtuse.  That’s one way to explain why people who are low in critical reasoning capacities tend to be the ones most likely to form group-nonconvergent beliefs on culturally contested risks (although even for them, the “nonconformity effect” isn’t large).

It would be more interesting, then, to find a set of characteristics that indicates a reflective disposition to form truth-convergent (or best-evidence convergent) rather than group-convergent perceptions of such risks.  I haven’t found any yet. On the contrary, the most reflective people tend to conform more, as one would expect if indeed this form of information processing rationally advances their personal interests.

As I said, though, the Ludwick combination of risk perceptions strikes me as evincing reflection.  Because it is also non-conformist with respect to at least two of its elements (climate-risk concerned, nuclear-risk skeptical), being able to identify Ludwicks might lead to discovery of the elusive “reflective non-conformity profile”!

The last thing that influenced me to propose this challenge is another project I’ve been working on. It involves using latent risk dispositions to predict individual perceptions of risk.  The various statistical techniques one can use for such a purpose furnish useful tools for identifying the Ludwick profile.

So everybody, here’s the MAPKIA:

What “risk profiling” (i.e., latent disposition) model would enable someone to accurately categorize individuals drawn from the general population as holding or not holding the Ludwick combination of risk preferences?

Let me furnish a little guidance on what a “successful” entry in this contest would have to look like and the criteria one (that one being me, in particular) might use to assess the same.

To begin with, realize that a Ludwick is extremely rare.  

For purposes of illustration, here’s a scatter plot of the participants in an N = 2000 nationally representative survey arrayed with respect to their global warming and nuclear power risk perceptions, indicated by their responses to the “industrial strength risk perception measure” (ISRPM).

I’ve color coded the respondents with respect to their GM food risk perceptions, measured the same way: blue for “skeptical” (≤ 2), mud brown for “neutral” (3-5), and red for “sensitive” (≥ 6). (Click here if you want to see the scatterplot w/o the text & arrows & such!)

So where is @r3431, aka “Rachel Ludwick”?!

Presumably, she’s one of the blue observations within the dotted circle.

The circle marks the zone for “climate change risk sensitive” and “nuclear risk skeptical,” a space we’ll call the “Ropeik region.”

A “Ropeik,” who will be investigated in a future post, is a type who is very worried about climate change but regards the water used to cool nuclear reactor rods as a refreshing post-exercise drink.  The Ropeik region is very thinly populated--not necessarily on account of radiation sickness but rather on account of the positive correlation (r = 0.47, p < 0.01) between global warming concerns and nuclear power ones.

The correlation  between worrying about global warming & worrying about GM foods is quite modest (r = 0.26, p < 0.01) .

But there definitely is one.

Accordingly, someone who is GM food risk skeptical is even less likely than others to turn up in the Ropeik region (where everyone is very concerned about climate change).

Those are the Ludwicks.  They exist, certainly, but they are uncommon.

Actually, if we define them as I have here in relation to the scores on the relevant ISRPMs, they make up about 3% of the population.
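
If you want to see how a back-of-the-envelope figure like that gets computed, here is a minimal sketch in Python (with pandas). The file name and column names are hypothetical stand-ins for the CCP dataset; the cutoffs simply restate the ones used above (ISRPM ≥ 6 for “sensitive,” ≤ 2 for “skeptical”).

```python
# Minimal sketch: counting "Ludwicks" in a survey dataset.
# File and column names are hypothetical stand-ins for the CCP data.
import pandas as pd

df = pd.read_csv("ccp_risk_survey.csv")  # hypothetical file name

is_ludwick = (
    (df["isrpm_globalwarming"] >= 6)   # climate-change risk sensitive
    & (df["isrpm_nuclear"] <= 2)       # nuclear risk skeptical
    & (df["isrpm_gmfoods"] <= 2)       # GM food risk skeptical
)

print(f"Ludwicks: {is_ludwick.mean():.1%} of the sample")  # roughly 3% per the post
```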

Maybe that is too narrow a specification of a Ludwick? 

For sure, I’ll accept broader specifications in evaluating "MAPKIA!" entries—but only from entrants who offer good accounts, connected to cogent theories of who these  Ludwicks are, for changing the relevant parameters.

Of course, such entrants, to be eligible to win the great prize (either this or something like it) to be awarded to the winner of this "MAPKIA!" would also need to supply corresponding “profiling” models that “accurately categorize” Ludwicks.

What do I have in mind by that?

Well, I’ll show you an example.

I start with a “theory” about “who fears global warming, who doesn’t, and why.”  Based on the cultural theory of risk, that theory posits that people with egalitarian and communitarian outlooks will be more predisposed to credit evidence of climate change, and people—particularly white males—with hierarchical and individualistic outlooks more predisposed to dismiss it. 

Because these predispositions reflect the rational processing of information in relation to the stake such individuals have in protecting their status within their cultural groups, my theory also posits that the influence of these predispositions will increase as individuals become more “science comprehending”—that is, more capable of making sense of empirical evidence and thus acquiring scientific knowledge generally.

A linear regression model specified to reflect that theory explains over 60% of the variance in scores on the global warming ISRPM.

I can then use the same variables—the same model—in a logistic regression to predict the probability that someone is a “climate change believer” (global warming ISRPM  ≥ 6) and the probability someone is a “climate change skeptic” (global warming ISRPM  ≤ 2).

(Someone who read this essay before I posted it asked me a good question: what’s the difference between this classification strategy and the one reflected in the  popular and very interesting “6 Americas” framework? The answer is that the “6 Americas scheme” doesn't predict who is skeptical, concerned, etc. Rather, it simply classifies people on the basis of what they say they believe about climate change. A latent-disposition model, in contrast, classifies people based on some independent basis like cultural identity that makes it possible to predict which global warming "America" members of the general population live in without having to ask them.)

Classifying someone as a "believer" or a "skeptic" whenever he or she has a predicted probability > 0.5 of holding the indicated risk perception, the model enables me to determine whether someone drawn from the general population is either a "skeptic" or a "believer" (your choice!) with a success rate of around 86% for “skeptics” and 80% for “believers.” 
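
Purely for illustration, here is a minimal sketch (Python, statsmodels) of that classification strategy: fit a logistic regression of a binary "skeptic" indicator on latent-disposition predictors, then classify anyone whose predicted probability exceeds 0.5. The file and variable names are hypothetical stand-ins, and the formula is a simplified version of the kind of model described above, not the actual CCP specification.

```python
# Illustrative sketch of the classification strategy: logistic regression of a
# binary "skeptic" indicator on (hypothetical) latent-disposition predictors,
# then classification at a 0.5 predicted-probability cutoff.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ccp_risk_survey.csv")                  # hypothetical file
df["skeptic"] = (df["isrpm_globalwarming"] <= 2).astype(int)

# Simplified stand-in for the cultural-predisposition model described above.
fit = smf.logit(
    "skeptic ~ hierarchy * individualism + white_male + sci_comprehension",
    data=df,
).fit()

predicted = (fit.predict(df) > 0.5).astype(int)          # classify at p > 0.5
print(f"correctly classified: {(predicted == df['skeptic']).mean():.0%}")
```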

How good is that?

Well, one way to answer that question is to see how much better I do with the model than I’d do if the only information I had was the population frequency of skeptics and believers.

“Skeptics” (ISRPM ≤ 2) make up 26% of my general population sample. Accordingly, if I were to just assume that people selected randomly from the population were not “skeptics” I’d be “predicting” correctly 74% of the time.

With the model, I’m up to 86%--which means I’m predicting correctly in about 46% of the cases in which I would have gotten the answer wrong by just assuming everyone was a nonskeptic.

“Believers” (global warming ISRPM ≥ 6) make up 35% of the sample.  Because I can improve my “prediction” proficiency relative to just assuming everyone is a nonbeliever from 65% to 80%, the model is getting the right answer in 42% of the cases in which I’d have gotten the wrong one if the only guide I had was the “believer” population frequency.

Those measures—46% and 42%--reflect the “adjusted count R2” measure of the “fit” of my classification model.
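
For concreteness, here is how that arithmetic works out; the function below just restates the calculation described above (the fraction of the baseline's errors that the model eliminates), with the figures reported in this post plugged in.

```python
# "Adjusted count R^2": the share of cases the model classifies correctly
# beyond always guessing the most common category, expressed as a fraction
# of the cases that baseline gets wrong.
def adjusted_count_r2(correct_rate: float, modal_share: float) -> float:
    """(correct - baseline) / (1 - baseline), where baseline = share of the modal category."""
    return (correct_rate - modal_share) / (1.0 - modal_share)

# "Skeptics" are 26% of the sample, so always guessing "nonskeptic" is right
# 74% of the time; the model is right about 86% of the time.
print(adjusted_count_r2(correct_rate=0.86, modal_share=0.74))  # ~0.46
# "Believers": baseline 65% ("nonbeliever"), model 80%  ->  ~0.43 (reported as ~42%)
print(adjusted_count_r2(correct_rate=0.80, modal_share=0.65))
```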

There are other interesting ways to assess the predictive performance of these models, too—and likely I’ll say more about that “tomorrow.”

But “how good” a predictive model is is a question that can be answered only with reference to the goals of the person who wants to use it.  If it improves her ability relative to “chance,” does it improve it enough, & in the way one cares about (reducing false positives vs. reducing false negatives), to make using it worth her while?

But for now, consider GM food risk perceptions.

As I’ve explained a billion times, one won’t do a very good job “profiling” someone who is GM food risk sensitive or GM food risk-skeptical by assimilating GM food risks to the “climate change risk family.”

If I use the same latent predisposition model for GM food risk perceptions that I just applied for global warming risk perceptions, I explain only 10% of the variance in the GM food ISRPM (as opposed to over 60% for global warming ISRPM).

When I try to predict GM food risk “skeptics” (ISRPM ≤ 2) and GM food risk “believers” (ISRPM ≥ 6), I end up with correct-classification rates of 79% and 71%, respectively.

That might sound good—but it isn’t.

In fact, that sort of “predictive proficiency” sucks. 

GM food “skeptics” make up 22% of the population—meaning that 78% of people are not skeptical.  My 79% predictive accuracy rate has an adjusted count R2 of 0.03, and is likely to be regarded as pitiful by anyone who wants to do anything, or at least anyone who wants to do something besides publish a paper with “statistically significant” regression coefficients (I've got a bunch in my GM food "skeptic" model--BFD!), on the basis of which he or she misleadingly claims to be able to “explain” or “predict” who is a GM food risk skeptic!

For GM food “believers,” my 71% predictive accuracy compares with a 70% population frequency (30% of the sample are “believers,” defined as ISRPM ≥ 6).  An adjusted count R2 of 0.02: Woo hoo!  (Note again that my model has a big pile of “statistically significant” predictors—the problem is that the variables are predicting variance based on combinations of characteristics that don’t exist among real people).

In sum, we need a different theory, and a different model, of who fears what & why to explain GM food risk perceptions.

I don’t have a particularly good theory at this point.

But I do have a pile of hunches.

They are ones I can test, too, with potential indicators that I’ve featured in previous posts.

In constructing their Ludwick models, "MAPKIA!" entrants might want to consult those posts, too.

I’ll say more about how I would use them to predict GM food risks “tomorrow,” when I post my report (or the first of my reports) on the MAPKIA entries.

So …on your marks… get set …

MAPKIA!

 


Friday
Mar 28, 2014

I ♥ NCAR/UCAR--because they *genuinely* ♥ communicating science (plus lecture slides & video)

Spent a great couple of days at NCAR/UCAR last week, culminating in a lecture on "Communicating Climate Science in a Polluted Science Communication Environment."

Slides here. Also, an amusing video of the talk here—one that consists almost entirely of a forlorn-looking lectern.

There are 10^6 great things about NCAR/UCAR, of course.

But the one that really grabbed my attention on this visit is how much the scientists there are committed to the intrinsic value of communicating science.

They want people —decisionmakers, citizens, curious people, kids (dogs & cats, even; they are definitely a bit crazy!)—to know what they know, to see what they see, because they recognize the unique thrill that comes from contemplating what human beings, employing science’s signature methods of observation and inference, have been able to discern about the hidden workings of nature.

Yes, making use of what science knows is useful—indeed, essential—for individual & collective well-being.

That’s a very good reason, too, to want to communicate science under circumstances in which one has good justification (i.e., a theory consistent with plausible behavioral mechanisms and supported by evidence) to believe that not knowing what’s known is causing people to make bad decisions.

But if you think that “knowing what’s known” is how people manage to align their decisionmaking with the best available evidence in all the domains in which their well-being depends on that; that their “not knowing” is thus the explanation for persistent states of public conflict over the best evidence on matters like climate change or nuclear power or the HPV vaccine; and that communicating what’s known to science is thus the most effective way to dispel such disputes, then you actually have a very very weak grasp of the science of science communication.

And if you think, too, that what I just wrote implies there is “no point” in enabling people to know, then you have just revealed that you are merely posing—to others, & likely even to yourself!—when you claim to care about science communication and science education.

I spent hours exchanging ideas with NCAR scientists--including ideas about how to use empirical evidence to perfect climate-science communication--and not even for one second did I feel I was talking to someone like that.

 

 

 

Thursday
Mar 27, 2014

The sources of evidence-free science communication practices--a fragment...

From something I'm working on...

Problem statement. Our motivating premise is that advancement of enlightened conservation policymaking  depends on addressing the science communication problem. That problem consists in the failure of valid, compelling, and widely accessible scientific evidence to dispel persistent public conflict over policy-relevant facts to which that evidence directly speaks. As spectacular and admittedly consequential as instances of this problem are, states of entrenched public confusion about decision-relevant science are in fact quite rare. They are not a consequence of constraints on public science comprehension, a creeping “anti-science” sensibility in U.S. society, or the sinister acumen of professional misinformers.  Rather they are the predictable result of a societal failure to integrate two bodies of scientific knowledge: that relating to the effective management of collective resources; and that relating to the effective management of the processes by which ordinary citizens reliably come to know what is known (Kahan 2010, 2012, 2013).

The study of public risk perception and risk communication dates back to the mid-1970s, when Paul Slovic, Sarah Lichtenstein, Daniel Kahneman, Amos Tversky, and Baruch Fischhoff began to apply the methods of cognitive psychology to investigate conflicts between lay and expert opinion on the safety of nuclear power generation and various other hazards (e.g., Slovic, Fischhoff & Lichtenstein 1977, 1979; Kahneman, Slovic & Tversky 1982).  In the decades since, these scholars and others building on their research have constructed a vast and integrated system of insights into the mechanisms by which ordinary individuals form their understandings of risk and related facts. This body of knowledge details not merely the vulnerability of human reason to recurring biases, but also the numerous and robust processes that ordinarily steer individuals away from such hazards, the identifiable and recurring influences that can disrupt these processes, and the means by which risk-communication professionals (from public health administrators to public interest groups, from conflict mediators to government regulators) can anticipate and avoid such threats and attack and dissipate them when such preemptive strategies fail (e.g., Fischhoff & Scheufele 2013; Slovic 2010, 2000; Pidgeon, Kasperson & Slovic 2003; Gregory, McDaniels & Field 2001; Gregory & Wellman 2001).

Astonishingly, however, the practice of science and science-informed policymaking has remained largely innocent of this work.  The persistently uneven success of resource-conservation stakeholder proceedings, the sluggish response of local and national governments to the challenges posed by climate-change, and the continuing emergence of new public controversies such as the one over fracking—all are testaments (as are myriad comparable misadventures in the domain of public health) to the persistent failure of government institutions, NGOs, and professional associations to incorporate the science of science communication into their efforts to promote constructive public engagement with the best available evidence on risk.

This disconnection can be attributed to two primary sources.  The first is cultural: the actors most responsible for promoting public acceptance of evidence-based conservation policymaking do not possess a mature comprehension of the necessity of evidence-based practices in their own work.  For many years, the work of conservation policymakers, analysts, and advocates has been distorted by the more general societal misconception that scientific truth is “manifest”—that because science treats empirical observation as the sole valid criterion for ascertaining truth, the truth (or validity) of insights gleaned by scientific methods is readily observable to all, making it unnecessary to acquire and use empirical methods to promote its public comprehension (Popper 1968).

Dispelled to some extent by the shock of persistent public conflict over climate change, this fallacy has now given way to a stubborn misapprehension about what it means for science communication to be truly evidence based.  In investigating the dynamics of public risk perception, the decision sciences have amassed a deep inventory of highly diverse mechanisms (“availability cascades,” “probability neglect,” “framing effects,” “fast/slow information processing,” etc.). Using these as expositional templates, any reasonably thoughtful person can construct a plausible-sounding “scientific” account of the challenges that constrain the communication of decision-relevant science (e.g., XXXX 2007, 2006, 2005). But because more surmises about the science communication problem are plausible than are true, this form of story-telling cannot produce insight into its causes and cures. Only gathering and testing empirical evidence can.

Sadly, some empirical researchers have contributed to the failure of practical communicators to appreciate this point. These scholars purport to treat general opinion surveys and highly stylized lab experiments as sources of concrete guidance for actors involved in communicating science relevant to risk-regulation or related policy issues (e.g., XXX 2009). Such methods have yielded indispensable insight into general mechanisms of consequence to science communication. But they do not—because they cannot—furnish insight into how to engage these mechanisms in particular settings in which science must be communicated.  The number of plausible surmises about how to reproduce in the field results that have been observed in the lab likewise exceeds the number that are true. Again, empirical observation and testing are necessary—in the field, for this purpose.  The small number of researchers willing to engage in field-centered research, and the unwillingness of many to acknowledge candidly the necessity of doing so, have stifled the emergence of a genuinely evidence-based approach to the promotion of public engagement with decision-relevant science (Kahan 2014).

The second source of the disconnect between the practice of science and science-informed policymaking, on the one hand, and the science of science communication, on the other, is practical: the integration of the two is constrained by a collective action problem.  The generation of information relevant to the effective communication of decision-relevant science—including not only empirical evidence of what works and what does not but also practical knowledge of the processes for adapting and extending it in particular circumstances—is a public good.  Its benefits are not confined to those who invest the time and resources to produce it but extend as well to any who thereafter have access to it.  Under these circumstances, it is predictable that producers, constrained by their own limited resources and attentive only to their own particular needs, will not invest as much in producing such information, and in a form amenable to the dissemination and exploitation of it by others, as would be socially desirable.  As a result, instead of progressively building on their successive efforts, each initiative that makes use of evidence-based methods to promote effective public engagement with conservation-relevant science will be constrained to struggle anew with the recurring problems.

This proposal would attack both sources of the persistent inattention to the science of science communication....

References

Fischhoff, B. & Scheufele, D.A. The science of science communication. Proceedings of the National Academy of Sciences 110, 14031-14032 (2013).

Gregory, R. & McDaniels, T. Improving Environmental Decision Processes. In Decision making for the environment: social and behavioral science research priorities (eds. G.D. Brewer & P.C. Stern) 175-199 (National Academies Press, Washington, DC, 2005).

Gregory, R., McDaniels, T. & Fields, D. Decision aiding, not dispute resolution: Creating insights through structured environmental decisions. Journal of Policy Analysis and Management 20, 415-432 (2001).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D. Making Climate-Science Communication Evidence Based—All the Way Down. In Culture, Politics and Climate Change: How Information Shapes Our Common Future, eds. M. Boykoff & D. Crow. (Routledge Press, 2014).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahneman, D., Slovic, P. & Tversky, A. Judgment under uncertainty: heuristics and biases (Cambridge University Press, Cambridge; New York, 1982).

Pidgeon, N.F., Kasperson, R.E. & Slovic, P. The social amplification of risk (Cambridge University Press, Cambridge; New York, 2003).

Popper, K.R. Conjectures and refutations: the growth of scientific knowledge (Harper & Row, New York, 1968).

Slovic, P. The feeling of risk: new perspectives on risk perception (Earthscan, London; Washington, DC, 2010).

Slovic, P. The perception of risk (Earthscan Publications, London; Sterling, VA, 2000).

Slovic, P., Fischhoff, B. & Lichtenstein, S. Behavioral decision theory. Annu Rev Psychol 28, 1-39 (1977).

Slovic, P., Fischhoff, B. & Lichtenstein, S. Rating the risks. Environment: Science and Policy for Sustainable Development 21, 14-39 (1979).

Monday
Mar 17, 2014

Science comprehension ("OSI") is a culturally random variable -- and don't let anyone experiencing motivated reasoning tell you otherwise!

Here I've simply plotted "science comprehension" score histograms for the four segments of a general population sample whose members have been divided in relation to the means of their scores on the "hierarchy-egalitarian" & "individualism-communitarianism" cultural worldview scales.
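
For anyone who wants to reproduce the gist of the figure with their own data, here is a rough sketch (Python, with pandas and matplotlib); the file and column names are hypothetical stand-ins for the CCP measures.

```python
# Sketch: split respondents into four groups at the means of the two cultural
# worldview scales, then plot a science-comprehension histogram for each group.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ccp_risk_survey.csv")  # hypothetical file

hier = df["hierarchy"] >= df["hierarchy"].mean()
indiv = df["individualism"] >= df["individualism"].mean()
groups = {
    "Hierarchical Individualist": hier & indiv,
    "Hierarchical Communitarian": hier & ~indiv,
    "Egalitarian Individualist": ~hier & indiv,
    "Egalitarian Communitarian": ~hier & ~indiv,
}

fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
for ax, (label, mask) in zip(axes.flat, groups.items()):
    ax.hist(df.loc[mask, "osi_score"], bins=20, density=True)
    ax.set_title(label)
for ax in axes[-1]:
    ax.set_xlabel("Ordinary Science Intelligence (OSI) score")
plt.tight_layout()
plt.show()
```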

I suppose the figure could itself be used to measure motivated reasoning: If you perceive that one of these groups varies meaningfully in the disposition or aptitude that this particular scale measures, you might well be experiencing it!

But that won't make you any different from anyone else.  Rather than being embarrassed,  if you manage to catch yourself displaying this tendency, then you should be proud of yourself, for you'll be demonstrating a very unusual form of self-reflection--one much rarer than a "high" level of science comprehension.

The experience of catching yourself in this way will also likely fill you with apprehension over the number of times that you've no doubt experienced this pattern of thinking and did not catch yourself. Cultivating that sort of anxiety can't hurt either if you are trying to sharpen your powers of self-reflection -- or just trying to avoid becoming a boorish cultural sectarian whose interest in promoting public engagement with science is just a mask you don as you gear up for illiberal forms of status competition... 

BTW, this figure features the same "ordinary science intelligence" measure (I prefer that phrasing to "science literacy," which to me connotes an inventory of substantive bits of knowledge divorced from comprehension of & facility with the form of inferential reasoning needed to recognize valid science) that I've been futzing with for a while (despite its propensity to lead me into Alice-in-Wonderland style misadventures).

It combines the 11-item NSF indicator battery plus a 10-item "long cognitive reflection test."  It has the qualities that one would expect in/demand of a valid science comprehension measure, & has been productive of some pretty interesting insights into when people who have opposing cultural identities but who share a demonstrable proficiency in critical reasoning are more likely to converge or instead more likely to disagree than are less "science comprehending" members of their groups about a fact that admits of scientific investigation (e.g., the natural history of human beings or the reality and causes of climate change or GM foods or fracking or childhood vaccines). 

Maybe I'll write more "tomorrow" about the interesting psychometric properties of this OSI measure....

 

Thursday
Mar 13, 2014

Fracking freaks me out

So I said in my post “yesterday” that I’d share a “freakout” experience I had where data didn’t seem to fit my expectations in an area in  which I like to think I’m at least moderately well informed!

It occurred when I made a 3-day visit to the Ohio State University last week.

I had a great time!

I got to learn about the awesome convergence of interest across programs there in the science of science communication, reflected in the new initiative on Behavioral Decision Making.

I got to have lots of great conversations with amazing scholars, including (but not limited to!) my collaborator Ellen Peters, Hal Arkes, and Erik Nisbet.

And I got to do a workshop on “Motivated System 2 Reasoning,” in which I got a ton of good feedback from an audience that was as diverse in their backgrounds and perspectives as they were enthusiastic to engage (slides here).

Now, the freakout part occurred in connection with my participation in a panel on fracking.

The panel was a “town meeting”-style event produced as part of the University’s “Health Science Frontiers” series. In the series, public-health and science issues are introduced by a panel discussion and then opened up for a broader discussion with audience members, all of whom are assembled in the studio of the University’s public television affiliate, which records the event for later broadcast.

Besides me, the panel for the fracking show included Mike Bisesi, a super-smart OSU environmental scientist, and Mark Somerson, a reporter for the Columbus Dispatch who has been doing very extensive, fine-grained coverage of public controversy over the expansion of fracking in Ohio.  The moderator, who displayed amazing craft!, was Mike Thompson, WOSU’s news director.

Not really sure what I could add to the discussion, I figured I’d at least be sure to make the point that most members of the general public don’t know what fracking is.

I mean this quite literally. 

A George Mason/Yale Climate Change Communication Project study found that 55% of the respondents in a nationally representative poll reported having heard “nothing” (39%) or only “a little” (16%) about fracking, and only 31% reported knowing either a “little” (22%) or a “lot” (9%).

These sorts of self-report measures, moreover, are known to overstate what people actually know about an emerging technology.

In a recent Pew survey, 51% were able to select the right answer to the question “what natural resource is extracted in a process known as ‘fracking’ ”—in a multiple-choice question in which the likelihood of getting the right answer by simply choosing randomly would have been 25%.  We can infer the percent who actually knew the answer, then, was lower than 50%--& surely no more than 46% (assuming, over generously, odds of 9:1 that any respondent who got the right answer knew rather than “guessed”).
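
The arithmetic behind that bound (and, for comparison, the textbook correction for blind guessing on a four-option item) looks like this; both are back-of-the-envelope illustrations, not figures taken from the Pew report.

```python
# Back-of-the-envelope arithmetic for the "no more than 46%" bound, plus the
# standard correction-for-guessing estimate as a point of comparison.
p_correct = 0.51   # share choosing the right answer in the Pew survey
n_options = 4

# The post's generous assumption: 9:1 odds that a correct answer reflects
# knowledge rather than a lucky guess.
upper_bound = p_correct * 0.9
print(f"upper bound on 'actually knew': {upper_bound:.0%}")   # ~46%

# Standard correction: assume everyone who didn't know guessed at random.
knew = (p_correct - 1 / n_options) / (1 - 1 / n_options)
print(f"correction-for-guessing estimate: {knew:.0%}")        # ~35%
```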

The lack of familiarity with an emerging technology is a good thing to keep in mind when a group of people who are well-informed about and highly interested in a novel technology get together to talk about (among other things) “public attitudes” towards it.

Precisely because those people are well-informed and highly-interested, they will have been exposed to a very unrepresentative sample of opinions toward the technology, and are vulnerable for that reason to grossly overestimate the extent to which the risks it poses are a genuine matter of public dispute. 

This effect, moreover, will be magnified if those people, disregarding the biased nature of the samples that are the basis of their own observations, talk a lot among themselves and credit one another’s reports about who believes what and why about what is in fact a boutiquey issue on which most ordinary people don’t have views one way or the other.

This was one of the points I stressed in “yesterday’s” post, which noted the echo-chamber amplification of misimpressions about the extent and partisan nature of conflict over GM foods.  People who know a lot about it—particularly ones who write about it for the media and on-line—take it as gospel that the public is “polarized,” when in fact it just isn’t.

Why would they be? Most of them have no idea what GM foods are either (not to mention that they are consuming platefuls of them at pretty much every meal).

I anticipated that people attending the fracking session would likely be under the impression that “fracking” is a controversial issue that has the public up in arms, and I’d be able to say, “well, wait a second . . . .” In fact, I wasn’t really sure I’d have anything more to say!

Okay.

So we arrive at the studio for the event and tell the receptionist we are here for the “fracking panel.” 

“Fracking?,” she says. “What’s that?”

“Score!,” I think to myself. This exchange will make for a nice, concrete illustration of my (solitary) point.

At this stage, Erik Nisbet, whom I had arrived with, answered, “It’s a technique by which high pressure water mixed with various chemicals is used to fracture underground rock formations so that natural gas can be extracted.”

“Oh my god!,” the receptionist exclaimed. “That sounds terrifying! The chemicals—they’ll likely poison us. And surely there will be earthquakes!”

Seriously. 

And shit, I thought, now what am I going to say?

Actually, the receptionist’s response made things even better!

Because it turns out that even though people don’t know anything about fracking, there is reason to think that they -- or really about 50% of them-- will react the way she did as soon as they do.

That’s what is freaking me out!

Consider this snapshot of public opinion on climate change:

This is the “profile” of a “stage 3” science-communication pathology.

Not only is there intense political polarization (not just on “how serious” the risk of climate change is, btw, but also on more specific empirical issues like “whether the earth has been heating up” and “whether humans have caused it”; responses to the industrial strength risk perception measure will correlate very highly with responses to those more specific, “factual” issues).

The polarization is even more intense among individuals who know the most about science generally and who possess the aptitudes and critical reasoning skills most suited to understanding scientific evidence.

The reason “science comprehension” magnifies polarization, CCP research suggests, is that individuals of opposing cultural identities (ones you can often measure adequately with right-left political outlooks but can get an even more discerning glimpse of with the two-dimensional cultural worldview scales) are using their knowledge and reasoning proficiencies to fit all the evidence they see to the position that predominates in their group.

We see this not just on climate change, of course, but on other culturally contested issues like nuclear power and gun control.

But we don’t see it very often.  Indeed, the number of facts that are important for individual and collective decisionmaking that reflect this pattern is minuscule relative to the ones that don’t.

Consider medical x-rays and fluoridation of water:

 

No polarization, and as diverse individuals become more science comprehending, they tend to converge on the position that is most supported by the best (currently) available evidence.

And I could go on all day showing you graphics that look exactly like that! That pattern is the normal situation, the existence of which, for reasons similar to ones I’ve discussed already, tends to evade our notice & result in wild overestimations of the degree to which there is conflict over science in our society.

In fact, consider GM foods:

Despite what people think, there's no polarization to speak of here. It’s true, science comprehension seems to have a bigger effect in reducing risk perception among those who are more right-wing than it does on those who are more left-leaning in their political outlooks.  But since the effect on both is to reduce concern, it’s hard to believe that that sort of difference portends political conflict over whether GM foods are risky.

The perception that these issues are part of the cluster of politically or socially controversial risk issues in our society is a consequence of the selection bias and echo chamber effects I also discussed above.

I’ve talked about these things before (and talked before about how it seems like everything I ever talk about is something I’ve already talked about).  And when I talk about GM foods, I usually add, “Of course, there isn’t political polarization over GM foods—most people don’t even know what they are!!”

But now consider fracking . . . :

WTF?!!!!!

This is a “stage 3” pathology picture!  

How could this be? After all, polarization that increases conditional on science comprehension is not the norm! And most people haven’t even heard of this friggin’ fracking thing!

I know, I know: many of you will say, “of course, the answer is blah blah blah”—an answer that will in fact be perfectly plausible.  But if that’s your instinct, you should teach yourself to stifle it. 

“Everything is obvious once you know the answer!”  Before you knew it, moreover, the opposite was just as plausible.  If I’d shown you that fracking looked like medical x-rays or vaccines or GM foods, you would have said, “Of course—polarization that increases conditional on science comprehension is unusual, and no one even knows about GM foods, blah blah blah….”

More things are plausible than are true!

That’s why we look at evidence.

It’s why, too, it’s no embarrassment to learn that one’s plausible conjecture about a phenomenon is wrong! 

The only thing that would be embarrassing—and just plain wrong—would be the failure to adjust one’s previous, plausible views in light of what new and surprising information shows.

So what’s going on?

I can only conjecture—in anticipation of yet more study. But here’s what I’d say.

In measuring subjects’ perceptions with the “industrial strength measure,” I defined fracking, parenthetically, in terms very much like the ones that Erik used to describe it to the receptionist.

As was the case for her, that was enough for the participants in the study to experience the sort of affective reaction to this technology that assimilated it to the putative risk sources--like climate change, and guns, and nuclear power—with which they are more familiar and on which they are already strongly divided along cultural lines.

The experience was even more intense among those highest in critical reasoning dispositions. But that makes sense too—for contrary to the dominant “instant decision science” (take 2 cups of “heuristics & biases” literature, add water & stir) story-telling account of polarization over decision-relevant science, that phenomenon is not a consequence of overreliance on “System 1” heuristic reasoning.  Rather it is a form of information processing that rationally serves individuals’ interest in forming and persisting in perceptions of risk that express their stake in maintaining their status in affinity groups essential to their identities.

We might well infer from these data, then, that there is something pretty peculiar about fracking that makes it distinctively vulnerable—much more so, even, than GM foods, which after all have been around for decades and which advocacy groups incessantly try to transform into a culturally polarizing issue—to the pathology that characterizes climate change and other issues that display the “stage 3” pattern.

Indeed, one of the points of developing a science of science communication—one that tests conjectures as opposed to engaging in “instant decision science” —is to create forecasting tools that can spot public-deliberation disasters like the one over climate change or the HPV vaccine in advance and head them off.

But in that regard, we also shouldn’t assume that every novel technology that has this sort of special incitement quality will in fact become an object of reason-distorting cultural status competition.

Nanotechnology, for example, displayed a similar sensitivity in a CCP experiment a few years ago, and now I’m pretty much convinced that that issue is a dud.

So – I don’t know!

But I want to: I want to know more about fracking, and about the mechanisms and processes that comprise our science communication environment.

So I'll collect even more data.

And expect --indeed, eagerly and excitedly embrace--even more surprise.

 

 

Monday
Mar 10, 2014

Who fears what & why? Trust but verify!

Patrick Moore, aka "@EcoSenseNow," posed this question to me:

 

Probably Patrick & a friend were involved in a discussion about whether those who are (aren't) concerned about climate change are the "same" people who are (aren't) concerned about nuclear power and GM food risks.

A discussion/argument like that is pretty interesting, if you think about it.

We all know that risk perceptions tend to come in intriguing packages -- intriguing b/c the correlations between the factual understandings they comprise are more plausibly explained by the common cultural meanings they express than by any empirical premises they share. 

E.g., imagine you were to say to me, "Gee, I wonder whether crime rates can be expected to go up or instead to go down if one of the 40 or so states that now automatically issue a permit to carry a concealed handgun to any adult w/o a criminal record or a history of mental illness enacted a ban on venturing out of the house with a loaded pistol tucked unobtrusively in one's coat pocket?"

If I answered, "Well, I'm not sure, but I do have some valid evidence that human activity has caused the temperature of the earth to increase in recent decades--surely you can deduce the answer from that," you'd think either I was being facetious or I was an idiot (maybe both; they can occur together--I don't know whether they are correlated).

But if I were to run up to you all excited & say, "hey, look--I found a correlation between believing that the temperature of the earth has not increased as a result of human causes in recent decades and believing that banning concealed handguns would cause crime to increase," you'd probably say, "So? Only a truly clueless dolt wouldn't have expected that."

You'd say that -- & be right, as the inset graphic, which correlates responses to the "industrial strength risk perception measure" as applied to "private ownership of guns" and "global warming," illustrates -- b/c "everyone knows" (they can just see) that our society is densely populated with "types of people" who form packages of related empirical beliefs in which the reality & consequences of human-caused climate change are inversely correlated with beliefs about the dangers posed by private ownership of handguns in the U.S.

The "types" are ones who share certain kinds of commitments relating to how society and other types of collective enterprises should be organized.  We can all see our social world is like that but because we can't directly observe people's "types" (they & the dispositions they impart are “latent variables”), we come up with observable indicators, like cultural worldviews" &/or "political ideologies" & various demographic characteristics, that we can combine into valid scales or classifying instruments of one sort or another. We can use those to satisfy our curiosity about the nature of the types & the dynamics that generate the puzzling pattern of empirical beliefs that they form on certain types of disputed risk issues.

We can all readily think of indicators of the sorts of “types” whose perceptions of the risks of climate change & guns are likely to be highly convergent, e.g.

Those risks are "politicized" in right-left terms, so we could use "right-left" political outlooks to specify the "types" & do a pretty decent job (a walk or bunt single; hey, it’s spring training!).

We could do even better (stand-up double) if we used the cultural cognition "worldview" scales -- & if we tossed in race & gender as additional indicators (say, by including appropriate cross-product interaction variables in a regression model), we'd be hitting a homerun!
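
For illustration only, the kind of model being gestured at might look like this in Python (statsmodels); the column names are hypothetical stand-ins, and the formula just spells out "worldview scales plus race and gender with cross-product interactions."

```python
# Illustrative sketch: predict global-warming ISRPM from the two cultural
# worldview scales plus race & gender, with all cross-product interactions.
# Column names are hypothetical stand-ins for the CCP measures.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ccp_risk_survey.csv")   # hypothetical file

fit = smf.ols(
    "isrpm_globalwarming ~ hierarchy * individualism * white * male",
    data=df,
).fit()
print(fit.rsquared)   # compare with a model using right-left outlooks alone
```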

But here’s another interesting thing that Patrick’s query—and the argument I’m guessing motivated him to pose it—illustrates: our perceptions of the packages and the types aren’t always shared, and even when widely held, aren’t always right.

Not that surprising, actually, when you remember that the types can’t be directly observed. It helps too to realize that the source of our apprehension of these matters—the packages, the types—is based on a form of sampling rife with potential biases.  The “data,” as it were, that inform our perceptions are always skewed by the partiality of our social interactions, which reflect our propensity to engage with those who share our outlooks and interests. 

That sort of “selection bias” is a perfectly normal thing; only a lunatic would try to “stratify” his or her social interaction to assure “representativeness” in his or her personal observations of how risk perceptions are distributed across types of persons (I suppose one could try applying population weights to all of one's interactions, although that would be time consuming & a nuisance).

But it does mean that we’ll inevitably disagree with our associates now & again—and even when we don’t disagree, we may all be wrong—about who fears what & why.

E.g., many people think that concern over childhood immunizations is part of one or another risk-perception package held by one or another recognizable “type” of person. 

Some picture them as  part of the package characteristic of the global-warming concerned, nuclear-power fearing tribe of “egalitarians, [who] oppose . . . big corporations and their products.”

When others grope at this particular elephant, they report feeling “the conservative don’t-tread-on-me crowd that distrusts all government recommendations”—i.e., the same “type” that is skeptical of climate-change and nuclear-power risks.

Well, one or the other could have been right, but it turns out that they are both just plain wrong.

As the CCP report on Vaccine Risk Perceptions and Ad Hoc Risk Communication documents, all the recognizable “types”—whether defined in political or cultural terms—support universal childhood immunization.

The perception that vaccines cause autism is not part of the same risk-perception package as global warming: climate-change skeptics and climate-change believers both overwhelmingly perceive the risks of childhood immunizations to be low and the benefits of them to be high.

The misunderstandings about who is afraid of vaccines and why reflect selection bias in an echo chamber, reinforced by the reciprocal recriminations and expressions of contempt that pervade climate change discourse and that fill members of each side with the motivation to see those on the other as harboring all sorts of noxious beliefs and being the source of myriad social ills. (Is this a new thing? Nope.)

So … back to Patrick’s question!

It’s not news—it’s a staple of the study of public risk perceptions and the cultural theory of risk in particular—that perceptions of climate-change and nuclear-power risks are part of a common “package” and are associated with distinctive types.

So my guess is that either Patrick or his friend (the one he was having an argument with; nothing inherently unfriendly about disagreeing!) was taking the position that GM-food risk perceptions were part of that same package as climate & nuclear ones.

Actually, the view that GM foods are “politically polarizing” is a common one.  “Unreasoning, anti-science” stances toward GM foods, according to this view, are for “liberals” what “unreasoning, anti-science” stances toward climate are for “conservatives.”

But this is the toxic echo chamber once again.

As the 17.5 billion regular followers of this blog know (welcome, btw, to new readers!), GM foods get a big collective “enh,” at least in the view of the general public.  Most people have never really heard of GM foods, and happily consume humungous helpings of them at every meal.

Advocacy groups of a leftish orientation have been trying to generate concern—trying, moreover, by resort to exactly the “us-vs-them” incitement that is poisoning our science communication environment—but remarkably have been getting absolutely nowhere.

Here in the U.S., that is; matters are different in Europe. Why there but not here?! These things are truly mysterious—and if you don’t see that, you get a failing grade on the basic curiosity & imagination aptitude test.

Here are some data to illustrate that point and to answer Patrick’s question.

First, look at “packages”: 

Here gun-possession, nuclear, GM-foods, and childhood-vaccine risk perceptions are plotted in relation to climate change risk perceptions (the plotted lines reflect locally weighted regression -- they are "truer" to the raw data than a linear regression line, reflecting the correlation coefficient I've also reported for each, would be).
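
A rough sketch of how one of those panels might be produced (Python, using statsmodels' lowess smoother); the file and column names are hypothetical stand-ins, and the smoothing fraction is arbitrary.

```python
# Sketch: locally weighted (lowess) fit of GM-food risk perceptions against
# global-warming risk perceptions, with the Pearson correlation for reference.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

df = pd.read_csv("ccp_risk_survey.csv")   # hypothetical file
x, y = df["isrpm_globalwarming"], df["isrpm_gmfoods"]

smoothed = lowess(y, x, frac=0.6)          # returns sorted (x, fitted y) pairs
plt.scatter(x, y, s=5, alpha=0.2)
plt.plot(smoothed[:, 0], smoothed[:, 1])
plt.xlabel("global warming ISRPM")
plt.ylabel("GM foods ISRPM")
plt.title(f"r = {x.corr(y):.2f}")
plt.show()
```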

Yes, GM food risk perceptions are correlated with global warming ones.  But the effect is very modest. It’s nothing like correlation between guns and climate change or nuclear and climate change.  You’ll find plenty of people—ones without two heads and who don’t think contrails are a government plot—who think climate change is a joke but GM foods a serious threat, and vice versa.

It’s really not part of the “climate change risk perception family.” 

How about in terms of “type”?

Enlarging a bit on some data that I’ve reported before, here are various risk perceptions plotted in relation to conventional left-right political views (measured with a composite scale that combines responses to party-identification and liberal-conservative ideology items):
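
A composite of that sort can be built very simply; here is a minimal sketch in which the two item names are hypothetical stand-ins for the party-identification and liberal-conservative ideology measures.

```python
# Sketch: z-score the two (hypothetical) political-outlook items and average
# them into a single left-right composite.
import pandas as pd

df = pd.read_csv("ccp_risk_survey.csv")   # hypothetical file

def zscore(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std()

# Higher scores = more Republican / more conservative on both items.
df["conserv_repub"] = (zscore(df["party_id"]) + zscore(df["ideology"])) / 2
```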

Pretty clear, I think, that GM foods is just not a left-right issue.

As regular readers know, I’ve also examined GM food risks in relation to other types of “type” indicators, including the cultural cognition worldview scales and “interpretive community” scales derived from environmental risk perceptions.  It just doesn’t connect in a practically meaningful way.

So what to say?

Well, for one thing, there’s certainly no reason for embarrassment in finding out that things aren’t exactly as one conjectured on these matters.

As I said,  “risk packages”—because they reflect unobservable or “latent” dispositions, and because we are constrained to rely on partial and skewed impressions when we observe them—definitely have fuzzy peripheries.

In addition, the packages breed dynamics of misinformation, including the echo chamber effect and strategic behavior by deliberate science-communication environment polluters.

Under these circumstances, we should all adopt a stance of conscious provisionality toward our impressions here.  We shouldn’t “disbelieve” what our senses tell us, but we should expect evidence now & again that we have misperceived—and indeed, seek out such evidence before making decisions of consequence that turn on whether our perceptions are correct.

As a famous scholar of risk perception—I can’t remember his name; early sign of senility? Nah, couldn't be!—said (in some other language, but this is a rough translation), “trust but verify!”

Maybe it’s just me, but I actually love it when evidence bites me in the ass on something like this.

Not just because I want to be sure the beliefs I hold are free of error, although of course I do feel that way.

But because every time the evidence surprises me I experience anew the sense of wonder at this phenomenon.

What is going on here?!  Why are there packages? Who are the “types”?

Why do some “risks” but not others become entangled in conflicts between diverse groups—all of which are amply stocked with individuals who are high in science comprehension and all of which have intact practices for transmitting what’s collectively known to their members?

I really want to know the answers—and I know that I still just don’t!

“Tomorrow,” in fact, I’ll show you something that is definitely freaking me out.

Sunday
Mar 09, 2014

If you think GM food & vaccine risk perceptions have any connection to the "climate change risk perception" family, think again

Still another riff on the "GM food risks aren't polarizing" & "there's no cultural conflict over vaccine risks!" themes.

We all know that risk perceptions come in interesting -- indeed, downright mysterious-- packages.  But sometimes we get confused about what exactly is in them.

 

Friday
Mar 07, 2014

Q. Where do cultural predispositions come from in the cultural cognition theory? A. They are exogenous -- descriptively & *normatively*!

A thoughtful friend & correspondent asks:

The question that you must have been asked many times is, ultimately, how do people obtain their cultural orientations?


If I read between the lines, part of the answer seems to be that these orientations are seeded by the people we associate with and the authorities we seek — perhaps by chance. After that seed is planted, then it becomes a self-reinforcing process: We continue to seek like-minded company and authorities, which strengthens the orientations, and the cycle continues. But there must be more to it than that. Genetics? Some social or cultural adaptive process? I'd like to say something about how we arrive at our cultural orientations.

My response:

I think the model/process you describe is pretty much right. I'd say, though, that the cycle -- the seeking out, the reinforcement -- is not the problem; indeed, it's part of the solution to the puzzle of what makes it possible for people (diverse ones, ones who can't just be told what's what by some central authority) reliably to identify what's collectively known. They immerse themselves in networks of people who they can understand and are motivated to engage and cooperate with, and use their rational faculties to discern inside of those affinity groups who genuinely knows what about what (that is, who knows what's known to science). When this process short circuits & becomes a source of self-reinforcing dissensus, that is a sign not that the process is pathological but that a pathology has infected the process, disabling our normal and normally reliable rational capacity to figure out what's known.

However, we notice the cultural insularity of our process for figuring things out only then & infer "there's a problem w/ the insularity & self-reinforcement!" But that's a kind of selection bias in our attention to such things; we are observing the process only when it is failing in a spectacular way; if we paid attention to the billions of boring cases where diverse people agree, we'd see the same insularity in the process by which diverse people figure things out.  Then we'd properly infer that the problem is not the process but some external condition that corrupts it. At that point, we would focus our reason, guided by the methods of empirical inquiry, to figure out the dynamics of the pathology -- and ultimately to control them...   

You then ask me -- where do these affinities that are the source of the predispositions (the environment in which we figure out what's what) come from?  I don't know!  

Or I think likely I more or less know & the answer isn't *that* interesting: we are socialized into them by the accident of who our parents are & where we live.  That's the uninspiring "descriptive" account.  

A more inspiring normative answer (maybe it's just a story? but it has the right moral, morally speaking) is this: we are autonomous, reasoning agents in a free society; it is inevitable that we will form a plurality of understandings of the best way to live.  That isn't the problem; it's the political way of life to be protected.  So let's take our cultural plurality as given & "solve" the "science communication problem" by removing the influences that conduce to dissensus & polarization, & that disrupt the usual consensus & convergence of free & reasoning citizens on the best (currently) available evidence....

Some perhaps relevant posts (best I can do, until you help me): 

But I will invite other readers of this blog to comment--likely they can do better!


Thursday
Mar062014

Developing effective vaccine-risk communication strategies: *Definitely* measure, but measure what *counts*

Now that the important Nyhan et al. study on vaccine-risk communications has gotten people's attention on the hazards of empirically uninformed vaccine-risk communication, it's important to reflect on exactly what it means for risk-communication to be genuinely evidence based. Here's a contribution toward the discussion, excerpted from the "Recommendations" section of the CCP Vaccine Risk Perceptions and Ad Hoc Risk Communication study. 

5. Reject story-telling alternatives to valid empirical analysis of public perceptions of vaccine risk.
 

Decision science has established a rich stock of mechanisms, from “anchoring” to “availability,” from “probability neglect” to “hyperbolic discounting,” from “overconfidence bias” to “pluralistic ignorance.” Treating them as a collection of story-telling templates, a person of even modest intelligence can easily use these dynamics to fabricate plausible “scientific” “explanations” for any observed social phenomenon (e.g., Brooks 2012). But the narrative coherence of such syntheses furnishes no grounds for crediting them as true. They are at best conjectures—fuel for the empirical-testing engine that alone propels genuine insight (Kahan 2014)—and when not acknowledged as such suggest either the expositor’s ignorance of the difference between story-telling and science or his or her intention to exploit the lack of such understanding on the part of others (Rachlinski 2001).

The case of vaccine-risk perceptions supplies a compelling example of the dangers of treating this genre of writing as a source of reliable guidance for practical decisions. In a compelling proof of the utility of decision science as a grab-bag of prefabricated story-telling templates, numerous commentators, popular and even scholarly, have used the inventory of mechanisms it comprises to “explain” a nonexistent phenomenon—namely, a “growing public distrust” of the safety of vaccinations (e.g., MacDonald, Smith & Appleton 2012).

Risk communication is a critical element of public health policy. It is a mistake for public health officials and professionals to exempt it from their field’s norm of evidence-based practice.

The number of genuine and valid empirical examinations of the general public’s perceptions of childhood vaccines is regrettably smaller than it should be. But both to promote the enlargement of it and to protect public health policy from the potentially deleterious consequences of seeking guidance from faux-empirical substitutes, those committed to conserving the high existing level of public support for universal immunization should base their risk-communication strategies on empirically informed assessments of who fears what and why in the domain of childhood vaccines.

6. Use behavioral measures to assess behavior; use fine-grained field research & not surveys/polls to understand dynamics of resistance.
 

This study combined an attitudinal survey and an experiment. When administered to a diverse and appropriately recruited sample, attitudinal surveys enable measurement of the impact of affective orientations and group affinities on societal risk perceptions and information processing. These dynamics are important, because they reflect the quality of the science communication environment in which individuals evaluate risk information relevant to personal and collective decisionmaking.

But as stressed at the outset, survey methods alone are not valid for assessing the impact of vaccine-risk perceptions on the actual decisions of parents to permit their children to be vaccinated. Parents’ self-reports are not a reliable or valid measure of their children’s vaccination status; only behavioral measures akin to those reflected in the NIS are. Accordingly, researchers who use observational methods to investigate variance in vaccination coverage should rely on the NIS or on other valid behavioral measures of vaccination status (Opel et al. 2011b, 2013b).

The study results also suggest two other important limitations on survey methods. First, survey measures are unlikely to support valid inferences about the proportion of the public that holds beliefs or opinions on specific issues relating to vaccines, including the likelihood that vaccines cause autism or other diseases.

Because members of the public often have not formed opinions on or given meaningful attention to specific public policy issues (e.g., stem cell research), it is a mistake in general to treat specifically worded survey items (“Based on what you have read or heard, do you think the federal government should or should not fund federal stem cell research?”) as genuinely measuring positions on those matters (Bishop 2005; Schuman 1998). If such items are reliably measuring anything, it is an expression of a more general pro- or con-attitude that is evoked by the item (in the case of stem cells, positions on “government spending” or possibly “abortion” or even simple partisan affiliation). What that attitude consists in cannot be reliably analyzed unless responses to the item in question are compared with responses to other items that would help to pin down the latent disposition that they are measuring (Berinsky & Druckman 2007).

The coherence of the responses to the items that made up the PUBLIC_HEALTH scale—and in particular, the high, inverse correlation between the perceived risks of vaccines and the perceived benefits of them—suggest that what those items are measuring is an affective orientation (Slovic 2010) toward childhood vaccines. Under these circumstances, reliable inferences can be drawn from vaccine-risk/benefit items only about the valence of individuals’ affective orientation. But no single item can reliably be treated as revealing anything more specific—or more edifying—than that.

This point was illustrated by responses to the item on “postnatal isoerythrolysis.” Survey participants’ beliefs that childhood vaccination caused this fictional disease—one they necessarily had not heard of before—were highly correlated with their responses to every one of the other diverse risk-benefit items used to form the PUBLIC_HEALTH scale. Rather than reflecting a specific belief formed on the basis of exposure to information on vaccine risks, the affective orientation measured by “vaccine disease risk” items should be interpreted as an emotional predisposition to credit or dismiss propositions conditional on their perceived conformity to one’s orientation (Loewenstein et al. 2001; Slovic et al. 2004).
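To make the scale logic concrete, here is a minimal sketch in Python, using hypothetical item names and simulated data (nothing here reproduces the actual CCP items, coding, or results), of how risk and benefit items that run in opposite directions can be reverse-coded, checked for coherence, and averaged into a single affective-orientation score:

```python
# Minimal sketch of forming an affective-orientation scale from survey items.
# Item names, loadings, and data are hypothetical; they do not reproduce the CCP study.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500

# Simulate a latent affective orientation toward vaccines and noisy 1-6 Likert items.
latent = rng.normal(size=n)

def likert(loading):
    raw = loading * latent + rng.normal(size=n)
    return pd.cut(raw, bins=6, labels=False).astype(int) + 1   # values 1..6

df = pd.DataFrame({
    "benefit_disease_protection": likert(0.9),
    "benefit_community": likert(0.8),
    "risk_serious_side_effects": 7 - likert(0.9),   # risk items run in the opposite direction
    "risk_fictional_disease": 7 - likert(0.7),
})

# In raw form, perceived-risk and perceived-benefit items should correlate inversely.
print(df.corr().round(2))

# Reverse-code the risk items, standardize everything, and average into one score.
recoded = df.copy()
for col in ["risk_serious_side_effects", "risk_fictional_disease"]:
    recoded[col] = 7 - recoded[col]
z = (recoded - recoded.mean()) / recoded.std(ddof=0)
recoded["affect_scale"] = z.mean(axis=1)

# Corrected item-total correlations as a rough check of scale coherence.
for col in df.columns:
    rest = z.drop(columns=col).mean(axis=1)
    print(col, round(float(np.corrcoef(z[col], rest)[0, 1]), 2))
```

The point of the sketch is simply that a single valence (positive or negative affect) can account for responses across all the items, which is why no one item, taken alone, reveals anything more specific than that valence.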

Researchers might well have good reason to assess public knowledge about specific issues such as the impact that vaccines have on the risk of contracting autism or other diseases. But to do so, they will need to follow the steps necessary to form valid measures of such knowledge. Or in other words, they will need to use the psychometric tools that distinguish scholarly opinion research from popular opinion polling (Bishop 2005).

Second, general-population survey measures cannot be expected to generate insight into the identity or motivations of that portion of the population that is genuinely hostile to childhood vaccination. As the analysis of sources of variation in the PUBLIC_HEALTH scale revealed, none of the familiar cultural styles divided over other societal risks (such as those associated with climate change or nuclear power) has a negative affective orientation toward vaccines. To the extent that they explain any variance at all, these styles are associated only with differences in the intensity of the positive affective orientation toward vaccines that prevails in all these groups. Accordingly, none of the demographic or attitudinal indicators used to identify members of those groups can be expected to identify the characteristics that indicate the presence of whatever group identity might be shared by members of the “anti-vaccine” fringe.

There are without question groups of individuals, some in geographically concentrated areas, who are hostile to childhood vaccines (Mnookin 2011). Who they are and why they feel the way they do are questions that merit serious study. But to answer these questions, researchers will need to use measures that are more fine-grained and discerning than the ones that can profitably be made use of in studying the small class of risk issues on which there is genuine cultural contestation.

Such research is now emerging. In a pair of studies, Opel and his collaborators (2011a, 2011b, 2013b) have devised a “vaccine hesitancy” scale for new parents that predicts delay or avoidance of vaccination behavior. Such a screening device would be comparable to ones used in diverse fields from credit assessment (e.g., Klinger, Khwaja & Lamonte 2013) to organizational staffing (e.g., Ones et al. 2007), not to mention ones used to predict or diagnose disease vulnerability (e.g., Wilkins et al. 2013). If perfected, it could be used by researchers to guide their investigation of who fears vaccines and why and to focus their testing of risk communication materials.
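For readers who want a concrete picture of how a screening instrument of this general kind might operate, here is a deliberately simplified Python sketch. The item names, 0-2 scoring, and cut point are all invented for illustration; they are not the actual Opel et al. items or scoring rules.

```python
# Hypothetical screening-score logic for a parental vaccine-hesitancy instrument.
# Items, scoring, and the cut point are invented for this sketch only.
from dataclasses import dataclass

@dataclass
class ScreenResult:
    score: float        # 0-100, higher = more hesitant
    flagged: bool       # True if follow-up counseling is suggested

def score_responses(responses, cut_point=50.0):
    """responses: dict mapping item name -> value on a 0-2 scale (0 = not hesitant, 2 = hesitant)."""
    max_total = 2 * len(responses)
    total = sum(responses.values())
    score = 100.0 * total / max_total
    return ScreenResult(score=score, flagged=score >= cut_point)

example = {
    "delayed_a_vaccine": 1,
    "worried_serious_side_effects": 2,
    "distrusts_provider_advice": 0,
    "believes_too_many_shots": 1,
}
print(score_responses(example))   # ScreenResult(score=50.0, flagged=True)
```

The value of such an instrument lies not in the arithmetic, of course, but in behavioral validation: whether scores above the cut point actually predict delayed or refused vaccination, which is exactly what the Opel et al. validation studies assess.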

Resources—financial, intellectual, and social—should be devoted to the extension and refinement of these methods rather than ones that focus on attitudinal correlates of vaccine risk perceptions in more diffuse elements of the general population. In order for vaccine-risk communication to be empirically informed, it is essential not only to measure but to measure what counts.

7. Empirical study should be used to develop appropriately targeted risk communication strategies that are themselves appropriately responsive to empirically identified risk-perception concerns.
 

Anyone who dismisses the existence or seriousness of unfounded fears of childhood vaccines would be behaving foolishly. Skilled journalists and others have vividly documented enclaves of concerted resistance to universal immunization programs. Experienced practitioners furnish credible reports of higher numbers of parents seeking counsel and assurance of vaccine safety. And valid measures of vaccination coverage and childhood disease outbreaks confirm that the incidence of such outbreaks is higher in the enclaves in which vaccine coverage falls dangerously short of the high rates of vaccination prevailing at the national level (Atwell et al. 2013; Glanz et al. 2013; Omer et al. 2008).

At the same time, only someone insufficiently attuned to the insights and methods of the science of science communication would infer that this threat to public health warrants a large-scale, sweeping “education” or “marketing” campaign aimed at parents generally or at the public at large. The potentially negative consequences of such a campaign would not be limited to the waste of furnishing assurances of safety to large numbers of people who are in no need of it. High-profile, emphatic assurances of safety themselves tend to generate concern (Kahan 2013a; Kasperson et al. 1988). A broad-scale and indiscriminate campaign to communicate vaccine safety—particularly if understood to be motivated by a general decline in vaccination rates—could also furnish a cue that cooperation with universal immunization programs is low, potentially undermining reciprocal motivations to contribute to the public good of herd immunity. Lastly, such a campaign would create an advocacy climate ripe for the introduction of cultural partisanship and recrimination of the sort known to disable citizens’ capacity to recognize valid decision-relevant science generally (Bolsen & Druckman 2013; Kahan 2012), and valid science relevant to vaccines in particular (Gollust, Dempsey, Lantz, Ubel & Fowler 2010; Kahan, Braman, Cohen, Gastil & Slovic 2010).

The right response to dynamics productive of excess concern over risk is empirically informed risk communication strategies tailored to those specific dynamics. Relevant dynamics in this setting include not only those that motivate enclaves of resistance to universal immunization but also those that figure in the concerns of individual parents seeking counsel, as they ought to, from their families’ pediatricians. Risk communication strategies specifically responsive to those dynamics should be formulated (e.g., NCIRS 2013)—and they should be tested, both in the course of their development and in their administration (Shourie et al. 2013), so that those engaged in carrying them out can be confident that they are taking steps that are likely to work and can calibrate their approach as they learn more (Sadaf et al. 2013; Opel et al. 2012).

Again, preliminary research of this sort has commenced. Perfection of behavioral-prediction profiles of the sort featured in Opel et al. (2011a, 2011b, 2013b) would enable researchers not only to extend understanding of the sources and consequences of genuine vaccine hesitancy but also to test focused risk-communication strategies on appropriate message recipients.  If made sufficiently precise, screening protocols of this sort would also enable practitioners to accurately identify parents in need of counseling, and public health officials to identify regions where the extent of hesitancy warrants intervention.

The public health establishment should exercise leadership to make health professionals and other concerned individuals and groups appreciate the distinction between targeted strategies of this sort and the ad hoc forms of risk communication that were the focus of this study.  They should help such groups understand in addition that support for the former does not justify either encouragement or tolerance of the latter. 

Refs

Atwell, J.E., et al. Nonmedical Vaccine Exemptions and Pertussis in California, 2010. Pediatrics 132, 624-630 (2013).

Berinsky, A.J. & Druckman, J.N. The Polls—Review: Public Opinion Research and Support for the Iraq War. Public Opin Quart 71, 126-141 (2007).

Bishop, G.F. The Illusion of Public Opinion : Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, Lanham, MD, 2005).

Bolsen, T., Druckman, J. & Cook, F.L. The Effects of the Politicization of Science on Public Support for Emergent Technologies. Institute for Policy Research Northwestern University Working Paper Series (2013). Available at http://www.ipr.northwestern.edu/publications/papers/2013/ipr-wp-13-11.html

Bowles, S. & Gintis, H. A cooperative species : Human reciprocity and its evolution (Princeton University Press, Princeton, 2013).

Brooks, D. The Social Animal : The Hidden Sources of Love, Character, and Achievement (Random House Trade Paperbacks, New York, 2012).

Glanz, J.M., et al. A Population-Based Cohort Study of Undervaccination in 8 Managed Care Organizations across the United States Managed Care Organizations. JAMA pediatrics 167, 274-281 (2013).

Gollust, S.E., Dempsey, A.F., Lantz, P.M., Ubel, P.A. & Fowler, E.F. Controversy Undermines Support for State Mandates on the Human Papillomavirus Vaccine. Health Affair 29, 2041-2046 (2010).

Kahan, D. Making Climate-Science Communication Evidence Based—All the Way Down. In Culture, Politics and Climate Change, eds. M. Boykoff & D. Crow. (Routledge Press, 2014).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013a).

Kasperson, R.E., et al. The Social Amplification of Risk: A Conceptual Framework. Risk Analysis 8, 177-187 (1988).

Klinger, B., Khwaja, A. & LaMonte, J. Improving credit risk analysis with psychometrics in Peru (Inter-American Development Bank, 2013).

Loewenstein, G.F., Weber, E.U., Hsee, C.K. & Welch, N. Risk as Feelings. Psychological Bulletin 127, 267-287 (2001).

MacDonald, N.E., Smith, J. & Appleton, M. Risk Perception, Risk Management and Safety Assessment: What Can Governments Do to Increase Public Confidence in Their Vaccine System? Biologicals 40, 384-388 (2012).

Mnookin, S. The Panic Virus : A True Story of Medicine, Science, and Fear (Simon & Schuster, New York, 2011).

NCIRS, MMR Decision Aid (2013). Available at http://www.ncirs.edu.au/immunisation/education/mmr-decision/index.php.

Omer, S.B., et al. Geographic Clustering of Nonmedical Exemptions to School Immunization Requirements and Associations with Geographic Clustering of Pertussis. American Journal of Epidemiology 168, 1389-1396 (2008).

Ones, D.S., Dilchert, S., Viswesvaran, C. & Judge, T.A. In support of personality assessment in organizational settings. Personnel Psychology 60, 995-1027 (2007).

Opel, D.J., et al. Characterizing Providers’ Immunization Communication Practices During Health Supervision Visits with Vaccine-Hesitant Parents: A Pilot Study. Vaccine 30, 1269-1275 (2012).

Opel, D.J., et al. Development of a survey to identify vaccine-hesitant parents: The parent attitudes about childhood vaccines survey. Human Vaccines 7, 419-425 (2011a).

Opel, D.J., et al. The Relationship between Parent Attitudes About Childhood Vaccines Survey Scores and Future Child Immunization Status: A Validation Study. JAMA pediatrics 167, 1065-1071 (2013b).

Opel, D.J., et al. Validity and reliability of a survey to identify vaccine-hesitant parents. Vaccine 29, 6598-6605 (2011b).

Otto, S. Antiscience Beliefs Jeopardize U.S. Democracy. in Scientific American (Oct. 16, 2012a), available at http://www.scientificamerican.com/article.cfm?id=antiscience-beliefs-jeopardize-us-democracy.

Otto, S.L. One Way to Help Science: Become Republican. Nature Medicine 18, 17 (2012b).

Rachlinski, J.J. Comment: Is Evolutionary Analysis of Law Science or Storytelling. Jurimetrics 41, 365-370 (2001).

Sadaf, A., Richards, J.L., Glanz, J., Salmon, D.A. & Omer, S.B. A Systematic Review of Interventions for Reducing Parental Vaccine Refusal and Vaccine Hesitancy. Vaccine 31, 4293-4304 (2013).

Shourie, S., Jackson, C., Cheater, F., Bekker, H., Edlin, R., Tubeuf, S., Harrison, W., McAleese, E., Schweiger, M. & Bleasby, B. A cluster randomised controlled trial of a web based decision aid to support parents’ decisions about their child's Measles Mumps and Rubella (MMR) vaccination. Vaccine 31, 6003-6010 (2013).

Slovic, P. The feeling of risk : new perspectives on risk perception (Earthscan, London ; Washington, DC, 2010).

Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004).

Slovic, P., Peters, E., Finucane, M.L. & MacGregor, D.G. Affect, Risk, and Decision Making. Health Psychology 24, S35-S40 (2005).

Wilkins, C.H., Roe, C.M., Morris, J.C. & Galvin, J.E. Mild physical impairment predicts future diagnosis of dementia of the Alzheimer’s type. Journal of the American Geriatrics Society 61, 1055-1059 (2013).

Tuesday
Mar042014

A nice empirical study of vaccine risk communication--and an unfortunate, empirically uninformed reaction to it

Pediatrics published (in “advance on-line” form) an important study yesterday on the effect of childhood-vaccine risk communication. 

The study was conducted by a team of researchers including Brendan Nyhan and Jason Reifler, both of whom have done excellent studies on public-health risk communication in the past.

NR et al. conducted an experiment in which they showed a large sample of U.S. parents with children age 17 or under communications on the risks and benefits of childhood vaccinations.  

Exposure to the communications, they report, produced one or another perverse effect, including greater concern over vaccine risks and, among a segment of respondents with negative attitudes toward vaccines, a lower self-reported intent to vaccinate any “future child” for MMR (measles, mumps, rubella).

The media/internet reacted with considerable alarm: “Parents Less Likely to Vaccinate Kids After Hearing Government’s Safety Assurance”; “Trying To Convince Parents To Vaccinate Their Kids Just Makes The Problem Worse”; “Pro-vaccination efforts, debunking autism myths may be scaring wary parents from shots”. Etc.

Actually, I think this is a serious misinterpretation of NR et al.

The study does furnish reason for concern. 

But what we should be anxious about, the NR et al. experiment shows, is precisely the simplistic, empirically uninformed style of risk communication that many (not all!) of the media reports on the study reflect.

To appreciate the significance of the study, it’s useful to start with the distressing lack of connection between fact, on the one hand, and the sort of representations that media and internet commentators constantly make about the public’s attitude toward childhood immunizations, on the other.

The message of these ad hoc risk communicators consists of a collection of dire (also trite & formulaic) pronouncements: a “growing crisis of public confidence”—an “epidemic of fear” among a “large and growing number” of “otherwise mainstream parents”—has generated an “erosion in immunization rates,” leading “predictably” to the resurgence of diseases considered vanquished long ago. “From Taliban fighters to California soccer moms, those who choose not to vaccinate their children against preventable diseases are causing a public health crisis.”

According to the best available evidence, as collected and interpreted by the nation’s most authoritative public health experts, this story is simply false.

Childhood vaccine rates are not “eroding” in the U.S. 

Coverage for MMR, for pertussis (“whooping cough”), for polio, for hepatitis-b—all have been over 90%, the national public health target, for over a decade.  The percentage of children whose parents refuse to permit them to receive any of the recommended childhood vaccines has remained under 1% during this time.

Every year, with the release of the latest results of the National Immunization Survey, the CDC issues a press release to announce the “reassuring” news that childhood immunization rates either “remain high” or are “increasing.” “ ‘Nearly all parents are choosing to have their children protected against dangerous childhood diseases,’ ” the officials announce.

There’s definitely been a spike in whooping cough cases in recent years. 

But “[p]arents refusing to get their children vaccinated,” according to the CDC, are “not the driving force behind the[se] large scale outbreaks.” In addition to “increased awareness, improved diagnostic tests, better reporting, [and] more circulation of the bacteria,” the CDC has identified “waning immunity” from an ineffective booster shot as one of the principal causes.

Measles has been deemed eliminated in the United States but can be introduced into U.S. communities by individuals infected during travel abroad.  

Fortunately, “[h]igh MMR vaccine coverage in the United States (91% among children aged 19–35 months),” the CDC states, “limits the size of [such] outbreaks.” “[D]uring 2001–2012, the median annual number of measles cases reported in the United States was 60 (range: 37–220).”

The “public health crisis” theme that pervades U.S. media and internet commentary dates to the 1998 publication in the British medical journal Lancet of a bogus and since-retracted study that purported to find a link between the MMR vaccine and autism.

The study initiated a genuine panic, and a demonstrable decline in vaccine rates, in the U.K.

Public health officials were eager to head off the same in the U.S., and advocacy groups and the media were—appropriately!—eager to pitch in to help.

Fortunately, the flap over the bogus study had no effect on U.S. vaccination rates, which have historically been very high, or on the attitudes of the general public, which have always been and remain overwhelmingly positive toward universal immunization.

But through an echo-chamber effect, the “public health crisis” warning bells have continued to clang—all the louder, in fact, over time.

One might think—likely some of those who are continuing to sound this alarm do—that the persistent “red alert” status can’t really do any harm.

But that’s where the public-health risk of not having a coordinated, empirically informed, evidence-based system of risk communication comes in.

It’s a well-established finding in the empirical study of public risk perceptions that emphatically reassuring people that a technology poses no serious risk in fact amplifies concern.

How other people in their situation are reacting is an important cue that ordinary members of the public rely on to gauge risk.  The message “many people like you” are afraid thus excites apprehension, even if the message is embodied in an admonition that there’s nothing to worry about.

This anxiety-amplification effect doesn’t mean that one shouldn’t try to reassure genuinely worried people when their concerns are in fact not well founded, because in that case the benefits of accurate risk information, if communicated effectively, will hopefully outweigh any marginal increase in apprehension, which is likely to be small if people are already afraid.

But the anxiety-amplification effect of risk reassurance does mean that it is a mistake to misleadingly communicate to unworried people that people in their situation—a “large and growing number” of “otherwise mainstream parents”; “California soccer moms” (etc. etc., blah blah)—are worried when they aren’t!  In that situation, the message “all of you foolish people are needlessly worried—JUST CALM DOWN!” generates real risk of inducing fear without creating any benefit.

The excellent NR et al. study furnishes evidence to be concerned that ad hoc, empirically uninformed vaccine-risk communication could have exactly this effect.

The NR et al. study featured a variety of “risk-benefit” communications.  One was a fairly straightforward report that rebutted the claim that vaccines cause autism. Two others stressed the health benefits of vaccination, one in fairly analytic terms and the other in a vivid narrative in which a parent described the terrifying consequences when her unvaccinated child contracted measles.

The result?

Consistent with the anxiety-amplification effect, subjects who received the vivid narrative communication became more concerned about the side effects of getting the MMR vaccine.

The impact of the blander communication that refuted the MMR-autism link was mixed.

Overall, the subjects in that condition were in fact less likely to agree that vaccines cause autism than parents in a control condition.

They were no less likely than parents in the control to believe that the MMR vaccine has “serious side effects.”  But they weren’t any more likely to believe that either.

The MMR-autism refutation communication did have a perverse effect on one set of subjects, however.

NR et al. measured the study participants’ “vaccine attitudes” with a scale that assessed their agreement or disagreement with items relating to the risks and benefits of vaccines (e.g., “I am concerned about serious adverse effects of vaccines”).  The majority of parents expressed positive attitudes.

But among those who held the most negative attitudes, the self-reported intention to vaccinate any “future child” for MMR was actually lower in the group exposed to the communication that refuted the MMR-autism link than it was among their counterparts in the control condition.

What should we make of this?

I don’t think it would be correct to infer from the experiment that vaccine-safety “education” will always “backfire” or that trying to “assure” anxious parents will make them “less likely to vaccinate” their children.

In fact, that interpretation would itself be empirically uninformed.

For one thing, NR et al. used “self-report” measures, which are well known not to be valid indicators of vaccination behavior.  Indeed, parents’ responses to survey questions grossly overstate the extent to which their children are not immunized.

Great work is being done to develop a behaviorally validated attitudinal screening instrument for identifying parents who are genuinely likely not to vaccinate their children. 

But that research itself confirms that many, many more parents say “yes” when asked if they are concerned that vaccines might have “serious side effects”—the sort of item featured in the NR et al. scale—than refrain from vaccinating their children.

What’s more, the NR et al. sample was not genuinely tailored to parents who have children in the age range for the MMR vaccine. 

That first MMR dose is administered at one year of age, and the second before age 4 or 5. 

The NR et al. parents had children “17 or younger.”

The mean age of the study respondents is not reported, but 80% were over 30, and 40% over 40.  So no doubt many were past the stage in life where they’d be making decisions about whether any “future” child should get the MMR vaccine.

What are survey respondents who aren’t genuinely reflecting on whether to vaccinate their children telling us when they say they “won’t”?

This is a question that CCP’s recent Vaccine Risk and Ad Hoc Risk Communication Study helps to answer.

When scales like the one featured in NR et al. are administered to members of the general public, they measure a more generic affective attitude toward vaccination.

The vast majority of the U.S. public has a very positive affective orientation toward vaccines.

An experiment like the one NR et al. conducted is instructive on how risk communication might influence that sort of general affective orientation. And what their experiment found is that there’s good reason to be concerned that the dominant, ad hoc empirically uninformed style of risk communication (on display in coverage of their study) can in fact adversely affect that attitude.

That finding is consistent with the ones reported in the CCP study, which found that stories emphasizing the “public health crisis” trope cause people to grossly overestimate the extent to which parents in the U.S. are resisting vaccination of their children.

The CCP study also found that the equation of “vaccine hesitancy” with disbelief in evolution and skepticism about climate change—another popular trope—can create cultural polarization over vaccine safety among diverse people who otherwise all agree that vaccine benefits are high and their risks low.

That finding is closely related, I suspect, to the perverse effect that the NR et al. experiment produced in the self-reported “intent to vaccinate” response of the small group of respondents in their sample who had a negative attitude toward vaccines.

The dynamic of motivated reasoning predicts that individuals will “push back” when presented information that challenges an identity-defining belief. 

There aren’t many individuals in U.S. society whose identity includes hostility to universal vaccination—they are outliers in every recognizable cultural group.

But it’s not surprising that they would express that belief with all the more vehemence when shown information asserting that vaccines are safe and effective and then immediately asked whether they’d vaccinate “future children.”

The NR et al. study is superbly well done and very important.

But the lesson it teaches is not that it is “futile” to try to communicate with concerned parents.

It’s that it is a bad idea to flood public discourse in a blunderbuss fashion with communications that state or imply that there is a “growing crisis of confidence” in vaccines that is “eroding” immunization rates.

It’s a good idea instead to use valid empirical means to formulate targeted and effective vaccine-safety communication strategies.

As indicated, there is in fact an effort underway to develop behaviorally validated measures for identifying parents who are most at risk of vaccine hesitancy (who make up a much smaller portion of the already relatively small portion of the population who express a “negative attitude” toward vaccines when responding to public opinion survey measures). With that sort of measure in hand, researchers can test counseling strategies (ones informed, of course, by existing research on what works in comparable areas) aimed precisely at the parents who would benefit from information.

The public health establishment needs to make clear that that sort of research merits continued and expanded support.

In addition, the public health establishment needs to play a leadership role in creating a shared cultural understanding—among journalists, advocates, and individual health professionals—that risk communication, like all other elements of public-health policy, must be empirically informed.

The NR et al. study furnishes an inspiring glimpse of how much value can be obtained from evidence-based methods of risk communication.

The reaction to the study underscores how much risk we face if we continue to rely on an ad hoc, evidence-free style of risk communication instead.

Wednesday
Feb262014

Geoengineering the science communication environment: the cultural plasticity of climate change risks part II

So … a couple of days ago I posted something on the topic of “geoengineering.”

I'm pretty fascinated by the advent of research and discussion of this new technology, which of course "refers to deliberate, large-scale manipulations of Earth’s environment designed to offset some of the harmful consequences of [greenhouse-gas induced] climate change."

For one thing, geoengineering presents a splendid, awe-inspiring pageant of human ingenuity. 

Consider David Keith’s idea, presented in an article published in the Proceedings of the National Academy of Sciences, of deploying a fleet of thermostatically self-regulating, mirror-coated, nanotechnology flying saucers, which would be programmed to assemble at the latitude and altitude appropriate to reflect back the precise amount of sunlight necessary to offset the global heating associated with human-caused CO2 emissions.

The only thing needed to make this the coolest (as it were) technological invention ever would be the addition of a force of synthetic-biology-engineered E. coli pilots, who would be trained to operate the nanotechnology flying saucers while also performing complex mathematical calculations in aid of computationally intensive tasks (such as climate modeling or intricate sports-betting algorithms) back on the surface of the earth!

But another reason I find geoengineering so fascinating is its potential to invert the cultural meanings of climate change risk.

This is what I focused on in my last post.

There I rehearsed the account that the “cultural theory of risk” gives for climate change conflict. “Hierarchical individualists” are (unconsciously) motivated to resist evidence of climate change because they perceive that societal acceptance of such evidence would justify restrictions on markets, commerce, and industry—activities they value, symbolically as well as materially.

“Egalitarian communitarians,” by the same logic, readily embrace the most dire climate-change forecasts because they perceive exactly the same thing but take delight in the prospect of radical limits on commerce, industry, and markets, which in their eyes are the source of myriad social inequities.

My point was that, if we accept this basic story (it’s too simple, even as an account of how cultural cognition works; but that’s in the nature of “models” & should give us pause only when the simplification detracts from rather than enhances our ability to predict and manage the dynamics of the phenomenon in question), then there’s no reason to view the valences of the cultural meanings attached to crediting climate change risk as fixed or immutable.  One could imagine a world in which crediting evidence of human-caused climate change and the risks it poses gratify hierarchical and individualistic sensibilities and threaten egalitarian communitarian ones.

Indeed, one could, in theory, make such a world with geoengineering.  Or make it simply by initiating a sufficiently serious and visible national discussion of it as one potential solution to the problems associated with global warming.

As I explained, geoengineering stands the cultural narrative associated with climate change on its head.  Ordinarily, the message of climate change advocacy is “game over!” & “told you so!”: your inquisitive, market-driven forms of manipulation of the environment to suit your selfish desires are killing us and now must end!

The message of geoengineering, however, is “more of the same!” & “yes, we can!”: we’ve always managed to offset the environmental byproducts of commerce, industry, markets etc. with more commerce and market-fueled ingenuity (see the advent of modern sewage treatment as a means of overcoming "natural limits" on population density in big cities)—well, the time is here to do it again!

By making a culturally affirming meaning available to hierarch individualists, geoengineering reduces the psychic cost for them of engaging open-mindedly with evidence that human-caused climate change puts us in danger.

Of course, by attenuating the identity-affirming meaning that climate change now has for egalitarian communitarians—by suggesting that we needn’t go on a “diet” to counter the effects of our “planetary over-indulgence”; we have the option of “atmospheric liposuction” at our disposal!—geoengineering could well be expected to provoke a skeptical orientation in egalitarian communitarians, not only toward geoengineering but toward climate change science that implies the necessity and feasibility of conscious interventions to offset the impact of carbon emissions on the environment.

CCP did a study (to be published in Ann. Am. Acad. Pol. & Soc. Sci.) that tested these hypotheses.

In it, we instructed the subjects—nationally representative samples of 1500 US adults and 1500 English ones—to read a study on human-caused climate change.  A composite of real studies appearing in Nature and Proceedings of the National Academy of Sciences, the study presented evidence that CO2 dissipates from the atmosphere much more sluggishly than scientists had previously anticipated.

As a result, the composite study concludes, phasing in strict CO2 limits (450-600 ppm) will have less beneficial impact than had previously been predicted.  Indeed, even if carbon emissions ended today, there’d still be substantial detrimental impacts—in the form of massive submersion of highly populated coastal regions due to continuing sea-level rise, and famine-inducing droughts in interior regions due to shifting weather patterns.

We then tested our subjects’ evaluation of the validity of the study.  For this purpose, we instructed them to indicate their level of agreement or disagreement with statements such as “the scientists who did the study were biased,” “computer models like those relied on in the study are not a reliable basis for predicting the impact of  CO2 on the climate,” and “more studies must be done before policymakers rely on the findings” of the study etc.

Because these items reflect the sorts of arguments that typically are advanced by climate skeptics, they enabled us to form a “dismissiveness” scale that reflected how closed- or open-minded the subjects were in assessing this evidence of climate change.

We found, not surprisingly, that subjects disposed toward hierarchical and individualistic values—in both the U.S. and the English samples—were highly dismissive, while ones disposed toward egalitarian and communitarian values were highly receptive to the evidence presented in the composite study.

But that was in a control condition in which the subjects, before reading the composite study and indicating their views of its validity, read a story about a city meeting on a traffic-light proposal, a matter completely unrelated to climate change.

There were two other experimental conditions.  In the “anti-pollution” condition, subjects read a news story that reported that expert scientists were demanding implementation of stricter carbon emission limits to offset the deleterious effects of climate change. In the “geoengineering” condition, in contrast, the subjects read a news story that reported that expert scientists were calling for more research on geoengineering in response to those same anticipated effects. 

Logically, the information in these news stories is no more related to the validity of the climate-science study that the subjects were subsequently asked to read and evaluate than was the information in the control-condition news story on traffic lights: either the evidence on carbon dissipation is valid or it isn’t; its validity doesn’t depend on what we are going to do if it is—restrict carbon emissions all the more or consider geoengineering; indeed, if the evidence is not valid, that question is moot.

But psychologically, the cultural cognition thesis predicts that which condition the subjects were assigned to could matter. 

The subjects in the geoengineering condition were seeing climate change connected to cultural meanings—“more of the same” & “yes, we can!”—that are different from the usual “game over!” & “told you so!” ones, which the anti-pollution news story was geared to reinforcing.

Because the congeniality of information’s cultural meaning shapes how readily people engage with its content, we predicted that the hierarchical individualists in the geoengineering condition would respond much more open-mindedly to the information from the climate change study on carbon dissipation.

And that prediction turned out to be true. 

In addition, cultural polarization over the validity of the climate-change study was lower for both U.S. and English subjects in the geoengineering condition than in the anti-pollution condition, where polarization was actually larger for U.S. subjects than it was in the control.

But part of the reason that polarization was lower in the geoengineering condition was that egalitarian communitarians who read the geoengineering news story reacted less open-mindedly toward the climate-change study than their counterparts who first read the anti-pollution news story.

The egalitarian communitarians in the anti-pollution condition saw no tension between -- indeed, likely perceived an affinity between -- the dire conclusions of the study and the “game over!”/“told you so!” meanings that they attach to climate change.

But the conflict between those meanings and the narrative implicit in the "geoengineering" condition woke the egalitarian communitarians up to the CO2 dissipation study's potential policy implications: if CO2 reductions won't be enough to stave off disaster, then we are going to have to do something more.  

Primed to see that the "more" was geoengineering--"more of the same!"/"yes, we can!"--many egalitarian communitarian subjects pushed back on the premise, either adopting, or rejecting with less vehemence, the dismissive responses that climate skeptics typically express toward evidence of human-caused climate change.

In sum, by inverting the cultural meanings attached to such evidence, the geoengineering news story made the hierarchical individualists more inclined to believe and egalitarian communitarians more inclined to be skeptical of climate change. That's a pretty nice corroboration, I think, of the cultural cognition thesis!
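For those curious about the mechanics, here is a minimal sketch, on simulated data with hypothetical variable names, of the kind of worldview-by-condition interaction model that can be used to quantify how much cultural polarization differs across experimental conditions. It is not the actual model, data, or estimates from the CCP study.

```python
# Minimal sketch of a worldview x condition interaction analysis on simulated data.
# Variable names, effect sizes, and the model are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000

# Continuous worldview score: higher values = more hierarchical/individualistic.
worldview = rng.normal(size=n)
condition = rng.choice(["control", "antipollution", "geoengineering"], size=n)

# Simulate a dismissiveness outcome whose worldview slope (i.e., polarization)
# differs by condition: largest in "antipollution," smallest in "geoengineering."
slope = np.select(
    [condition == "control", condition == "antipollution", condition == "geoengineering"],
    [0.50, 0.60, 0.35],
)
dismiss = slope * worldview + rng.normal(scale=1.0, size=n)

df = pd.DataFrame({"dismiss": dismiss, "worldview": worldview, "condition": condition})

# The worldview x condition interaction terms estimate how much the worldview slope
# (the degree of polarization) shrinks or grows relative to the control condition.
model = smf.ols("dismiss ~ worldview * C(condition, Treatment(reference='control'))",
                data=df).fit()
print(model.summary().tables[1])
```

In a setup like this, a negative interaction coefficient for the geoengineering condition would correspond to the pattern described above: the gap between hierarchical individualists and egalitarian communitarians narrows relative to the control.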

I don’t think, however, that this result suggests the advent of geoengineering as a subject of research and as an issue for public discussion will be a zero-sum game for public engagement with climate science.

First, contrary to the warnings of some commentators, subjects exposed to the geoengineering information did not become less concerned about climate change.  Overall, they became more concerned.

Second, the egalitarian communitarians in the geoengineering condition were less open-minded in their assessment of climate change evidence than those in the anti-pollution condition. But in absolute terms, they were still plenty open-minded—indeed, more open-minded, less dismissive—than hierarchical individualists in that very condition.

Third, the major impediment, I’m convinced, to constructive public engagement with climate science is not how much either side knows or understands scientific evidence of it.  It’s their shared apprehension that opposing positions on climate change are, in effect, badges of membership in and loyalty to competing cultural groups; that is the cue or signal that motivates members of the public to process information about climate change risks in a manner that is more reliably geared to affirming the position that predominates in their group than to converging on the best available evidence.

The key, then, is to clear the science communication environment of the toxin of antagonistic cultural meanings that now envelop the climate change issue.

The advent of public discussion of geoengineering, the CCP study implies, can help to achieve this desirable result by seeding public deliberations over climate change with meanings  congenial to a wider array of cultural styles.

 

Monday
Feb242014

Shockingly sad news . . . 

A model of models for the good life of being a scholar . . . .

Monday
Feb242014

Geoengineering & the cultural plasticity of climate change risk perceptions: Part I

Yesterday I posted a small section of a CCP paper, scheduled for publication in the Annals of the American Academy of Political & Social Sciences, that reports the results of a study on how emerging research on and public discussion of geoengineering might affect the science communication environment surrounding climate change.

I’ve been thinking of geoengineering again recently, mainly because on my trip to Cardiff University I got a chance to discuss public attitudes toward it—existing and anticipated—with Nick Pidgeon, who along with Adam Corner and other members of the Cardiff Understanding Risk Group, has been doing some great studies of this topic.

How the public will perceive geoengineering is fascinating for all kinds of reasons, but the one that I find the most intriguing is geoengineering’s inversion of the usual cultural meanings of climate change risk. 

According to the cultural cognition thesis, we should expect persons who are relatively hierarchical and individualistic to be climate change skeptics: crediting evidence of the dangers posed by human-caused climate change implies that we should be restricting commerce, industry, markets, and other forms of private orderings—activities of extreme value, symbolic as well as material, to people with these outlooks.

By the same token, we should expect persons who are egalitarian and communitarian to be highly receptive to evidence of the danger of climate change: because they already are morally suspicious of commerce, industry, and markets, to which they attribute unjust social disparities (actually, they might like to take a look at the disparities that existed in pre-market societies & figure out which ones were greater, but that’s another matter!), they find it congenial to see those activities as sources of danger that ought to be restricted.

This is the plain vanilla rendering of Douglas & Wildavsky’s “cultural theory of risk” (I don't actually buy it, to tell you the truth!)—and, indeed, Wildavsky, who died in 1993 (at the early age of 63), had already characterized global warming as “the mother of all environmental scares”:

Warming (and warming alone), through its primary antidote of withdrawing carbon from production and consumption, is capable of realizing the environmentalist’s dream of an egalitarian society based on rejection of economic growth in favor of a smaller population’s eating lower on the food chain, consuming a lot less, and sharing a much lower level of resources much more equally. 

But Wildavsky—a mainstream political liberal whose experience with the radical “free speech” movement at Berkeley left him obsessed with the “rise of radical egalitarianism”—puts a spin on climate change that contravenes the fundamental symmetry of the laws of cultural cognition. 

That is, he seems to imply that it’s only “egalitarian collectivists” who will be motivated to assign to evidence of climate change risks a significance biased in favor of their preferred way of life.

But if, as Douglas and Wildavsky so adamantly insisted in Risk and Culture, “[e]ach form of social life has its own typical risk portfolio”—if  all “people select their awareness of certain dangers to conform with a specific way of life,” and thus “each social arrangement elevates some risks to a high peak and depresses others below sight”—then there's no more reason to expect hierarchical individualists to form reliable perceptions of climate change risks than egalitarian communitarians.

Wildavsky would have come closer to conveying the logic of his and Douglas’s own position, then, if he had called global warming the “mother of all environmental risk-perception conflicts.”

If we follow the symmetry of cultural cognition out a bit further, moreover, we can see that there is in fact nothing inherently “egalitarian” in climate-change belief or inherently “individualistic” in climate-change skepticism.  

“Dangers are selected for public concern according to the strength and direction of social criticism,” we are told.  But because what effect acknowledging a particular assertion of risk will have on the stock of competing ways of life is determined not by people's "direct examination of physical evidence" but by their understanding of social meanings (those are what determine for them what the "physical evidence" signifies), all we can say is that in the context of some particular society's "dialogue on how best to organize social relations," acceptance of human-caused climate change just happens to be understood as indicting individualism and vindicating egalitarianism.

But that could change, surely!  

The case of geoengineering shows how. 

The argument for investigating its development—one forcefully advanced by both the U.S. National Academy of Sciences and the Royal Society—obviously presupposes both that human-caused climate change is happening and that it poses immense threats to human well-being.

But the cultural narrative of geoengineering is quite different from any of the other proposed responses. Whereas carbon-emission restrictions proclaim the inevitable limits of technological and commercial growth, geoengineering (much like nuclear power) asserts the potential limitlessness of the same.

“We are not like the stupid animals,” the geoengineering narrative says, “who reach the pinnacle/mode of the Malthusian curve and then come crashing down.” 

“We use our intelligence to shift the curve—deploying technology, fueled by commerce and markets, to successfully repel the very threats to our well-being that are the byproducts of commerce, markets, and technology! Brilliant!”

“It used to be said,” the geoengineering narrative continues, “that the natural population density of a city like, say, London, was shy of 4,000 persons per square mile—because at around that point people would inevitably die in droves from ingesting their own shit (literally!). But we invented modern systems of sewage and water treatment—we used our ingenuity to shift the curve—and now we can have cities (London: 12,000/sq. mi.; Sao Paulo: 20,000/sq. mi.) many, many times more dense than that!”

“Well,” the narrative concludes, “the time has come again to shift the curve, to use our ingenuity to handle the byproducts of our own ingenuity, to blast our shit into outerspace so that we don’t choke on it! Let’s go!”

This is inspiring to the individualist.

It is demoralizing to the egalitarian.  The “lesson” of climate change, for him or her, is “game over," not “more of the same”; "we told you so!," not "Yes, we can!"

The answer to our “planetary over-indulgence” is a “diet,” not “atmospheric liposuction”!

And because the cultural narrative is demoralizing to the egalitarian, geoengineering is terrifying.

The risks from unforeseen and unforeseeable consequences are too high.  After all, the climate is a classic “chaotic” system—one the sheer complexity of which defies the sort of modeling that would have to be done to intelligently manage any geoengineering “fix.”

It will never ever work, and scientists like those in the NAS and Royal Society are being foolish for even proposing to investigate its risks and benefits. Indeed, it's dangerous even to discuss geoengineering, the mere mention of which threatens to dissipate the surging public demand in the U.S. and other industrialized countries to impose a carbon tax and enact other sorts of restrictions on fossil fuel use.

But what if the best available scientific evidence on climate change—including the inevitability of genuinely catastrophic climate impacts no matter what level of carbon mitigation world governments might agree to (including complete cessation of fossil fuel use tomorrow)—suggests that nothing short of geoengineering can stave off myriad disasters, including continuing rising sea levels, violent and erratic storm activity in various parts of the world, and famine-inducing droughts over much of the rest?

Who should we expect to be skeptical of that evidence? The egalitarian communitarian or the hierarch individualist?

If in considering such evidence, the two could be observed to be trading places on whether the “scientists were biased,” “computer models could be trusted,” “the call for action is premature” etc., would that not be a nice little proof of the cultural theory of risk?

Tune in "tomorrow" & I’ll show you what the results of such an experiment look like! 


Sunday
Feb232014

Three models of risk perception -- & their significance for self-government . . .

From Geoengineering and Climate Change Polarization: Testing a Two-channel Model of Science Communication, Ann. Am. Acad. Pol. & Soc. Sci. (in press).

Theoretical background

Three models of risk perception

The scholarly literature on risk perception and communication is dominated by two models. The first is the rational-weigher model, which posits that members of the public, in aggregate and over time, can be expected to process information about risk in a manner that promotes their expected utility (Starr 1969). The second is the irrational-weigher model, which asserts that ordinary members of the public lack the ability to reliably advance their expected utility because their assessment of risk information is constrained by cognitive biases and other manifestations of bounded rationality (Kahneman 2003; Sunstein 2005; Marx et al. 2007; Weber 2006).

Neither of these models cogently explains public conflict over climate change—or a host of other putative societal risks, such as nuclear power, the vaccination of teenage girls for HPV, and the removal of restrictions on carrying concealed handguns in public. Such disputes conspicuously feature partisan divisions over facts that admit of scientific investigation. Nothing in the rational-weigher model predicts that people with different values or opposing political commitments will draw radically different inferences from common information. Likewise, nothing in the irrational-weigher model suggests that people who subscribe to one set of values are any more or less bounded in their rationality than those who subscribe to any other, or that cognitive biases will produce systematic divisions of opinion among such groups.

One explanation for such conflict is the cultural cognition thesis (CCT). CCT says that cultural values are cognitively prior to facts in public risk conflicts: as a result of a complex of interrelated psychological mechanisms, groups of individuals will credit and dismiss evidence of risk in patterns that reflect and reinforce their distinctive understandings of how society should be organized (Kahan, Braman, Cohen, Gastil & Slovic 2010; Jenkins-Smith & Herron 2009). Thus, persons with individualistic values can be expected to be relatively dismissive of environmental and technological risks, which if widely accepted would justify restricting commerce and industry, activities that people with such values hold in high regard. The same goes for individuals with hierarchical values, who see assertions of environmental risk as indictments of social elites. Individuals with egalitarian and communitarian values, in contrast, see commerce and industry as sources of unjust disparity and symbols of noxious self-seeking, and thus readily credit assertions that these activities are hazardous and therefore worthy of regulation (Douglas & Wildavsky 1982). Observational and experimental studies have linked these and comparable sets of outlooks to myriad risk controversies, including the one over climate change (Kahan 2012).

Individuals, on the CCT account, behave not as expected-utility weighers—rational or irrational—but rather as cultural evaluators of risk information (Kahan, Slovic, Braman & Gastil 2006). The beliefs any individual forms on societal risks like climate change—whether right or wrong—do not meaningfully affect his or her personal exposure to those risks. However, precisely because positions on those issues are commonly understood to cohere with allegiance to one or another cultural style, taking a position at odds with the dominant view in his or her cultural group is likely to compromise that individual’s relationship with others on whom that individual depends for emotional and material support. As individuals, citizens are thus likely to do better in their daily lives when they adopt toward putative hazards the stances that express their commitment to values that they share with others, irrespective of the fit between those beliefs and the actuarial magnitudes and probabilities of those risks.

The cultural evaluator model takes issue with the irrational-weigher assumption that popular conflict over risk stems from overreliance on heuristic forms of information processing (Lodge & Taber 2013; Sunstein 2006). Empirical evidence suggests that culturally diverse citizens are indeed reliably guided toward opposing stances by unconscious processing of cues, such as the emotional resonances of arguments and the apparent values of risk communicators (Kahan, Jenkins-Smith & Braman 2011; Jenkins-Smith & Herron 2009; Jenkins-Smith 2001).

But contrary to the picture painted by the irrational-weigher model, ordinary citizens who are equipped and disposed to appraise information in a reflective, analytic manner are not more likely to form beliefs consistent with the best available evidence on risk. Instead they often become even more culturally polarized because of the special capacity they have to search out and interpret evidence in patterns that sustain the convergence between their risk perceptions and their group identities (Kahan, Peters, Wittlin, Slovic, Ouellette, Braman & Mandel 2012; Kahan 2013; Kahan, Peters, Dawson & Slovic 2013).

Two channels of science communication

The rational- and irrational-weigher models of risk perception generate competing prescriptions for science communication. The former posits that individuals can be expected, eventually, to form empirically sound positions so long as they are furnished with sufficient and sufficiently accurate information (e.g., Viscusi 1983; Philipson & Posner 1993). The latter asserts that the attempts to educate the public about risk are at best futile, since the public lacks the knowledge and capacity to comprehend; at worst such efforts are self-defeating, since ordinary individuals are prone to overreact on the basis of fear and other affective influences on judgment. The better strategy is to steer risk policymaking away from democratically accountable actors to politically insulated experts and to “change the subject” when risk issues arise in public debate (Sunstein 2005, p. 125; see also Breyer 1993).

The cultural-evaluator model associated with CCT offers a more nuanced account. It recognizes that when empirical claims about societal risk become suffused with antagonistic cultural meanings, intensified efforts to disseminate sound information are unlikely to generate consensus and can even stimulate conflict.

But those instances are exceptional—indeed, pathological. There are vastly more risk issues—from the hazards of power lines to the side-effects of antibiotics to the tumor-stimulating consequences of cell phones—that avoid becoming broadly entangled with antagonistic cultural meanings. Using the same ability that they reliably employ to seek and follow expert medical treatment when they are ill or expert auto-mechanic service when their car breaks down, the vast majority of ordinary citizens can be counted on in these “normal,” non-pathological cases to discern and conform their beliefs to the best available scientific evidence (Keil 2010).

The cultural-evaluator model therefore counsels a two-channel strategy of science communication. Channel 1 is focused on information content and is informed by the best available understandings of how to convey empirically sound evidence, the basis and significance of which are readily accessible to ordinary citizens (e.g., Gigerenzer 2000; Spiegelhalter, Pearson & Short 2011). Channel 2 focuses on cultural meanings: the myriad cues—from group affinities and antipathies to positive and negative affective resonances to congenial or hostile narrative structures—that individuals unconsciously rely on to determine whether a particular stance toward a putative risk is consistent with their defining commitments. To be effective, science communication must successfully negotiate both channels. That is, in addition to furnishing individuals with valid and pertinent information about how the world works, it must avail itself of the cues necessary to assure individuals that assenting to that information will not estrange them from their communities (Kahan, Slovic, Braman & Gastil 2006; Nisbet 2009).

References 

Breyer, S.G. Breaking the Vicious Circle: Toward Effective Risk Regulation, (Harvard University Press, Cambridge, Mass., 1993).

Gigerenzer, G. Adaptive thinking: rationality in the real world, (Oxford University Press, New York, 2000).

Jenkins-Smith, H. Modeling stigma: an empirical analysis of nuclear waste images of Nevada. in Risk, media, and stigma: Understanding public challenges to modern science and technology (eds. J. Flynn, P. Slovic & H. Kunreuther) 107-132 (Earthscan, London; Sterling, VA, 2001).

Jenkins-Smith, H.C. & Herron, K.G. Rock and a Hard Place: Public Willingness to Trade Civil Rights and Liberties for Greater Security. Politics & Policy 37, 1095-1129 (2009).

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) (Springer London, 2012).

Kahan, D.M. Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D. M., Braman, D., Slovic, P., Gastil, J., & Cohen, G. (2009). Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology, 4(2), 87-91.

Kahan, D. M., Slovic, P., Braman, D., & Gastil, J. (2006). Fear of Democracy: A Cultural Critique of Sunstein on Risk. Harvard Law Review, 119, 1071-1109.

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahneman, D. Maps of bounded rationality: Psychology for behavioral economics. Am Econ Rev 93, 1449-1475 (2003).

Keil, F.C. The feasibility of folk science. Cognitive science 34, 826-862 (2010).

Lodge, M. & Taber, C.S. The rationalizing voter (Cambridge University Press, Cambridge ; New York, 2013).

Marx, S.M., Weber, E.U., Orlove, B.S., Leiserowitz, A., Krantz, D.H., Roncoli, C. & Phillips, J. Communication and mental processes: Experiential and analytic processing of uncertain climate information. Global Environ Chang 17, 47-58 (2007).

Nisbet, M.C. Communicating Climate Change: Why Frames Matter for Public Engagement. Environment 51, 12-23 (2009).

Philipson, T.J. & Posner, R.A. Private choices and public health, (Harvard University Press, Cambridge, Mass., 1993).

Spiegelhalter, D., Pearson, M. & Short, I. Visualizing Uncertainty About the Future. Science 333, 1393-1400 (2011).

Starr, C. Social Benefit Versus Technological Risk. Science 165, 1232-1238 (1969).

Sunstein, C.R. Laws of fear: beyond the precautionary principle, (Cambridge University Press, Cambridge, UK ; New York, 2005).

Sunstein, C.R. Misfearing: A reply. Harvard Law Review 119, 1110-1125 (2006).

Viscusi, W.K. Risk by choice: regulating health and safety in the workplace, (Harvard University Press, Cambridge, Mass., 1983).

 

Thursday
Feb202014

Democracy and the science communication environment (lecture synopsis and slides)

Gave a talk earlier in the week at Cardiff University, the last stop on my fun "cross-cultural cultural cognition road trip." Cardiff's Understanding Risk Research Group features a '27-Yankees equivalent lineup of risk perception scholars--including Nick Pidgeon, Wouter Poortinga, Adam Corner & Lorraine Whitmarsh (I decided not to use that metaphor during my talk)--who are surrounded by top-notch sociologists studying technology and society. They also have a high-powered group of science communication scholars. I had an amazing few days there & felt very sad when the time came to leave!
 
Slides from my talk are here. I can't quite remember how I put things, but it was something like this . . . .

0. What is this “science of science communication”?  The science of science communication (SSC) can be understood as a remedy for two fallacies.

The first is res ipsa loquitur (“the thing speaks for itself”): the validity of valid science is manifest, making scientific study of it neither interesting nor necessary.

The second is ab uno disce omnes (“from one, learn all”): the scientific knowledge necessary to enable a doctor to meaningfully advise a patient on a complicated treatment decision is the same as the knowledge necessary to enable a science journalist to edify a curious member of the public, an empirical researcher to advise a policymaker, an educator to teach a high school student the theory of evolution, etc.

My remarks are mainly directed at the ab uno fallacy. I want to describe the distinctive species of SSC that is most likely to evade comprehension if one makes the mistake of thinking it’s only one thing. It is also the one that is arguably most important for the well-being of democratic society. 

The aim of this species of SSC is to protect the science communication environment.

1. The puzzle of cultural polarization over risk

Members of the public in the U.S. are highly divided on all manner of fact relating to climate change. So are members of the public in many other nations, including the UK.

There are other risks—from GM foods to nuclear power to gun ownership to vaccination against infection by HPV or other contagious diseases—that fracture the members of some of these societies but not others.

Not to be struck by the puzzling nature of this phenomenon is to admit a deficit in curiosity. It’s not surprising at all that people with different values would disagree about what to do about a societal risk like climate change or gun possession. But there’s nothing in how much one weights equality relative to wealth, or security relative to liberty, that determines whether the earth is heating up as a result of human activity or whether permitting citizens to carry concealed handguns in public deters violent assaults.

It’s not surprising either that ordinary members of the public would disagree with one another on facts the nature of which turns on evidence as technically complex as that surrounding climate change, nuclear power, or gun control. 

But if complexity were the source of the problem, we’d expect disagreement to be randomly distributed with respect to cultural and political values, and to abate as individuals become progressively more comprehending of science. 

Not so: on the contrary, the most science comprehending members of the public are the most culturally polarized! (At least in the U.S.; I’m not aware of research of this sort with non-US samples & would be grateful to anyone who fills in this gap in my knowledge, if it is one).

What’s the explanation for such a peculiar distribution of beliefs—and on facts that not only admit of investigation by empirical means but that have in fact been investigated by expert empirical methods?

2. The cultural cognition thesis

The answer (or certainly a very large part of it) is cultural cognition.

Cultural cognition is a species of motivated reasoning, which refers to the tendency of people to conform their assessment of all manner of information (empirical data, logical arguments, brute sense impressions) to some goal or interest independent of forming a correct judgment. 

The cultural cognition thesis holds that people can be expected to conform their perceptions of risk and like facts to the stake they have in maintaining their connection to and status within important affinity groups.

The nature of these commitments can be measured by various means, including right-left political outlooks, but in our research we ordinarily do so with scales patterned on the “worldview” dimensions associated with Mary Douglas’s “group-grid” framework.
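Since the worldview types invoked throughout this talk (hierarchical individualist, egalitarian communitarian, and so on) are simply combinations of scores on those two continuous scales, a toy sketch may help fix ideas. The function, the scale names, and the zero cutpoints below are purely illustrative assumptions, not CCP's actual scoring procedure.

```python
# Illustrative only: map two hypothetical worldview scores to a quadrant label.
def worldview_type(hierarchy: float, individualism: float) -> str:
    grid = "hierarchical" if hierarchy > 0 else "egalitarian"          # "grid" dimension
    group = "individualist" if individualism > 0 else "communitarian"  # "group" dimension
    return f"{grid} {group}"

print(worldview_type(0.8, 0.6))    # -> "hierarchical individualist"
print(worldview_type(-0.5, -0.7))  # -> "egalitarian communitarian"
```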

3. Some evidence

Studies conducted by myself and my collaborators have generated various forms of evidence in support of the cultural cognition thesis—and against rival theories that are often used to explain political conflict over societal risks.

a. Cultural cognition of scientific consensus. In one study, we performed an experiment that showed how cultural cognition influenced formation of public perceptions of what expert scientists believe. The results showed that how readily individuals of diverse cultural outlooks identified a scientist as an “expert” on climate change, nuclear power, or gun control depended on whether that scientist was depicted as espousing a position consistent with the one that prevails in the individuals’ cultural groups.

If individuals selectively credit and dismiss evidence of “expert” opinion in this fashion, they will become culturally polarized over what scientific consensus is on disputed issues.  And, indeed, the study found that in all cases the vast majority of subjects perceived that “scientific consensus” on the relevant issue—climate change, nuclear power, and gun possession—was consistent with the position that prevailed in their cultural group.

The study findings were not only consistent with the cultural cognition thesis, but also inconsistent with two alternatives.  One of these attributes political conflict over societal risks to one or another group’s hostility to science. In fact, no group subscribed to a position that it perceived to be contrary to prevailing scientific opinion.

The second alternative explanation sees one or another group as more attuned to scientific consensus than its rivals. But in fact, all groups were equally likely to view as the “consensus” among expert scientists the position contrary to the one endorsed as the “consensus” position by the U.S. National Academy of Sciences.

b. “Feeling” the heat—and the hurricanes, floods, tornados, etc.  A common theme—indeed, the dominant one for commentators who derive their explanations from syntheses of the general literature rather than from original empirical research—attributes popular conflict over climate change to the public’s overreliance on heuristic, “system 1” as opposed to more reflective, dispassionate “system 2” information processing.

Those who advance this thesis typically predict that individuals will begin to revise upward their perception of the seriousness of climate change risks as they experience climate-change impacts first hand.  “Feeling” climate change, it is argued, will create the emotionally vivid impressions that those who form their risk perceptions heuristically need before they will start taking climate change seriously.

This prediction is also contrary to the evidence. 

It’s true that individuals’ perceptions of climate-change risk correlate with their perception that temperatures in their area have been increasing in recent years. But their perceptions of recent local temperatures are not predicted by what those temperatures have actually been.

Rather, they are predicted by their cultural outlooks, suggesting that individuals selectively attend to or recall weather extremes in patterns that reflect their groups’ position on climate change.

Nor do individuals appear to uniformly revise their perception of climate-change risks as they experience significant extreme-weather hardships. A CCP study of residents of southeast Florida found that the number of times a person had been forced to evacuate his or her residence, had been deprived of access to drinking water, had suffered property damage, etc. as a result of extreme weather or flooding had a very modest positive impact on the perceived risk of climate change for egalitarian communitarians—the individuals most culturally predisposed to credit evidence of climate change—but none for hierarchical individualists—those most culturally predisposed to dismiss such evidence.

In other words, people don’t “believe” in climate change when they “see” it; they see it only when they already believe it.

Cultural cognition predicts this—although so does elementary logic, since individuals who experience such events can’t “see” or “feel” the cause of them. What they see extreme weather as evidence of (climate change, tolerance of gay marriage, nothing in particular, etc.) necessarily depends on their assent to some account of how the world works that they are not themselves in a position to verify. And that’s where cultural cognition comes in.

c. Motivated system 2 reasoning. The popular “thinking fast, thinking slow” account of climate-change controversy also implies that the members of the public most disposed to use reflective “system 2” reasoning can be expected to form perceptions of climate risk more in line with scientific consensus. 

Again, the evidence does not bear this claim out. In fact, the members of the public most disposed to use reflective “system 2” reasoning are the ones who are the most polarized.

That’s what the cultural cognition thesis tells us to expect.  Those who possess the skills and habits of mind necessary to critically evaluate complex arguments and data have more tools at their disposal to fit their assessments of evidence to the beliefs that are predominant in their identity-defining groups.

4. A polluted science communication environment

The spectacle of intense, persistent political conflict can easily distract us from the state of public opinion on the vast run of facts addressed by decision-relevant science. The number of risk issues that divide members of the public along cultural lines is infinitesimal in relation to the number that don’t but could.  There’s no meaningful level of political contestation over the health risks of unpasteurized milk, medical x-rays, high-power transmission lines, fluoridated water, etc. On these issues, moreover, culturally diverse individuals do tend to converge on the best-available evidence as their capacity for science comprehension increases.

The reason that these issues do not provoke controversy, moreover, is not that individuals understand the scientific evidence on the relevant risks more completely than they understand the evidence on climate change or nuclear power or the HPV vaccine or gun control.

Individuals (including scientists) align themselves appropriately with a body of decision-relevant science much vaster than they could be expected to comprehend or verify for themselves. They achieve this feat by the exercise of a reliable faculty for recognizing insights that originate in the methods that science uses to discern the truth.

Their everyday interactions with others who share their cultural worldviews are the natural domain for the use of this faculty.  Individuals spend most of their time with others who share their values; they can exchange information with them readily, without the friction that might attend interactions with individuals whose outlooks on life differ fundamentally from their own; and they are more able to read those with whom they share defining commitments, and thus to distinguish those of their number who know what they are talking about from those who don’t.

All the various affinity groups within which individuals exercise their knowledge-recognition faculties are amply stocked with people high in science comprehension, and all are fully equipped with high-functioning processes for transmitting what’s become collectively known through science. So while admittedly (even regrettably) insular, the ordinary interaction of ordinary individuals with those who share their cultural worldviews generally succeeds in aligning individuals’ beliefs with the best available evidence relevant to the decisions they must make in their personal and collective lives.

This process breaks down only in the rare situation when positions on particular issues become entangled in antagonistic cultural meanings, effectively transforming them into badges of membership in and loyalty to one or another competing group. At that point, the stake that an ordinary individual has in forming and persisting in beliefs consistent with others in her group will dominate the stake she has in forming beliefs that reflect what’s known to science: what she personally believes—right or wrong—about climate change, nuclear power, and other societal risks won’t have any impact on the level of risk she or anyone else faces; forming a belief at odds with the one that predominates in her group, however, threatens to estrange her from those on whom her welfare—material and psychic—depends.

These antagonistic cultural meanings are a form of pollution in the science communication environment.  They literally disable the ordinarily reliable faculty ordinary individuals rely on to discern what’s known by science.

Engaging information in a manner that reflects their individual interest in forming and persisting in group-convergent beliefs, diverse citizens are less likely to converge on the best available evidence relevant to the health and well-being of them all.

Once the factual presuppositions of policy choices have become symbols of opposing visions of the best life, debates over risk regulation become occasions for illiberal forms of status competition between competing cultural groups.

This polluted science communication environment is toxic for liberal democracy.

 5. The science of #scicomm environment protection

The entanglement of positions on societal risk in culturally antagonistic meanings is not a consequence of immutable natural laws or historical processes.  Specific, identifiable events—ones originating in accident and misadventure as often as strategic behavior—steer putative risk sources down this toxic path. 

By empirically investigating why a putative risk source (e.g., mad cow disease or GM foods) took this route in one nation but not another, or why two comparable risk sources (the HPV vaccine and the HBV vaccine) travelled different paths in a single nation (the U.S.), the science of science communication enables us to understand the influences that transform policy-relevant facts into divisive markers of group identity.

The same methods, moreover, can be used to control such influences.  They can be used to forecast their likely development in time to enable actors in government and civil society alike to act to avoid their occurrence. They can also be used to formulate and test strategies for disentangling positions from antagonistic meanings where such preventive measures fail.

The vulnerability of risk regulation to cultural contestation is not a consequence of one or another group’s hostility to science, of citizens’ “bounded rationality,” or of some inherent drive or appetite on the part of competing groups to impose a sectarian orthodoxy on society.

It is the predictable but manageable outgrowth of the same conditions of political liberty and social pluralism that make liberal democracy distinctively congenial to the advance of scientific knowledge.

By using the hallmark methods of science to protect the science communication environment, we can assure our enjoyment of the unprecedented knowledge and freedom that are the hallmarks of liberal democracy.

 

Saturday
Feb152014

Don't be a science miscommunicator's dope (or dodo)

I've blogged about how the NRA uses the expressive "rope-a-dope" tactic to lure gun-control proponents into a style of advocacy that intensifies cultural antagonism and thus deepens public resistance to engaging sound empirical evidence.

But the same tactic is used--the same trap laid--by enemies of constructive public engagement with decision-relevant science in other areas. Randy Olson's Flock of Dodos is a brilliant, and brilliantly entertaining, demonstration of the dynamic at work in the evolution debate.

The CCP Vaccine Risk Perception and Ad Hoc Risk Communication report warns risk communicators to avoid the "rope-a-dope" trap when engaging propagators of vaccine misinformation:

4. Risk communicators and advocates should be wary of the expressive “rope-a-dope” trap.

Cultural contestation over risks or other facts that admit of scientific inquiry is inherently disruptive of the processes by which ordinary citizens come to know what is known to science (Bolsen & Druckman 2013; Kahan 2013a). When positions become conspicuously identified with membership in identity-defining affinity groups, diverse individuals will not only be exposed disproportionately to information that reflects the position that predominates in their groups. They will also experience psychic pressures that motivate them to use their critical reasoning dispositions to persist in those positions in the face of contrary evidence (Kahan, Peters et al. 2013). For this reason, polarization will be even more intense among members of these groups whose science comprehension capacities are greatest (Kahan 2013b; Kahan, Peters et al. 2012). Because these individuals understandably play a key role in certifying what is known to science within their groups, their divisions will even more deeply entrench other group members’ commitment to the position that predominates among their peers.

Groups intent on promoting cultural polarization can actually use this dynamic to their advantage. By engaging in provocative, culturally partisan and indeed often purely symbolic attacks on positions they disagree with, interest groups can provoke their opponents into denouncing them and their positions in terms that are similarly partisan, recriminatory, and contemptuous. The spectacle of dramatic conflict is what transmits to ordinary citizens—most of whom are largely uninterested in politics and lacking strong partisan sensibilities (Zaller 1992)—that the issue in question is one on which competing positions are badges of group membership and loyalty. That signal benefits the sponsors of group conflict. Indeed, the influence that open conflict exerts on members of the opposing groups will be much stronger than any influence the sponsors of such conflict could have generated by acting alone, not to mention much stronger than the content of the arguments that either side is making.

Vaccine-risk communicators should be wary of this trap, which has been used effectively against advocates of climate science (Pielke 2013) and gun control (Kahan 2013c). Responding to misinformation necessarily elevates the profile of the misinformers. It also creates a deliberative atmosphere in which culturally partisan advocates (some out of innocent exuberance, but others out of a motivation to assimilate vaccine-risk communication into a broader portfolio of publicly arousing issues) will predictably resort to divisive attacks, ones akin, say, to those that inform the “anti-science” trope.

Conspicuous instances of conflict among groups whose members are associated with competing styles and who resort to culturally assaultive idioms are what generates in the minds of ordinary members of the public the impression that disputed positions are aligned with membership in competing groups. It was likely because so many parents of diverse outlooks learned of the HPV vaccine from exchanges like these—as opposed to exchanges with pediatricians or other health experts—that that vaccine triggered a volume of controversy experienced by no other universal childhood or adolescent vaccine, including the HBV vaccine, which also protects people from a sexually transmitted disease and which is widely included in the schedule of vaccinations required for school enrollment in the vast majority of states (Kahan 2013a).

Steering childhood vaccines clear of the risk of this disorienting form of conflict certainly does not mean that misinformation should routinely be ignored. But it does mean that risk communicators should make a careful assessment of the need to respond and, where there is such a need, of how to present corrective information in a manner that is free of resonances that convey cultural partisanship.

References

Bolsen, T., Druckman, J. & Cook, F.L. The Effects of the Politicization of Science on Public Support for Emergent Technologies. Institute for Policy Research Northwestern University Working Paper Series (2013). 

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013a).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013b).

Kahan, D.M. The NRA’s "Expressive-Rope-a-Dope-Trick". in Cultural Cognition Project Blog (Sept. 3, 2013c).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116 (2013)

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Friday
Feb142014

Culture, rationality, and the tragedy of the science communications commons (lecture synopsis and slides)

Enjoyed the privilege and pleasure of delivering a lecture at the vibrant,  bustling University of Nottingham last night. The culture that I and the audience members—students and faculty from the university and curious, critical-thinking members of the larger community—share creates an affinity between us that makes us more like one another than either of us is like most of the members of our respective societies. But of course the U.S. and U.K. both enjoy public cultures that enable those who see pursuit of knowledge and exchange of ideas as the best life--a truly peculiar notion in the eyes of the vast majority--to live it. Are we not morally obliged to reciprocate this benefit? 

I wish I had spoken for less time so that I could have engaged my friends in discussion for longer.  But slides here, and a reconstruction of my fuzzy recollection of what I said below.  

0. The science communication problem.  The science communication problem refers to the failure of valid, compelling, and accessible scientific evidence to dispel public conflict over risks and other policy-relevant facts to which that evidence applies. The climate change controversy is the most conspicuous instance of this phenomenon but is not the only one: historically nuclear power and chemical pesticides generated conflicts between expert and public understandings of risk; today disputes over GM foods in Europe and the HPV vaccine in the U.S. feature forms and levels of political controversy over facts that admit of empirical investigation as well.

Of course, no one should find it surprising that risk regulation and like forms of science-informed policymaking are politically contentious. Facts do not determine what to do; that depends on judgments of value, which naturally, appropriately vary among reasoning people in a free society. 

But values don’t determine facts either.  The answer to the question whether the earth’s temperature has increased in recent decades as a result of human activity turns on empirical evidence the proper understanding of which is the same whether one is an “individualist” or an “egalitarian,” a “liberal” or a “conservative,” a  “Republican” or a “Democrat.”

Accordingly, whatever position one thinks the best evidence supports, one should be puzzled by the science communication problem.  Indeed, one should be puzzled even if one thinks the best available evidence doesn’t clearly support any particular position: there’s no reason why people of diverse values should be unable to recognize that, much less any reason for them to form positions in such circumstances that correlate so strongly with their views about the best way to live.

So what explains the science communication problem? And what, if anything, can be done about it?

I will describe evidence relating to two hypothesized explanations for the science communication problem, and then advance a set of normative and prescriptive claims based on what I think (for the time being, of course) is the account that the evidence most compellingly supports.

1. 2 hypotheses & some evidence.  The dominant account of the science communication problem among both the academic and the popular commentators (including the many popular commentators who pose as scholarly ones) is the “public irrationality thesis” (PIT).  PIT is related to the often-derided “knowledge deficit” theory—a position I’m not actually sure any serious scholar has ever advanced—but in fact puts more emphasis on the public’s limited capacity to give proper effect to scientific evidence of risk. Building on Kahneman’s popularization of the “system 1/system 2” conception of dual-process reasoning, PIT attributes public controversy over climate change and other societal risks to the public’s excessive reliance on unconscious, affect-driven heuristics (“system 1”) and its inability to engage in the conscious, effortful, analytic form of reasoning (“system 2”) that characterizes expert risk analysis.

If PIT proponents were trying to connect their understanding to the evolving empirical evidence on public risk perceptions, they’d surely be qualifying their incessant, repetitious, formulaic espousal of it. Those members of the public who display the greatest degree of “system 2” reasoning ability are no more likely to hold views consistent with scientific consensus. Indeed, they are even more likely to be culturally and ideologically polarized than members of the public who are most disposed to use “system 1” heuristic forms of reasoning.

A second explanation for the science communication problem is the “cultural cognition thesis” (CCT).  CCT posits that the stake individuals have in their status in affinity groups whose members share basic understandings of the best life can be expected to interact with the various psychological processes by which they make sense of evidence of risk.  Supporting evidence includes studies showing that individuals much more readily perceive scientists to be “experts” worthy of deference on disputed societal risks when those scientists support than when they oppose the position that is predominant in individuals’ cultural group.

This selectivity can be expected to generate diverging perceptions of what expert consensus is on disputed risks.  And, indeed, empirical evidence confirms this prediction.  No cultural group believes that the position that is dominant in its group is contrary to scientific consensus—and across the run of disputed societal risks, all of the groups can be shown to be poorly informed on the state of expert opinion.

The magnification of polarization associated with the disposition to engage in “system 2” forms of information processing also fits CCT.  Individuals who are adept at engaging empirical evidence have a resource that those who must rely more on “system 1” substitutes lack for ferreting out evidence that supports their group’s position and rationalizing away the evidence that doesn’t.

2. The tragedy of the science communications commons. PIT, then, has matters essentially upside down. The source of the science communication problem is not too little rationality on the part of the public but rather too much.  The behavior of an ordinary individual as a consumer, a voter, or an advocate, etc., can have no material impact on the level of risk that person or anyone else faces from climate change. But if he or she forms a position on that issue that is out of keeping with the one that predominates in that person's group, he or she faces a considerable risk of estrangement from communities vital to his or her psychic and material well-being.  Under these conditions, a rational actor can be expected to attend to information in a manner that is geared more reliably to forming group-congruent than science-congruent risk perceptions.  And those who are highest in critical reasoning dispositions will do an even better job of it than those whose “bounded rationality” leaves them unable to recognize the evidence that supports their group’s position or to resist the evidence that undermines it.

But as individually rational as this form of information processing is, it is collectively irrational for everyone to engage in it simultaneously. For in that case, the members of a self-governing society are less likely to converge, or to converge as quickly as they otherwise would, on the best available evidence.

Yet even that won’t make it any more rational for an individual to attend to information in a manner reliably geared to forming science- as opposed to group-congruent beliefs—because, again, nothing he or she does based on a “correct” understanding will make any difference anyway.

This misalignment of individual and collective interests in the formation of risk perceptions consistent with the best available evidence is the tragedy of the science communications commons.

3. A polluted science communication environment. The signature attributes of the science communication problem—the correlation between perceptions of risk and group-defining values, and the magnification of this effect by greater reasoning proficiency—are pathological.  They are not only harmful, but unusual.  The number of societal risks that reflect this pattern relative to the number that do not is tiny.

In the cases in which diverse members of the public converge on the best available evidence, the reason is not that they genuinely comprehend that evidence. Individuals must, not only to live well but simply to live, accept as known by science much more than they could ever make sense of, much less verify, on their own. 

Ordinary individuals manage to align themselves appropriately with decision-relevant science essential to their individual and collective well-being not by becoming experts in substantive areas of knowledge but by becoming experts in identifying who knows what about what.  Nullius in verba—or “take no one’s word for it,” the motto of the Royal Society—is charming but silly if taken literally.  What’s essential is to take the word only of those whose knowledge has been attained by the methods of ascertaining knowledge distinctive of science.

The remarkable ability of ordinary members of the public—ones of diverse reasoning dispositions as well as diverse values—to reliably identify who knows what about what breaks down, however, when positions on issues become entangled in meanings that transform them into symbols of group identity and loyalty.  At that point, the stake individuals have in forming group-congruent beliefs will dominate the stake they have in forming science-congruent ones.

Such meanings, then, are a kind of pollution in the science communication environment. They disable the normally reliable faculties that individuals use to ascertain what is known to science.

4. “. . . a new political science . . .” (a) Risks are not born with antagonistic cultural meanings but rather acquire them through one or another set of events that might well have turned out otherwise.

It wasn’t inevitable, for example, that the HPV vaccine would acquire the divisive association with contested norms on gender, sexuality, and parental autonomy that polarized opposing groups’ perceptions of its risks and benefits in the U.S. The HBV vaccine also confers immunity against a sexually transmitted disease that causes cancer (hepatitis B), and the CDC’s recommendation to add it to the schedule of vaccinations required as a condition of middle school enrollment generated no meaningful controversy among culturally diverse citizens—over 90% of whose children received the shot every year during which the states were embroiled in controversy over making the HPV shot mandatory.

The antagonistic cultural meanings that fuel political controversy over GM foods in Europe aren’t inevitable either.  They are completely absent in the U.S.

(b) The same methods that scholars of public risk perception use to make sense of these differences, moreover, can be used to forecast the conditions that make one or another emerging technology—such as synthetic biology or nanotechnology—vulnerable to becoming suffused with such meanings. Action can then be taken to steer these technologies down a safer path—not for the purpose of making members of the public believe they are or aren’t genuinely hazardous, but rather for the purpose of assuring that members of the public will reliably recognize the best available evidence on exactly that.

Indeed,  the danger of cultural polarization associated with the path the HPV vaccine traveled in being introduced to the public was forecast with such methods, which corroborated the warnings of numerous health professionals and others.

This evidence wasn’t rejected; it simply wasn’t considered. There was no mechanism in any part of the drug-regulatory approval process for anyone to present, or for any institution to act on, evidence of the hazards associated with fast-track approval of a girls-only STD vaccine combined with a high-profile nationwide campaign in state legislatures to make the vaccine mandatory.

(c)  Without systematic procedures to acquire and intelligently use scientific knowledge to protect the science communication environment, its contamination is inevitable.

The inevitable danger of such conflicts is built into the constitution of the Liberal Republic of Science. The same institutions and culture of political freedom that fuel the engine of competitive conjecture and refutation that drives science assure—mandate—that there be no single institution endowed with the authority to certify what is known to science. But the immensity and complexity of what is known cannot certify or announce itself; the idea that it can is the sentimental, sociologically and epistemologically naïve variant of nullius in verba.

In the Open Society there will be a plurality of certifiers—in the form of communities of free individuals associating with others with whom they have converged in the exercise of their reason on a shared understanding of the best way to live. 

This dynamic, unregulated, pluralistic system of certification of what is known to science works in the vast run of cases!

Yet it is inevitable—statistically!--that it sometimes won’t: the sheer enormity of things that science can discern in a free society & the non-zero probability that any one of those can become entangled in antagonistic cultural meanings mean that risk regulation will remain a permanent site of illiberal forms of status competition among the plurality of cultural groups in which free, reasoning individuals form their understanding of what is known to science. This is Popper’s revenge . . . .

It is foolish (an embarrassing display of shallow thinking combined with indulgence of tribal chauvinism) to blame “profit-mongering corporations” or “political extremists” for disasters like the one that occurred with the introduction of the HPV vaccine in the U.S. Until we—the citizens of the Liberal Republic of Science—use our reason and exercise our will to create a common culture of evidence-based science communication dedicated to protecting the science communication environment, we are destined to suffer the reason-effacing, welfare-enervating, freedom-annihilating spectacle of cultural conflict over risk.

(d) Writing at the birth of liberal democracy, Tocqueville famously remarked on the need for “a new political science for a world itself quite new.”

Today we need a new political science—a science of science communication—dedicated to protecting the process by which plural communities of free and reasoning individuals certify to themselves what is known by science.

We must use our reason to protect the historic condition of freedom and the unprecedented immensity of collective knowledge that are the reciprocal defining features of the Liberal Republic of Science. 

Monday
Feb102014

"Motivated Numeracy": What's the Point? (lecture synopsis, slides)

Gave a lecture/workshop today at Cambridge. It was advertised as being a session on the CCP working paper, “Motivated Numeracy and Enlightened Self-Government.”  It was—but I added some context/motivation.  Outline of what I remember saying below & slides here.  Lots of great questions & comments after—on issues from the influence of cultural cognition on scientists to the relative potential impact of fear & curiosity in fortifying critical reasoning dispositions!

I. What’s the point? The “Motivated Numeracy” study is the latest (more or less) installment in a series intended to make sense of and maybe help solve the science communication problem. The “science communication problem” refers to the failure of valid, compelling, and widely accessible scientific evidence to dispel public controversy over risks and other policy-relevant facts. Climate change is a salient instance of the problem but is not the only one. The conflict between public and expert views on the safety of nuclear power once attracted nearly as much attention. There are other contemporary instances of the science communication problem, too, including the controversy over mandatory HPV vaccination in the US and GM foods in Europe (but actually not in the US).

II.  Two theories. What accounts for the science communication problem?  One explanation, the “public irrationality thesis,” attributes public controversy over climate change and other societal risks to the public’s limited capacity to comprehend science. The problem is only in part one of a “knowledge deficit”; more important is a deficit in critical reasoning. Members of the public rely excessively on largely unconscious, heuristic-driven forms of information processing and thus overestimate more emotionally compelling dangers—such as terrorism—relative to less evocative ones like climate change, which the conscious, analytic modes of risk analysis used by experts show are even more consequential.  Informed by Kahneman’s “system 1/system 2” conception of dual-process reasoning, PIT is more or less the dominant account in popular and academic commentary.

Another account of the science communication problem is the “cultural cognition thesis.” Cultural cognition involves the tendency of individuals to conform their perceptions of risk and other policy-relevant facts to the positions that are dominant in the affinity groups that play a central role in organizing their day-to-day lives.  As a species of motivated reasoning, CCT is distinguished by its use of Mary Douglas’s “cultural worldview” framework to specify the core commitments of the affinity groups that shape information processing.  CCT is distinguished from  other conceptions of the “cultural theory of risk” by its attempt to root the influence that group commitments of this sort play in shaping perceptions of risk in cognitive mechanisms that admit of empirical investigation by the methods featured in social psychology and related disciplines.

III.   Three studies. Motivated Numeracy describes the third in a series of studies dedicated to investigating the relationship between PIT and CCT.  The first study, an observational one that examined the climate-change risk perceptions of a large nationally representative sample, made two findings at odds with PIT. 

The first finding had to do with the impact of science comprehension on the perceived risk of climate change. If, as PIT asserts, the reason that the average member of the public is less concerned with climate change risks than he or she should be is that he or she lacks the capacity to make sense of scientific evidence, then one would expect people to become more concerned about climate change as their science literacy and quantitative reasoning abilities increase.  But this isn’t so: the study found that the impact of these attributes on perceived climate change risk was close to zero for the sample as a whole.

The second finding contrary to PIT had to do with the relationship between science comprehension and cultural cognition.  PIT views cultural cognition as just another heuristic substitute for the capacity to understand and give proper effect to scientific evidence of risk: those who can are reliably guided by the best available evidence; those who can’t must go with their gut, which is filled with crap like “what do people like me believe?”  If this position is correct, one would expect the risk perceptions of culturally diverse individuals to be progressively less correlated with their groups’ positions and more correlated across groups as their science comprehension capacity increases.

But not so.  On the contrary, cultural polarization, the first study found, increases as science comprehension does.
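For readers who think in code, here is a minimal sketch of the kind of interaction model that would reveal that pattern: a near-zero average effect of science comprehension alongside a worldview gap that widens as comprehension rises. The data are simulated and the variable names invented; this is not the study's actual data or analysis script.

```python
# Hypothetical illustration of "polarization increasing with science comprehension".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
worldview = rng.normal(size=n)   # e.g., hierarchy-individualism (+) vs. egalitarian-communitarianism (-)
sci_comp = rng.normal(size=n)    # composite science-literacy / numeracy score
# Simulate an outcome with no main effect of comprehension, but a worldview gap that grows with it
risk_concern = -0.5 * worldview - 0.4 * worldview * sci_comp + rng.normal(size=n)

X = sm.add_constant(np.column_stack([worldview, sci_comp, worldview * sci_comp]))
fit = sm.OLS(risk_concern, X).fit()
print(fit.params)  # coefficient on sci_comp is near zero; negative worldview x sci_comp interaction
```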

Why? The CCT explanation is that individuals are using their knowledge of and capacity to reason about scientific evidence to form and persist in beliefs that reflect their group identities.

The second study used experimental methods to test this hypothesis.  The study found, consistent with CCT, that individuals who display the strongest disposition for cognitive reflection—a habit of mind associated with conscious, effortful system 2 reasoning—are more likely to discern the ideological implications of conceptually complicated information and selectively credit or reject it depending on its congeniality to their cultural outlooks.

The third and final study—the one the results of which are reported in “Motivated Numeracy”—likewise used an experimental design to assess whether individuals can be expected to use their critical reasoning dispositions in a manner that promotes identity-congruent rather than truth-congruent beliefs.  The study examined the interaction of right-left ideology (an alternative way to measure the group affinities that generate cultural cognition) with numeracy, a quantitative reasoning capacity associated with “system 2” information processing.

Subjects were instructed to examine a problem of a type understood to predict vulnerability to reliance on a defective heuristic alternative to the proper assessment of covariance.  The problem involved assessing whether the results of an experiment supported or negated a hypothesis.  For subjects in the “control group,” this problem was styled as one involving the effectiveness of a new skin-rash treatment.  As expected, only the most highly numerate subjects were likely to correctly interpret the experimental data.

Another version of the problem was styled as an experiment involving the effectiveness of a ban on carrying concealed weapons.  In this condition, high-numerate subjects again did much better than low-numerate ones, but only when the data, properly construed, generated an ideologically congenial result. When the data, properly construed, supported an ideologically noncongenial result, high-numerate subjects latched onto the incorrect but ideologically satisfying heuristic alternative to the logical analysis required to solve the problem correctly.

Because high-numeracy subjects used their quantitative reasoning powers selectively to credit evidence that low-numeracy subjects could not reliably interpret, high-numeracy subjects ended up more likely on average to disagree with one another than low-numeracy ones were.  The impact of science comprehension in magnifying cultural polarization on climate change is consistent with exactly this pattern of ideologically opportunistic critical reasoning.
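To make the structure of the covariance task concrete, here is a minimal sketch with hypothetical cell counts (not the study's actual stimulus). The normatively correct strategy is to compare outcome rates across the two rows of the 2x2 table; the tempting heuristic is to fixate on the largest raw cell count.

```python
# Hypothetical 2x2 covariance problem: did the treatment (or the gun ban) work?
def treatment_worked(improved_treated, worsened_treated,
                     improved_untreated, worsened_untreated):
    rate_treated = improved_treated / (improved_treated + worsened_treated)
    rate_untreated = improved_untreated / (improved_untreated + worsened_untreated)
    return rate_treated > rate_untreated

# The treated row has by far the most "improved" cases (198), which lures heuristic
# reasoners, yet its improvement rate (~70%) is lower than the untreated row's (~83%).
print(treatment_worked(198, 84, 93, 19))  # -> False: the treatment did not help
```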

IV. One synthesis.  The studies investigating the interaction of PIT and CCT support (provisionally, as always!) a cluster of interrelated descriptive, normative, and prescriptive conclusions. 

A. The tragedy of the science communication commons. The science communication problem is a result not of too little rationality but rather too much.  Because the beliefs and actions of any ordinary individual member of the public can’t affect climate change, neither she nor anyone she cares about will be put at risk if she makes a mistake in interpreting the best available evidence.  But if such a person forms a position that is out of keeping with the dominant one in her affinity group, the consequences—in estrangement from those she depends on for support—can be extremely detrimental.  It thus is individually rational for individuals to attend to information on societal risks in a manner that more reliably connects their beliefs to those shared by others with their defining outlooks than to the best available evidence.  The more proficient they are in reasoning about scientific evidence, moreover, the more successful they’ll be in forming and persisting in such beliefs.

Such behavior, however, is collectively irrational. If all individuals pursue it simultaneously, they will not converge, or will not converge as quickly as they should, on valid evidence essential to their welfare.  Yet this predictable consequence will not change the psychic incentive that any individual faces to form group- rather than truth-convergent beliefs.

The science communication problem thus involves a distinctive form of collective action problem—a tragedy of the science communications commons.

B. Pathological meanings. The signature attributes of the science communication problem—cultural polarization magnified by science comprehension—are not normal. The number of risk perceptions and like beliefs that display this pattern relative to the number that do not is tiny. On issues from fluoridation of water to the safety of medical x-rays, the most science comprehending individuals do converge, pulling along those who share their cultural outlooks.  This process of knowledge transmission breaks down only when positions on disputed issues become symbols of membership in and loyalty to competing groups—at which point the stake ordinary individuals will have in forming group-convergent beliefs will systematically dominate the stake they have in forming truth-congruent ones. 

This sort of entanglement of risk perceptions and culturally antagonistic meanings is a pathology—both in the sense of being harmful and in the sense of being unusual or opposed to the normal, healthy functioning of collective belief formation.

C. “Scicomm environment protection” as a public good.  The health of a democratic society depends on the quality of the science communication environment just as the health of its members depends on the quality of the natural one.  Antagonistic cultural meanings are a form of pollution in the science communication environment that disables the exercise of the rational faculties that ordinary citizens normally and reliably use to discern what’s known to science. Protecting the science communication environment from this toxin is a public good essential to enlightened self-government. 

By  using reason, we can protect reason from the distinctive threats that the science communication problem comprises.

Saturday
Feb082014

Cross-cultural cultural cognition road trip

Here's my schedule for the next week and a half -- or at least parts of it.

 

Stop by if in the neighborhood -- otherwise I'll send postcard reports now & again!

(Actually, I'm surprised that I'm giving the same talk at Cardiff & Nottingham--but I doubt that I really will!)

Wednesday
Feb052014

Science journalists: Ask not what the science of science communication can do for you . . . 

A reflective correspondent & friend wrote to me to ask what I made of the relative inattention of science journalists to the empirical study of science communication--& what might be done to remedy this.  She had many great ideas for how to make such work more familiar and accessible to them.  I had a somewhat different, but I think complementary reaction:

I think it is unsurprising how infrequently empirical research is featured in social media and similar fora in which science journalists exchange ideas.

The explanation, moreover, isn't merely that how to communicate with curious members of the public is only 1 of the n things the science of science communication studies. It's that those who are engaged in scientifically studying science communication -- including the sorts of communication science journalists do -- aren't trying to answer the questions that journalists most often are, and should be, asking.

The journalists' questions relate to their own craft norms -- the professional understandings that they absorb, generate, and transmit, and that guide and animate them.  They argue about various of these norms all the time, in many cases persistently (or at least intermittently; they have jobs—very interesting ones!) over long periods of time.

That means they have questions that, in the judgment of those endowed with the requisite experience-informed professional judgment, admit of more than one plausible (but not, the debate presupposes, more than one correct or best) answer.

Under those circumstances, arguments will be interminable and make no progress. Evidence is needed -- not as a substitute for the exercise of professional judgment but as raw material for it to operate on.  

Well, very, very few (maybe zero) scholars are using empirical methods to answer questions of consequence to the quality and evolution of science journalism's craft norms.

Most “science of #scicomm” scholars, of course, aren't studying science journalism at all.  

Others actually are -- but to answer questions that are part of the scholarly conversations those researchers belong to.  They have converged on (or joined) collective inquiries into how one or another general mechanism—cognitive, political, or both—operates to shape the path of scientific information through the media and to the public.  Their research (much of which is excellent!) is, nearly always, trying to answer questions that admit of more than one plausible (but not more than one correct or best) answer about those processes—not about how science journalists can be excellent science journalists.

Maybe sometimes these scholars mistakenly think that what they are studying when they examine these more general dynamics of communication supplies the "answers" to the questions science journalists pose about their own craft norms. Other times they present their work this way knowing full well that it is a mistake (it's a very disturbing spectacle when they do).

In either case, science journalists react negatively -- "that's ridiculous" or (in a refrain that becomes a chorus after events like the NAS "science of #scicomm" colloquia) "that's completely irrelevant to what we do; I've not learned a thing!"  ...

Well, the problem actually isn't with the researchers here; it's with the science journalists!

Part of the mistake is to think that "everything is about them": the science of science communication isn't one thing—it's 7 (± 2).

But even more fundamentally, it is a mistake for the science journalists to think that anyone besides them can be expected to create the scientific insight that is relevant to their craft!

No one else knows (or likely genuinely cares: nonjournalists don't even know enough to care) what the empirical questions of consequence to science journalism's craft norms are. No one else can reliably recognize the form of evidence that helps professional conversation about those questions advance; only those who possess the professional sense of science journalists can.

This isn't to say that individual journalists must start designing studies and collecting data.  Rather, it is to say that they must exercise control over research using empirical methods, so that it is in fact designed to address questions of consequence to them and uses designs that can support inferences relevant to the sorts of questions experienced science journalists recognize as admitting of more than one plausible (but not more than one correct or best) answer.

Science journalists will often observe, correctly, that "science of #scicomm" scholars' work on general mechanisms is generating insights of indisputable relevance to their craft.  But the journalists--not the scholars--will know when that's so.

In that situation, moreover, science journalists will be filled with hypotheses--ones that are concrete and relevant to those who share their situation sense—about how those mechanisms might interact with their professional craft norms.

Even if they did not themselves create the studies, they will recognize when one designed to test such a hypothesis is genuinely capable of supporting inferences on the basis of which they will know more than they otherwise would have.

They are the ones, then, who must direct the empirical enterprise that is the science of science communication for science journalists.

How?

There are an infinite number of ways -- but none of them consists in passively consuming journal articles.

Here, as in the other practical domains in which a science of science communication is needed, the answer of the thoughtful and honest scholar who actually wants to help when asked (over & over) by communicators "so what should we do?" is, "You tell me -- and I will help by measuring what you confirm for me is the right thing!"
