Thursday
Apr 24, 2014

Still more evidence of my preternatural ability to change people's minds: my refutation of Krugman's critique of Klein's article convinces Klein that Krugman's critique was right


Well, I actually agree 70% w/ what Klein says; once I explain why, I predict Klein will thoughtfully disagree -- and end up more-or-less where I was in my post on Krugman's "symmetry proof."

But I don't have time to go into this now (am busy w/ field experiments aimed at counteracting the motivated reasoning of cultural anti-cat zealots).  Will write something on this "tomorrow." 

In meantime, maybe someone else will explain why I was 100% right (everyone who commented on the Krugman post definitely felt that way).

Wednesday
Apr 23, 2014

What you "believe" about climate change doesn't reflect what you know; it expresses *who you are*

More or less the remarks I delivered yesterday at the Earth Day "Climate teach in/out" at Yale University:

I study risk perception and science communication.

I’m going to tell you what I regard as the single most consequential insight you can learn from empirical research in these fields if your goal is to promote constructive public engagement with climate science in American society. 

It's this:

What people “believe” about global warming doesn’t reflect what they know; it expresses who they are.

Accordingly, if you want to promote constructive public engagement with the best available evidence, you have to change the meaning of climate change.

You have to disentangle positions on it from opposing cultural identities, so that people aren't put to a choice between freely appraising the evidence and being loyal to their defining commitments.

I’ll elaborate, but for a second just forget climate change, and consider another culturally polarizing science issue: evolution.

About every two years, a major polling organization like Gallup issues a public opinion survey showing that approximately 50% of Americans “don’t believe in evolution.” 

Pollsters issue these surveys at two-year intervals because apparently that’s how long it takes people to forget that they’ve already been told this dozens of times.  Or in any case, every time such a poll is released, the media and blogosphere are filled with expressions of shock, incomprehension, and dismay.

“What the hell is wrong with our society’s science education system?,” the hand-wringing, hair-pulling commentators ask.

Well, no doubt a lot.

But if you think the proportion of survey respondents who say they “believe in evolution” is an indicator of the quality of the science education that people are receiving in the U.S., you are misinformed.

Do you know what the correlation is between saying “I believe in evolution” and possessing even a basic understanding of “natural selection,” “random mutation,” and “genetic variance”—the core elements of the modern synthesis in evolutionary science?

Zero.

Those who say they “do believe” are no more likely to be able to give a high-school biology-exam-quality account of how evolution works than those who say they “don’t.”

In a controversial decision in 2010, the National Science Foundation in fact proposed removing from its standard science-literacy test the true-false question “human beings developed from an earlier species of animals.”

The reason is that giving the correct answer to that question doesn’t cohere with giving the right answer to the other questions in NSF’s science-literacy inventory.

What that tells you, if you understand test-question validity, is that the evolution item isn’t measuring the same thing as the other science-literacy items.

Answers to those other questions do cohere with one another, which is how one can be confident they are all validly and reliably measuring how much science knowledge that person has acquired.
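For readers who want to see what that kind of validity check looks like in practice, here is a minimal sketch, using entirely made-up data, of the item-rest correlation analysis psychometricians use to flag a misfitting item. Everything in it (the variable names, the number of items, the data-generating assumptions) is hypothetical:

```python
# Minimal sketch, with simulated data: an item "belongs" on a scale if answers
# to it correlate with the rest of the items. A near-zero item-rest correlation
# flags an item that is measuring something else.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical right/wrong (1/0) answers to nine science-literacy items,
# all driven by a single underlying "science knowledge" trait.
ability = rng.normal(size=n)
items = {f"sci_{i}": (ability + rng.normal(size=n) > 0).astype(int) for i in range(9)}

# A hypothetical "evolution" item driven by a different trait entirely
# (standing in for religious identity rather than science knowledge).
identity = rng.normal(size=n)
items["evolution"] = (identity > 0).astype(int)

responses = np.column_stack(list(items.values()))
total = responses.sum(axis=1)

for name, col in zip(items, responses.T):
    rest = total - col                    # scale score excluding this item
    r = np.corrcoef(col, rest)[0, 1]      # corrected item-total correlation
    print(f"{name:10s} item-rest r = {r:+.2f}")

# The nine knowledge items cohere with one another; the "evolution" item's
# correlation hovers near zero, the pattern described in the text.
```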

But what the NSF “evolution” item is measuring, researchers have concluded, is test takers’ cultural identities, and in particular the significance of religiosity in their lives.

What’s more, the impact of science literacy on the likelihood that people will say they “believe in evolution” is in fact highly conditional on their identity: as their level of science comprehension increases, individuals with a highly secular identity become more likely to say “they believe” in evolution; but as those with a highly religious identity become more science literate, in contrast, they become even more likely to say they don’t.

What you “believe” about evolution, in sum, does not reflect what you know about science—in general, or in regard to the natural history of human beings.

Rather it expresses who you are.

Okay, well, exactly the same thing is true on climate change.

You’ve all seen the polls, I’m sure, showing the astonishing degree of political polarization on “belief” in “human-caused” global warming.

Well, a Pew Poll last spring asked a nationally representative sample, “What gas do most scientists believe causes temperatures in the atmosphere to rise? Is it carbon dioxide, hydrogen, helium, or radon?”

Approximately 65% got the right answer to that question.

And there was zero correlation between getting it right and being a Democrat or Republican.

The percentage of Democrats who say they “believe” in global warming is substantially higher than 65%: it’s over 80%, which means that a good number of Democrats who say they “believe” in global warming don’t understand the most basic of all facts known to climate science.

The percentage of Republicans who say they “believe” in global warming is a lot lower than 65%: only about 25% say they believe human beings have caused global temperatures to rise in recent decades, according to Pew and other researchers.

That means that a large fraction of the Republicans who tell pollsters they “don’t believe” in human-caused global warming do in fact know the most important thing there is to understand about climate change: that adding carbon to the atmosphere causes the temperature of the earth to increase.

Do you know what the correlation is between science literacy and “belief” in human-caused global warming?

You get half credit for saying zero.

That’s the right answer for a nationally representative sample as a whole.

But it’s a mistake to answer the question without dividing the sample up along cultural or comparable lines: as their score on one or another measure of science comprehension goes up, Democrats become more likely, and Republicans less, to say they “believe” in human-caused global warming.
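If it helps to see why a whole-sample correlation can be zero while the subgroup relationships are strong, here is a minimal simulation sketch (made-up data and variable names):

```python
# Minimal sketch: opposite-signed subgroup slopes can cancel out to a zero
# whole-sample correlation.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
science_lit = rng.normal(size=n)
republican = rng.integers(0, 2, size=n).astype(bool)

# Hypothetical: "belief" rises with science literacy for Democrats and
# falls with it for Republicans.
belief = np.where(republican, -science_lit, science_lit) + rng.normal(size=n)

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

print("whole sample r =", round(r(science_lit, belief), 2))   # ~ 0.00
print("Democrats    r =", round(r(science_lit[~republican], belief[~republican]), 2))
print("Republicans  r =", round(r(science_lit[republican], belief[republican]), 2))
```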

Like saying “I do/don’t believe in evolution,” saying I “do/don’t believe in climate change” doesn’t convey what you know about science—generally, or in relation to the climate.

It expresses who you are.

Al Gore has described the climate change debate as a “struggle for the soul of America.”

He’s right.

But that’s exactly the problem.  Because in “battles for the soul” of America, the stake that culturally diverse individuals have in forming beliefs consistent with their group identity dominates the stake they have in forming beliefs that fit the best available evidence.

In saying that, moreover, I’m not talking about whatever interest people have in securing comfortable accommodations in the afterlife. I’m focused entirely on the here and now.

Look: What an ordinary individual believes about the “facts” on climate change has no impact on the climate.

What he or she does as a consumer, as a voter, or as a participant in public debate is just too inconsequential to have an impact.

No mistake that individual makes about the science on climate change, then, is going to affect the risk posed by global warming for him or her or for anyone else that person cares about.

But if he or she takes the “wrong” position in relation to his or her cultural group, the result could be devastating for him or her, given what climate change now signifies about one’s membership in and loyalty to opposing cultural groups.

It could drive a wedge—material, emotional, and psychological—between that individual and the people whose support is indispensable to his or her well-being.

In these circumstances, we should expect a rational person to engage information in a manner geared to forming and persisting in positions that are dominant within their cultural groups. And the better they are at making sense of complex information—the more science-comprehending they are—the better they’ll do at that.

That’s what we see in lab experiments.  And it’s why we see polarization on global warming intensifying in step with science literacy in the real world.

But while that’s the rational way for people to engage information as individuals, given what climate change signifies about their cultural identities, it’s a disaster for them collectively.  Because if everyone does this at the same time, members of a culturally diverse democratic society are less likely to converge on scientific evidence that is crucial to the welfare of all of them.

And yet that by itself doesn’t make it any less rational for individuals to attend to information in a manner that reliably connects them to the position that is dominant in their group.

This is a tragedy of the commons problem—a tragedy of the science communications commons.

If we want to overcome it, then we must disentangle competing positions on climate change from opposing cultural identities, so that culturally pluralistic citizens aren’t put in the position of having to choose between knowing what’s known to science and being who they are.

Only that will dissolve the conflict citizens now face between their personal incentive to form identity-consistent beliefs and the collective one they have in recognizing and giving effect to the best available evidence.

Science educators, by the way, have already figured this out about evolution. They’ve shown you can in fact teach the elements of the modern synthesis—random mutation, genetic variance, and natural selection—just as readily to students whose identities cohere with saying they “don’t believe” in evolution as you can to students whose identities cohere with saying they do. You just can’t expect the former to say “I believe in evolution” afterward.

Indeed, you must take pains not to confuse understanding evolutionary science with the “pledge of cultural allegiance” that “I believe in evolution” has become.

You must remove from the education environment the toxic cultural meanings that make answers to that question badges of membership in and loyalty to one’s cultural group.  The meanings that fuel the pathetic spectacle of hand-wringing and hair-pulling that occurs every time Gallup or another organization issues its “do you believe in evolution” survey results.

All the diverse groups that make up our pluralistic democracy are amply stocked with science knowledge.

They are amply stocked with public spirit too. 

That means you, as a science communicator, can enable these citizens to converge on the best available evidence on climate change.

But to do it, you must banish from the science communication environment the culturally antagonistic meanings with which positions on that issue have become entangled—so that citizens can think and reason for themselves free of the distorting impact of identity-protective cognition.

If you want to know what that sort of science communication environment looks like, I can tell you where you can see it: in Florida, where all 7 members of the Monroe County Board of Commissioners -- 4 Democrats, 3 Republicans -- voted unanimously to join Broward, Palm Beach, and Miami-Dade Counties (all predominantly Democratic) in approving the Southeast Climate Compact Action plan, which, to quote the Palm Beach County Board summary, “includes 110 adaptation and mitigation strategies for addressing sea-level rise and other climate issues within the region.”

I’ll tell you another thing about what you’ll see if you make this trip: the culturally pluralistic and effective form of science communication happening in southeast Florida doesn’t look anything like the culturally assaultive "us-vs-them" YouTube videos and prefabricated internet comments with which Climate Reality and Organizing for America are flooding national discourse.

And if you want to improve public engagement with climate science in the United States, the fact that advocates as high-profile and as highly funded as these still haven’t figured out the single most important lesson to be learned from the science of science communication should make you very sad.

Sunday
Apr 20, 2014

No, I don't think "cultural cognition is a bad thing"; I think a *polluted science communication environment* is & we should be using genuine evidence-based field communication to address the problem

Stenton Benjamin Danielson has a characteristically thoughtful post, 95% of which I agree with, on cultural cognition, "public opinion," and promoting constructive public engagement with climate science.  But of course the 5%-- which has to do with whether I think "cultural cognition" is a "bad thing" that is to be overcome rather than a dynamic to be deployed to promote such engagement -- sticks in my craw!  Maybe this response will get us closer to 100% agreement--if not by moving him a full 5% in my direction, then maybe by provoking him to elaborate & thereby move me some fraction of the remainder toward his point of view.

 So read what he says.  Then read this:

Part of the problem, I'm sure, is that I'm an imperfect communicator.

Another is the infeasibility of saying everything one believes every time one says anything.

But it is simply not the case that I view

cultural cognition as unreservedly bad -- a sort of disease or pollution in our debate about an issue, something to be prevented or neutralized whenever possible so that we can make rational assessments of the evidence.

On the contrary, I view it as an indispensable element of rational thought, one that contributes in a fundamental way to the capacity of individuals to participate in, and thus extend, collective knowledge.

Cultural cognition conduces to persistent states of public controversy over what's known only in a polluted science communication environment: one in which antagonistic cultural meanings become attached to positions on risk and policy-relevant facts, and transform them into badges of membership in opposing cultural groups.  

That's not normal.  It is a pathology that disables rational thought precisely because it disconnects cultural cognition from discernment of the best available evidence.

We can treat this pathology, and better still avoid the occurrence of it, through evidence-based science-communication-environment protection practices.

I also agree, by the way, that "messaging" campaigns aimed at influencing "public opinion" generally are an absurd waste of time, not to mention a waste of the money of those eager to support climate-science communication efforts.  This approach to "science communication" not only reflects a psychologically unrealistic account of how people come to know what's known by science but betrays an elementary-school level of comprehension of basic principles of political economy.

Don't "message" people with "struggle for the soul of America" appeals. 

Show them that engaging climate science is "normal" by enabling them to see that people they recognize as competent and informed are using it to guide their practical decisions.  That is how ordinary people -- very rationally -- recognize how to orient themselves appropriately with the best available evidence on all manner of issues. 

Understanding the contribution that cultural cognition makes to individuals' rational apprehension of what is known is, I believe, indispensable to that strategy for promoting constructive public engagement with climate science.  I'm glad to see that you agree with me on that -- even if you hadn't discerned that I agree with you!

Those "risk experts" who want to contribute, moreover, should stop telling just-so stories-- give up the facile "take-'biases'-&-'heuristics'-literature-add-water-&-stir" form of "instant decision science"-- and go to the places where real people are trying to figure out how to use climate science to make their lives better.

Go there and genuinely help them by systematically testing their experience-informed hypotheses about how to reproduce in the world the sorts of things that experimental methods using cultural cognition and other theories suggest will improve public engagement with climate science.

We don't need more stylized lab experiments that try to convince us that things real-world evidence manifestly shows won't work actually will if we just keep doing them (followed when they don't by whinging about "the forces of evil" who--as was perfectly foreseeable--told members of the public whom you were targeting not to believe your "message").

Climate scientists update their models to reflect ten years of data.  Climate advocates should too.  

 

 

Friday
Apr 18, 2014

Want to improve climate-science communication (I mean really, seriously)? Stop telling just-so stories & conducting "messaging" experiments on MTurk workers & female NYU undergraduates & use genuine evidence-based methods in field settings instead

From Kahan, D., "Making Climate Science Communication Evidence-based—All the Way Down," in Culture, Politics and Climate Change, eds. M. Boykoff & D. Crow, pp. 203-21. (Routledge Press, 2014):

a. Methods. In my view, both making use of and enlarging our knowledge of climate science communication requires making a transition from lab models to field experiments. The research that I adverted to on strategies for counteracting motivated reasoning consists of simplified and stylized experiments administered face-to-face or on-line to general population samples. The best studies build explicitly on previous research—much of it also consisting of stylized experiments—that has generated information about the nature of the motivating group dispositions and the specific cognitive mechanisms through which they operate. They then formulate and test conjectures about how devices already familiar to decision science—including message framing, in-group information sources, identity-affirmation, and narrative—might be adapted to avoid triggering these mechanisms when communicating with these groups.[1]

But such studies do not in themselves generate useable communication materials. They are only models of how materials that reflect their essential characteristics might work. Experimental models of this type play a critical role in the advancement of science communication knowledge: by silencing the cacophony of real-world influences that operate independently of anyone’s control, they make it possible for researchers to isolate and manipulate mechanisms of interest, and thus draw confident inferences about their significance, or lack thereof. They are thus ideally suited to reducing the class of the merely plausible strategies to ones that communicators can have an empirically justified conviction are likely to have an impact. But one can’t then take the stimulus materials used in such experiments and send them to people in the mail or show them on television and imagine that they will have an effect.

Communicators are relying on a bad model if they expect lab researchers to supply them with a bounty of ready-to-use strategies. The researchers have furnished them something else: a reliable map of where to look for them. Such a map will (it is hoped) spare the communicators from wasting their time searching for nonexistent buried treasure. But the communicators will still have to dig, making and acting on informed judgments about what sorts of real materials they believe might reproduce these effects outside the lab in the real-world contexts in which they are working.

The communicators, moreover, are the only ones who can competently direct this reproduction effort. The science communication researchers who constructed the models can’t just tell them what to do because they don’t know enough about the critical details of the communication environment: who the relevant players are, what their stakes and interests might be, how they talk to each other, and whom they listen to. If researchers nevertheless accept the invitation to give “how to” advice, the best they will be able to manage are banalities—“Know your audience!”; “Grab the audience’s attention!”—along with Goldilocks admonitions such as, “Use vivid images, because people engage information with their emotions. . . but beware of appealing too much to emotion, because people become numb and shut down when they are overwhelmed with alarming images!”

Communicators possess knowledge of all the messy particulars that researchers not only didn’t need to understand but were obliged to abstract away from in constructing their models. Indeed, like all smart and practical people, the communicators are filled with many plausible ideas about how to proceed—more than they have the time and resources to implement, and many of which are not compatible with one another anyway. What experimental models—if constructed appropriately—can tell them is which of their surmises rest on empirically sound presuppositions and which do not. Exposure to the information such modeling yields will activate experience-informed imagination on the communicators’ part, and enable them to make evidence-based judgments about which strategies they believe are most likely to work for their particular problem.

At that point, it is time for the scientist of science communication to step back in—or to join alongside the communicator. The communicator’s informed conjecture is now a hypothesis to be tested. In advising field communicators, science of science communication researchers should treat what the communicators do as experiments. Science communication researchers should work with the communicator to structure their communication strategies in a manner that yields valid observations that can be measured and analyzed.

Indeed, communicators, with or without the advice of science of science communication researchers, should not just go on blind instinct. They shouldn’t just read a few studies, translate them into plausible-sounding plans of action, and then wing it. Their plausible surmises about what will work will be more plausible, more likely to work, than any that the laboratory researchers, indulging their own experience-free imaginations, concoct. But they will still be only plausible surmises. Still only hypotheses. Without evidence, we will not learn whether policies based on such surmises did or didn’t work. If we don’t learn that, we won’t learn how to do even better.

Genuinely evidence-based science communication must be based on evidence all the way down. Communicators should make themselves aware of the existing empirical information that science communication researchers have generated (and steer clear of the myriad stories that department-store consumers of decision science work tell) about why the public is divided on climate science. They should formulate strategies that seek to reproduce in the world effects that have been shown to help counter the dynamics of motivated reasoning responsible for such division. Then, working with empirical researchers, they should observe and measure. They should collect appropriate forms of pretest or preliminary data to try to corroborate that the basis for expecting a strategy to work is sound and to calibrate and refine its elements to maximize its expected effect. They should also collect and analyze data on the actual impact of their strategies once they’ve been deployed.

Finally, they should make the information that they have generated at every step of this process available to others so that they can learn from it too. Every exercise in evidence-based science communication itself generates knowledge. Every such exercise itself furnishes an instructive model of how that knowledge can be intelligently used. The failure to extract and share the intelligence latent in doing science communication perpetuates the dissipation of collective knowledge that it is the mission of the science of science communication to staunch.

 


[1] Unrepresentative convenience samples are unlikely to generate valid insights on how to counteract motivated reasoning. Samples of college undergraduates are perfectly valid when there is reason to believe the cognitive dynamics involved operate uniformly across the population. But the mechanisms through which motivated reasoning generates polarization on climate change don’t; they interact with diverse characteristics—worldviews and values, but also gender, race, religiosity, and even regions of residence. It is known, for example, that white males who are highly hierarchical and individualistic in worldviews or conservative in their political ideologies, and who are likely to live in the South and far west, tend to react dismissively to information about climate change (McCright & Dunlap 2013, 2012, 2011; Kahan, Braman, Gastil, Slovic & Mertz 2007). Are they likely to respond to a “framing” strategy in the same way that a sample of predominantly female undergraduates attending a school in New York City does (Feygina, Jost & Goldsmith 2010)? If not, that’s a good reason to avoid using such a sample in a framing study, and not to base practical decisions on any study that did.

Thursday
Apr 17, 2014

Vaccine risk perceptions and risk communication: study conclusions & recommendations

From CCP's "Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Empirical Assessment" report: 

II. Summary conclusions

A. Findings

1.   There is deep and widespread public consensus, even among groups strongly divided on other issues such as climate change and evolution, that childhood vaccinations make an essential contribution to public health. A very large supermajority believes that the benefits of childhood vaccinations outweigh their risks and that public health generally would suffer were vaccination rates to fall short of the goals set by public health authorities.


2.   In contrast to other disputed science issues, public opinion on the safety and efficacy of childhood vaccines is not meaningfully affected by differences in either science comprehension or religiosity. Public controversies over science, including those over evolution and climate change, often feature conflict among individuals of varying levels of religiosity, whose differences of opinion intensify in proportion to their level of science comprehension. There is no such division over vaccine risks and benefits.

3.   The public’s perception of the risks and benefits of vaccines bears the signature of a generalized affective evaluation, which is positive in a very high proportion of the population. The high degree of coherence in responses to items relating to the contribution that childhood vaccinations make to public health strongly implies that public assessments of vaccine risks and benefits reflect a unitary latent affective orientation. The distribution of that orientation is strongly skewed in a positive direction—indicating that a substantial majority of the population (in the vicinity of 75%) has a positive attitude toward childhood vaccines.

4.   Among the manifestations of the public’s positive orientation toward childhood vaccines is the perception that vaccine benefits predominate over vaccine risks and a high degree of confidence in the judgment of public health officials and experts. By large supermajorities, the survey participants endorsed the proposition that vaccine benefits outweigh their risks, and rejected claims that deterioration in vaccination coverage would pose no serious public health danger. They also expressed confidence in the judgment of officials who identify which vaccinations should be universally administered, and in the judgment of experts that vaccines are safe.

5.   Perceptions of the relationship between vaccines and specified diseases reflect the same positive affective orientation that informs public perceptions of the contribution that childhood vaccines make to public health generally. Responses to items on the link between vaccines and autism, cancer, diabetes—as well as a fictional disease not asserted by anyone to be connected to childhood vaccinations—displayed the same pattern as the responses to all the other public-health items. Under these circumstances, responses to these items can confidently be viewed only as indicators of the same latent affective attitude reflected in the public’s assessments of the contribution childhood vaccines make to public health generally. Public health officials should resist the mistake of construing responses to survey items such as these as measuring public knowledge about or beliefs on specific issues relating to childhood vaccinations.

6.   The demographic characteristics and political outlooks typically associated with group conflict over risk and related aspects of decision-relevant science are not meaningfully associated with disagreement about childhood-vaccination risks. Members of all such groups believe that vaccine risks are low, vaccine benefits high, and mandatory vaccination policies appropriate. Those who believe otherwise are outliers in every one of these groups.

7.   There is no meaningful association between concern over vaccine risks and the sharp cultural cleavage that characterizes perceptions of either “public safety risks,” a cluster of putative hazards associated with environmental issues and gun control, or “social deviancy risks,” a cluster associated with legalization of marijuana and prostitution and with teaching high school students about birth control. The opposing cultural allegiances that are associated with disputed societal and public health risks do not generate meaningful disagreement over vaccine risks and benefits. At most, such dispositions mildly influence the intensity with which culturally diverse members of the public approve of childhood vaccination.

8.   Existing universal vaccination policies appear to enjoy widespread support, but proposals to restrict existing grounds for exemption divide the public along partisan lines. Despite support for universal vaccination policies and widespread disapproval of parents who refuse to permit vaccination of their children based on concerns about vaccine risks, proposals to restrict or eliminate moral or religious grounds for opting out of vaccination requirements provoke dissensus along largely partisan lines consistent with citizens’ general orientation toward government regulation.

9.   The public generally underestimates vaccination rates and overestimates the rate of exemption. Only 9% of the survey respondents recognized that the vaccination rate among U.S. children aged 19-35 months for recommended childhood vaccinations has been over 90% in recent years. The median estimate was between 70-79%. The median estimate of children receiving no vaccinations was 2-10%; only 9% correctly indicated that less than 1% of children aged 19-35 months receive none of the recommended childhood vaccinations.

10.  Communications that assert the existence of growing concern over vaccination risks and declining vaccination rates magnify misestimations of vaccination rates and of exemptions. Experiment subjects who read communications patterned on real media communications underestimated vaccine coverage by an even larger amount than subjects in the control.

11.  Communications that connect “growing concern” over vaccine risks to disbelief in evolution and climate change generate cultural polarization. Relative to their counterparts in a control condition, experiment subjects exposed to such a communication divided along lines that reflected their predispositions toward currently disputed societal risks.

12.  Factually accurate information on vaccine rates, when issued by the CDC, substantially corrects underestimation of vaccination rates. Exposure to a story patterned on the press statements that the CDC typically issues in connection with annual NIS updates resulted in a significant correction of experiment subjects’ underestimation of national vaccination coverage.

B. Normative and prescriptive conclusions

1.   Risk communicators—including journalists, advocates, and public health professionals—should refrain from conveying the false impression that a substantial proportion of parents or of the public generally doubts vaccine safety. Such information risks creating anxiety rather than dispelling it. Moreover, by aggravating underestimation of vaccination rates, communications of this nature obscure a signal that conveys public confidence in vaccine safety and stimulates reciprocal motivations to contribute to the collective good of herd immunity.

2.   Risk communicators should avoid resort to the factually unsupportable, polemical trope that links vaccine risk concerns to climate-change skepticism and to disbelief in evolution as evidence of growing societal distrust in science. Such rhetoric, in addition to being facile, risks generating an affective or symbolic link between vaccines and issues on which cultural polarization is currently a significant impediment to public science communication.

3.   Risk communicators, including public health officials and professionals, should aggressively disseminate true information on the historically and continuing high rates of childhood vaccination in the U.S. The high levels of vaccination in the U.S. are a science communication resource. That resource should be exploited, not obscured or dissipated.

4.   Because there is a chance that it would make mandatory vaccination policies a matter of partisan contestation, campaigns to promote legislative elimination or contraction of existing grounds for exemptions should be viewed with extreme caution. There is reason to believe—from real-world experience as well as the results of this study—that proposals to restrict nonmedical exemptions from existing mandates would generate partisan division in the public. As evidenced by the controversy over the HPV vaccine, such divisions disrupt the processes by which ordinary citizens recognize and orient themselves with respect to the best-available evidence on public-health and other risks. Accordingly, the potential for creating polarization over childhood vaccination risks is a cost that must be balanced against whatever benefit might be obtained from reforms in law aimed at reducing the already very low percentage of parents that exempt their children from mandatory vaccination.

5.   Vaccine-risk assessments and communication should not be based on creative extrapolations from general theories. Because decision-science mechanisms can be imaginatively manipulated to support a wide variety of explanations and prescriptions, it is a mistake to present theoretical syntheses of work in this field as a guide for action. Instead, conjectures informed by decision-science frameworks should be treated as hypotheses for empirical investigation.

6.   Hypotheses relating to vaccine-risk perceptions and vaccine-risk communication should be tested with valid empirical methods specifically suited to measuring matters of consequence. Opinion polls cannot be expected to generate significant insight into vaccine risk perceptions, either on the part of parents, whose responses are unreliable indicators of behavior, or the general public, in whom demographic and attitudinal measures fail to explain practically meaningful levels of variance. Rather, behavioral measures (including validated attitudinal indicators of behavior) should be used to gauge parental risk concern and fine-grained, local methods used to investigate the characteristics of enclaves of demonstrated vaccine hesitancy.

7.   The public health establishment should take the initiative to develop comprehensive proposals for better integrating the science of science communication into its culture and practices. Procedures should be adopted, within government public health agencies and within the medical profession, for making use of the best available empirical methods for anticipating and averting influences that distort public risk perceptions. The public health establishment should also propagate professional norms geared to curbing ill-informed and ill-considered forms of ad hoc risk perception by the media and by individual members of the public-health establishment. The most effective step to discouraging this form of feral risk communication is to populate the niche it now occupies with an empirically informed and systematically planned alternative.

Wednesday
Apr 9, 2014

More on "Krugman's symmetry proof": it's not whether one gets the answer right or wrong but how one reasons that counts 

Okay, I've finally caught my breath after laughing myself into a state of hyperventilation as a result of reading Krugman's latest proof (this is actually a replication of an earlier empirical study on his part) that ideologically motivated reasoning is in fact perfectly symmetric with respect to right-left ideology.

Rather than just guffawing appreciatively, it's worth taking a moment to call attention to just how exquisitely self-refuting his "reasoning" is!

There's the great line, of course, about how his "lived experience" (see? I told you, he's doing empirical work!) confirms that motivated cognition "is not, in fact, symmetric between liberals and conservatives."

But what comes next is an even more subtle -- and thus an even more spectacular! -- illustration of what it looks like when one's reason is deformed by tribalism: 

Yes, liberals are sometimes subject to bouts of wishful thinking. But can anyone point to a liberal equivalent of conservative denial of climate change, or the “unskewing” mania late in the 2012 campaign, or the frantic efforts to deny that Obamacare is in fact covering a lot of previously uninsured Americans?

Uh, no, PK. I mean seriously, no.

The test for motivated cognition is not whether someone gets the "right" answer but how someone assesses evidence.

A person displays ideologically motivated cognition when, instead of weighing evidence based on criteria related to its connection to the truth, he or she credits or dismisses it based on its conformity to his or her ideological predispositions.

Thus, if we want to use public opinion on some issue -- say, climate change -- to assess the symmetry of ideologically motivated reasoning, we can't just say, "hey, liberals are right, so they must be better reasoners."

Rather we must determine whether "liberals" who "believe" in climate change differ from "conservatives" who "don't" in how impartially they weigh evidence supportive of & contrary to their respective positions. 

How might we do that?  

Well, one way would be to conduct an experiment in which we manipulate the ideological motivation people with "liberal" & "conservative" values have to credit or dismiss one and the same piece of valid evidence on climate change.  

If "liberals" (it makes me shudder to participate in the flattening of this term in contemporary political discourse) adjust the weight they give this evidence depending on its ideological congeniality, that would support the inference that they are assessing evidence in a politically motivated fashion.  

If in aggregate, in the real world, they happen to "get the right" answer, then they aren't to be commended for the high quality of their reasoning.  

Rather, they are to be congratulated for being lucky that a position they unreasoningly subscribe to happens to be true.

And vice versa if the "truth" happens (on this issue or any other) to align with the position that "conservatives" unreasoningly affirm regardless of the quality of the evidence they are shown.

That Krugman is too thick to see that one can't infer anything about the quality of partisans' reasoning from the truth or falsity of their beliefs is ... another element of Krugman's proof that ideological reasoning is symmetric across right and left!

For in fact, "the 'other side' is closed-minded" is one of the positions that partisans are unreasoningly committed to. 

One of the beliefs that they don't revise in light of valid evidence but rather use in lieu of truth-related criteria to assess the validity of whatever evidence they see.

This proposition is supported by real, honest-to-god empirical evidence -- of the sort collected precisely because no one's personal "lived experience" is a reliable guide to truth.

That PK is innocent of this evidence is-- another element of his proof that ideological reasoning is symmetric across right and left!

As is his unfamiliarity with studies that use the design I just suggested to test whether "liberals" are forming their positions on climate change and other issues in a manner that is free of the influence of politically motivated reasoning.  Not surprisingly, these studies suggest the answer is no.

But does that mean that all liberals who believe in climate change believe what they do because of ideologically motivated cognition? Or that only someone who is engaged in that particular form of defective reasoning would form that belief?

If you think so, then, despite your likely ideological differences, you & Paul Krugman have something in common: you are both very poor reasoners.

Tuesday
Apr 8, 2014

Finally: decisive, knock-down, irrefutable proof of the ideological symmetry of motivated reasoning

Sometimes something so amazingly funny happens that you have to pinch yourself to make sure you aren't really just a cellular automaton in a computer-simulated comedy world.

 

 

 

[Figure: N = 800 Krugmans. From Kahan, Judgment & Decision Making, 8, 427-34 (2013).]

 

Tuesday
Apr 8, 2014

Are Ludwicks more common in the UK?!

Well, much like the administrators of the Affordable Care Act, I’ve learned the hard way how difficult it can be to anticipate and manage an excited tidal wave of interest surging through the internet toward one’s web portal.

Yes, “tomorrow” has arrived, but because I’ve been inundated with so many 10^3’s of serious entries for the latest MAPKIA, I’ve been unable to process them all, even with the help of my CCP state-of-the-art “big data” MAPKIA automated processor [cut & paste: http://www.palantir.net/2001/tma1/wav/foolprf.wav]

So taking a page from the President’s playbook, I’m extending the deadline of “tomorrow” to “tomorrow,” which is when I’ll post the “results” of the “Where is Ludwick” MAPKIA. In the meantime, entries will continue to be accepted.

But while we wait, how about some related info relevant to an issue that came up in discussion of the ongoing MAPKIA?

In response to my observation that Ludwicks are “rare”—less than 3% of the U.S. population—@PaulMathews stated that “Ludwicks are not a rare species” in the UK but rather

are quite common. For example, two of our most prominent climate campaigners, Mark Lynas and George Monbiot, are pro-nuclear and pro-GMO.

Well it so happens that I have data that enables an estimation of the population frequency of Ludwicks—that is, individuals who are simultaneously (a) concerned about climate change risk but not much concerned about the risks of (b) nuclear power and (c) GM foods—in England.

Not the UK, certainly, but I think better evidence of what the true frequency is in the UK than reference to a list of commentators (indeed, compiling lists of “how many of x” one can think of is clearly an invalid way to estimate such things, given the obvious sampling bias involved, not to mention the abundance of people with even very rare combinations of traits in countries with populations in the tens or hundreds of millions).

It turns out that Ludwicks are even rarer in England than in the U.S.  Consider:


Again, a scatterplot of survey respondents (1,300 individuals from a nationally representative sample recruited to participate in CCP “cross-cultural cultural cognition” studies—including the one in our forthcoming paper “Geoengineering and Climate Change Polarization”) arrayed in relation to their perceptions of nuclear power and climate change risks.

I’ve defined a Ludwick as an individual whose scores on a 0-10 industrial strength risk perception measure (ISRPM10) are ≥ 9 for global warming, ≤ 2 for nuclear power, and ≤ 2 for GM foods.

Those numbers are pretty close equivalents for the scores I used to compute U.S. Ludwicks on the 0-7 industrial strength risk perception measure (≥ 6, ≤ 2, & ≤ 2, respectively) in the data set I used for the MAPKIA (I determined equivalence by comparing the z-scores on the respective ISRPM7 and ISRPM10 scales).
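For the curious, here is a minimal sketch of that cutoff-equivalence step and the resulting head count. The column names (gw_isrpm, nuke_isrpm, gm_isrpm) are hypothetical stand-ins for the actual dataset variables:

```python
# Minimal sketch: map a cutoff on one ISRPM scale to another by matching
# z-scores, then compute the share of respondents who fit the Ludwick profile.
import pandas as pd

def equivalent_cutoff(scores_a: pd.Series, scores_b: pd.Series, cut_a: float) -> float:
    """Return the scale-b value whose z-score matches cut_a's z-score on scale a."""
    z = (cut_a - scores_a.mean()) / scores_a.std()
    return scores_b.mean() + z * scores_b.std()

def ludwick_share(df: pd.DataFrame, gw_cut: float, nuke_cut: float, gm_cut: float) -> float:
    """Share who are climate-risk concerned but nuclear- and GM-risk skeptical."""
    mask = (df.gw_isrpm >= gw_cut) & (df.nuke_isrpm <= nuke_cut) & (df.gm_isrpm <= gm_cut)
    return mask.mean()

# e.g., ludwick_share(england_df, 9, 2, 2) -> ~0.02 on the account given here,
# vs. ludwick_share(us_df, 6, 2, 2) -> ~0.03.
```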

As I said, less than 3% of the US population holds the Ludwick combination of risk perceptions.

But in England, less than 2% do!

But @PaulMathews shouldn’t feel bad—it’s just not easy to gauge these things by personal observation! I trust my own intuitions, and those of any socially competent and informed observer (@PaulMathews certainly is one), but verify with empirical measurement to compensate for the inevitably partial perspective any individual is constrained to have.

There are some other cool things that can be gleaned from this cross-cultural comparison—ones, in fact, that definitely surprised me but might well have informed @PaulMathews’ conjecture.

One is that there’s not nearly as much of an affinity between climate change risk perceptions and nuclear ones in England (r = 0.26, p < 0.01) as there is in the U.S. (r = 0.47, p < 0.01).
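As a side note, a quick way to check that a difference between two correlations this size is not just sampling noise is Fisher's r-to-z test for independent correlations; a minimal sketch, plugging in the correlations and the sample sizes reported in these posts (1,300 England, 2,000 U.S.):

```python
# Minimal sketch: Fisher's r-to-z test of H0: the two population correlations
# are equal.
import math

def fisher_z_test(r1: float, n1: int, r2: float, n2: int) -> float:
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

print(fisher_z_test(0.26, 1300, 0.47, 2000))   # ~ -6.8: far too big to be chance
```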

The reason that this surprised me is that in our study of “cross-cultural cultural cognition,” we definitely found that climate change risk perceptions in England fit the cultural-polarization profile (“hierarchical individualists, skeptical” vs. “egalitarian communitarians, concerned”) that is familiar here.

Another thing: while the population frequency of Ludwicks is lower in England than in the U.S., the probability of being a Ludwick conditional on holding the nonconformist pairing of high concern for climate risks and low concern for nuclear ones is higher in England.

In the scatterplot of English respondents, I’m defining the “Monbiot region” as the space occupied by survey respondents whose ISRPM10 scores for global warming and nuclear power were ≥ 9 and ≤ 2, respectively.

The analogous neighborhood in the U.S. is the “Ropeik region” (global warming ISRPM7 ≥ 6 and nuclear power ISRPM7 ≤ 2).

Whereas about 33% of the residents of the U.S. Ropeik region are Ludwicks, over 60% of the residents of the Monbiot region are.

Huh!

What does this signify?

No doubt something interesting, but I’m not sure what!

Do others have views? People who have a better grasp of English cultural meanings & who would be more likely than I to venture sensible interpretations (ones, obviously, that would still need to be empirically verified, of course)?

Could this information be of any use in constructing a successful Ludwick profile in the US (or in England for that matter)?

Saturday
Apr 5, 2014

Cognitive illiberalism & expressive overdetermination ... a fragment

from Kahan, D.M. The Cognitively Illiberal State. Stan. L. Rev. 60, 115-154 (2007).

Conclusion

The nature of political conflict in our society is deeply paradoxical. Despite our unprecedented knowledge of the workings of the natural and social world, we remain bitterly divided over the dangers we face and the efficacy of policies for abating them. The basis of our disagreement, moreover, is not differences in our material interests (that would make perfect sense) but divergences in our cultural worldviews. By virtue of the moderating effects of liberal market institutions, we no longer organize ourselves into sectarian factions for the purpose of imposing our opposing visions of the good on one another. Yet when we deliberate over how to secure our collective secular ends, we end up split along exactly those lines.

The explanation, I’ve argued, is the phenomenon of cultural cognition. Individual access to collective knowledge depends just as much today as it ever did on cultural cues. As a result, even as we become increasingly committed to confining law to attainment of goods accessible to persons of morally diverse persuasions, we remain prone to cultural polarization over the means of doing so. Indeed, the prospect of agreement on the consequences of law has diminished, not grown, with advancement in collective knowledge, precisely because we enjoy an unprecedented degree of cultural pluralism and hence an unprecedented number of competing cultural certifiers of truth.

If there’s a way to mitigate this condition of cognitive illiberalism, it is by reforming our political discourse. Liberal discourse norms enjoin us to suppress reference to partisan visions of the good when we engage in political advocacy. But this injunction does little to mitigate illiberal forms of status competition: because what we believe reflects who we are (culturally speaking), citizens readily perceive even value-denuded instrumental justifications for law as partisan affirmations of certain worldviews over others.

Rather than implausibly deny our cultural partiality, we should embrace it. The norm of expressive overdetermination would oblige political actors not just to seek affirmation of their worldviews in law, but to cooperate in forming policies that allow persons of opposing worldviews to do so at the same time. Under these circumstances, citizens of diverse cultural orientations are more likely to agree on the facts—and to get them right—because expressive overdetermination erases the status threats that make individuals resist accurate information. But even more importantly, participation in the framing of policies that bear diverse meanings can be expected to excite self-reinforcing, reciprocal motivations that make a culture of political pluralism sustainable.

Ought, it is said, implies can. Contrary to the central injunction of liberalism, we cannot, as a cognitive matter, justify laws on grounds that are genuinely free of our attachments to competing understandings of the good life. But through a more sophisticated understanding of social psychology, it remains possible to construct a form of political discourse that conveys genuine respect for our cultural diversity.

Friday
Apr 4, 2014

Let's keep discussing Ludwick!

Nothing to say today that would be as interesting as the points people are making in response to the "MAPKIA!" challenge in "yesterday's" post.  Join in the discussion -- & submit your entry! It's a little bit like doing presidential polls 2.5 yrs in advance of the next election, but @Jen is definitely the frontrunner at this stage.

Wednesday
Apr 2, 2014

MAPKIA! Episode 49: Where is Ludwick?! Or what *type* of person is worried about climate change but not about nuclear power or GM foods?

Time for another episode of Macau's favorite game show...: "Make a prediction, know it all!," or "MAPKIA!"!

By now all 14 billion regular readers of this blog can recite the rules of "MAPKIA!" by heart, but here they are for new subscribers (welcome, btw!):

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data.  Then, you, the players, will make predictions and explain the basis for them.  The answer will be posted "tomorrow."  The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation.  (Cogency will be judged, of course, by a panel of experts.) 

Okay—we have a real treat for everybody: a really really really fun and really really really hard "MAPKIA!" challenge (much harder than the last one)!

The idea for it came from the convergence of a few seemingly unrelated influences.

One was an exchange I had with some curious folks about the relationship between perceptions of the risks of climate change, nuclear power, & GM foods.

Actually, that exchange already generated one post, in which I presented evidence (for about the umpteenth time) that GM food risk perceptions are not politically or culturally polarized in the U.S., and indeed, not even part of the same “risk perception family” (that was the new part of that post) as climate and nuclear.

Responding to this person’s (reasonable & common, although in fact incorrect) surmise that GM food risk perceptions cohere with climate and nuclear ones, I had replied that it would be more interesting to see if it were possible to “profile” individuals who are simultaneously (a) climate-change risk sensitive, and (b) nuclear-risk and (c) GM food risk skeptical.

Right away, Rachel Ludwick (aka @r3431) said, “That would be me.”

So I’m going to call this combination of risk perceptions the “Ludwick” profile.

Why should we be intrigued by a Ludwick?

Well, anyone who is simultaneously (a) and (b) is already unusual. That’s because climate change risks and nuclear ones do tend to cohere, and signify membership in one or another cultural group.

In addition, the co-occurrence of those positions with (c)—GM food risk skepticism—strikes me as indicating a fairly discerning and reflective orientation toward scientific evidence on risk.

Indeed, one doesn’t usually see discerning, reflective orientations that go against the grain, culturally speaking.

On the contrary, higher degrees of reflection—as featured in various critical reasoning measures—usually are associated with even greater cultural coherence in perceptions of politically contested risks and hence with even greater political polarization.

A Ludwick seems to be thoughtfully ordering a la carte in a world in which most people (including the most intelligent ones) are consistently making the same selection from the prix fixe menu.

That is the second thing that made me think this would be an interesting challenge.  I am interested in (obsessed with) trying to identify dispositional indicators that suggest a person is likely to be a reflective cultural nonconformist.

Unreflective nonconformists aren’t hard to find. Indeed, being nonconformist is associated with being bumbling and clueless.

As I’ve explained 43 times before, it’s rational for people to fit their perceptions of risk to their cultural commitments, since their stake in fitting in with their group tends to dominate their stake in forming “correct” perceptions of societal risk on matters like climate change, where one’s personal views have no material effect on anyone’s exposure to the risk in question.

Accordingly, failing to display this pattern of information processing could be a sign that one is socially inept or obtuse.  That’s one way to explain why people who are low in critical reasoning capacities tend to be the ones most likely to form group-nonconvergent beliefs on culturally contested risks (although even for them, the “nonconformity effect” isn’t large).

It would be more interesting, then, to find a set of characteristics that indicates a reflective disposition to form truth-convergent (or best-evidence-convergent) rather than group-convergent perceptions of such risks.  I haven’t found any yet. On the contrary, the most reflective people tend to conform more, as one would expect if indeed this form of information processing rationally advances their personal interests.

As I said, though, the Ludwick combination of risk perceptions strikes me as evincing reflection.  Because it is also non-conformist with respect to at least two of its elements (climate-risk concerned, nuclear-risk skeptical), being able to identify Ludwicks might lead to discovery of the elusive “reflective non-conformity profile”!

The last thing that influenced me to propose this challenge is another project I’ve been working on. It involves using latent risk dispositions to predict individual perceptions of risk.  The various statistical techniques one can use for such a purpose furnish useful tools for identifying the Ludwick profile.

So everybody, here’s the MAPKIA:

What “risk profiling” (i.e., latent disposition) model would enable someone to accurately categorize individuals drawn from the general population as holding or not holding the Ludwick combination of risk preferences?

Let me furnish a little guidance on what a “successful” entry in this contest would have to look like and the criteria one (that one being me, in particular) might use to assess the same.

To begin with, realize that a Ludwick is extremely rare.  

For purposes of illustration, here’s a scatter plot of the participants in an N = 2000 nationally representative survey arrayed with respect to their global warming and nuclear power risk perceptions, indicated by their responses to the “industrial strength risk perception measure” (ISRPM).

I’ve color coded the respondents with respect to their GM food risk perceptions, measured the same way: blue for “skeptical” (≤ 2), mud brown for “neutral” (3-5), and red for “sensitive” (≥ 6).
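For anyone who wants to play along at home, here's a minimal sketch of how a plot like this can be drawn. The column names are hypothetical & synthetic data stand in for the real survey file:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# synthetic stand-in for the N = 2000 survey; each ISRPM is a 0-7 score
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 8, size=(2000, 3)),
                  columns=["gw_isrpm", "nuke_isrpm", "gm_isrpm"])

# bin GM food ISRPM into the three color-coded bands described above
bands = pd.cut(df["gm_isrpm"], bins=[-1, 2, 5, 7],
               labels=["skeptical", "neutral", "sensitive"])
colors = {"skeptical": "blue", "neutral": "#70543e", "sensitive": "red"}

# jitter the discrete 0-7 scores slightly so overlapping points show up
jitter = lambda s: s + rng.uniform(-0.2, 0.2, len(s))

for band, grp in df.groupby(bands):
    plt.scatter(jitter(grp["gw_isrpm"]), jitter(grp["nuke_isrpm"]),
                s=8, alpha=0.4, color=colors[str(band)], label=str(band))

plt.xlabel("global warming ISRPM")
plt.ylabel("nuclear power ISRPM")
plt.legend(title="GM food risk")
plt.show()
```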

So where is @r3431, aka “Rachel Ludwick”?!

Presumably, she’s one of the blue observations within the dotted circle.

The circle marks the zone for “climate change risk sensitive” and “nuclear risk skeptical,” a space we’ll call the “Ropeik region.”

A “Ropeik,” who will be investigated in a future post, is a type who is very worried about climate change but regards the water used to cool nuclear reactor rods as a refreshing post-exercise drink.  The Ropeik region is very thinly populated--not necessarily on account of radiation sickness but rather on account of the positive correlation (r = 0.47, p < 0.01) between global warming concerns and nuclear power ones.

The correlation between worrying about global warming & worrying about GM foods is quite modest (r = 0.26, p < 0.01).

But there definitely is one.

Accordingly, someone who is GM food risk skeptical is even less likely than others to turn up in the Ropeik region (where people are very concerned about climate change).

Those are the Ludwicks.  They exist, certainly, but they are uncommon.

Actually, if we define them as I have here in relation to the scores on the relevant ISRPMs, they make up about 3% of the population.

Maybe that is too narrow a specification of a Ludwick? 

For sure, I’ll accept broader specifications in evaluating "MAPKIA!" entries—but only from entrants who offer good accounts, connected to cogent theories of who these  Ludwicks are, for changing the relevant parameters.

Of course, such entrants, to be eligible to win the great prize (either this or something like it) to be awarded to the winner of this "MAPKIA!" would also need to supply corresponding “profiling” models that “accurately categorize” Ludwicks.

What do I have in mind by that?

Well, I’ll show you an example.

I start with a “theory” about “who fears global warming, who doesn’t, and why.”  Based on the cultural theory of risk, that theory posits that people with egalitarian and communitarian outlooks will be more predisposed to credit evidence of climate change, and people—particularly white males—with hierarchical and individualistic outlooks more predisposed to dismiss it. 

Because these predispositions reflect the rational processing of information in relation to the stake such individuals have in protecting their status within their cultural groups, my theory also posits that the influence of these predispositions will increase as individuals become more “science comprehending”—that is, more capable of making sense of empirical evidence and thus acquiring scientific knowledge generally.

A linear regression model specified to reflect that theory explains over 60% of the variance in scores on the global warming ISRPM.

I can then use the same variables—the same model—in a logistic regression to predict the probability that someone is a “climate change believer” (global warming ISRPM  ≥ 6) and the probability someone is a “climate change skeptic” (global warming ISRPM  ≤ 2).

(Someone who read this essay before I posted it asked me a good question: what’s the difference between this classification strategy and the one reflected in the  popular and very interesting “6 Americas” framework? The answer is that the “6 Americas scheme” doesn't predict who is skeptical, concerned, etc. Rather, it simply classifies people on the basis of what they say they believe about climate change. A latent-disposition model, in contrast, classifies people based on some independent basis like cultural identity that makes it possible to predict which global warming "America" members of the general population live in without having to ask them.)

By classifying someone as one or the other whenever his or her predicted probability of holding the indicated risk perception exceeds 0.5, the model enables me to determine whether someone drawn from the general population is either a "skeptic" or a "believer" (your choice!) with a success rate of around 86% for “skeptics” and 80% for “believers.”
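Here's a bare-bones sketch of that two-step strategy--an OLS model for the continuous ISRPM score, then the same right-hand side in a logit for classification. Variable names are hypothetical & the formula is only illustrative; it is not the actual CCP model specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic stand-in: worldview scales, a white-male indicator, science
# comprehension, and a 0-7 global warming ISRPM score
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "hierarch": rng.normal(size=n),
    "individ": rng.normal(size=n),
    "white_male": rng.integers(0, 2, n),
    "sci_comp": rng.normal(size=n),
    "gw_isrpm": rng.integers(0, 8, n),
})

# step 1: latent-disposition model of the continuous score (OLS);
# '*' expands to main effects plus the cross-product interaction
ols = smf.ols("gw_isrpm ~ hierarch*sci_comp + individ*sci_comp"
              " + white_male*hierarch", data=df).fit()
print(ols.rsquared)  # the "% of variance explained" figure

# step 2: same predictors in a logit to classify "skeptics" (ISRPM <= 2)
df["skeptic"] = (df["gw_isrpm"] <= 2).astype(int)
logit = smf.logit("skeptic ~ hierarch*sci_comp + individ*sci_comp"
                  " + white_male*hierarch", data=df).fit(disp=0)

# call anyone with predicted probability > 0.5 a "skeptic"
pred = (logit.predict(df) > 0.5).astype(int)
print((pred == df["skeptic"]).mean())  # correct-classification rate
```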

How good is that?

Well, one way to answer that question is to see how much better I do with the model than I’d do if the only information I had was the population frequency of skeptics and believers.

“Skeptics” (ISRPM ≤ 2) make up 26% of my general population sample. Accordingly, if I were to just assume that people selected randomly from the population were not “skeptics” I’d be “predicting” correctly 74% of the time.

With the model, I’m up to 86%--which means I’m predicting correctly in about 46% of the cases in which I would have gotten the answer wrong by just assuming everyone was a nonskeptic.

“Believers” (global warming ISRPM ≥ 6) make up 35% of the sample. Because I can improve my “prediction” proficiency relative to just assuming everyone is a nonbeliever from 65% to 80%, the model is getting the right answer in 42% of the cases in which I’d have gotten the wrong one if the only guide I had was the “believer” population frequency.

Those measures—46% and 42%--reflect the “adjusted count R2” measure of the “fit” of my classification model.
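In code, the adjusted count R2 is a one-liner: it's just the share of cases that the "always guess the modal category" baseline gets wrong that the model gets right instead. A sketch:

```python
def adjusted_count_r2(accuracy, modal_share):
    """Share of the baseline's would-be misclassifications that the model
    classifies correctly; the baseline always guesses the modal
    (more common) category."""
    return (accuracy - modal_share) / (1 - modal_share)

print(adjusted_count_r2(0.86, 0.74))  # "skeptics": ~0.46
print(adjusted_count_r2(0.80, 0.65))  # "believers": ~0.43 (the ~42%
                                      # above, up to rounding)
```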

There are other interesting ways to assess the predictive performance of these models, too—and likely I’ll say more about that “tomorrow.”

But “how good” a predictive model is is a question that can be answered only with reference to the goals of the person who wants to use it. If it improves her ability relative to “chance,” does it improve it enough, & in the way she cares about (reducing false positives vs. reducing false negatives), to make using it worth her while?

But for now, consider GM food risk perceptions.

As I’ve explained a billion times, one won’t do a very good job “profiling” someone who is GM food risk sensitive or GM food risk-skeptical by assimilating GM food risks to the “climate change risk family.”

If I use the same latent predisposition model for GM food risk perceptions that I just applied for global warming risk perceptions, I explain only 10% of the variance in the GM food ISRPM (as opposed to over 60% for global warming ISRPM).

When I try to predict GM food risk “skeptics” (ISRPM ≤ 2) and GM food risk “believers” (ISRPM ≥ 6), I end up with correct-classification rates of 79% and 71%, respectively.

That might sound good—but it isn’t.

In fact, that sort of “predictive proficiency” sucks. 

GM food “skeptics” make up 22% of the population—meaning that 78% of people are not skeptical. My 79% predictive accuracy rate has an adjusted count R2 of 0.03, and is likely to be regarded as pitiful by anyone who wants to do anything, or at least anyone who wants to do something besides publish a paper with “statistically significant” regression coefficients (I've got a bunch in my GM food "skeptic" model--BFD!), on the basis of which he or she misleadingly claims to be able to “explain” or “predict” who is a GM food risk skeptic!

For GM food “believers,” my 71% predictive accuracy compares with a 70% population frequency (30% of the sample are “believers,” defined as ISRPM ≥ 6).  An adjusted count R2 of 0.02: Woo hoo!  (Note again that my model has a big pile of “statistically significant” predictors—the problem is that the variables are predicting variance based on combinations of characteristics that don’t exist among real people).

In sum, we need a different theory, and a different model, of who fears what & why to explain GM food risk perceptions.

I don’t have a particularly good theory at this point.

But I do have a pile of hunches.

They are ones I can test, too, with potential indicators that I’ve featured in previous posts.

In constructing their Ludwick models, "MAPKIA!" entrants might want to consult those posts, too.

I’ll say more about how I would use them to predict GM food risks “tomorrow,” when I post my report (or the first of several) on the MAPKIA entries.

So … on your marks … get set …

MAPKIA!

Friday
Mar282014

I ♥ NCAR/UCAR--because they *genuinely* ♥ communicating science (plus lecture slides & video)

Spent a great couple of days at NCAR/UCAR last week, culminating in a lecture on "Communicating Climate Science in a Polluted Science Communication Environment."

Slides here. Also, an amusing video of the talk here—one that consists almost entirely of a forlorn-looking lectern.

There are 10^6 great things about NCAR/UCAR, of course.

But the one that really grabbed my attention on this visit is how much the scientists there are committed to the intrinsic value of communicating science.

They want people—decisionmakers, citizens, curious people, kids (dogs & cats, even; they are definitely a bit crazy!)—to know what they know, to see what they see, because they recognize the unique thrill that comes from contemplating what human beings, employing science’s signature methods of observation and inference, have been able to discern about the hidden workings of nature.

Yes, making use of what science knows is useful—indeed, essential—for individual & collective well-being.

That’s a very good reason, too, to want to communicate science under circumstances in which one has good justification (i.e., a theory consistent with plausible behavioral mechanisms and supported by evidence) to believe that not knowing what’s known is causing people to make bad decisions.

But if you think that “knowing what’s known” is how people manage to align their decisionmaking with the best available evidence in all the domains in which their well-being depends on that; that their “not knowing” is thus the explanation for persistent states of public conflict over the best evidence on matters like climate change or nuclear power or the HPV vaccine; and that communicating what’s known to science is thus the most effective way to dispel such disputes, then you actually have a very very weak grasp of the science of science communication.

And if you think, too, that what I just wrote implies there is “no point” in enabling people to know, then you have just revealed that you are merely posing—to others, & likely even to yourself!—when you claim to care about science communication and science education.

I spent hours exchanging ideas with NCAR scientists--including ideas about how to use empirical evidence to perfect climate-science communication--and not even for one second did I feel I was talking to someone like that.


Thursday
Mar272014

The sources of evidence-free science communication practices--a fragment...

From something I'm working on...

Problem statement. Our motivating premise is that advancement of enlightened conservation policymaking depends on addressing the science communication problem. That problem consists in the failure of valid, compelling, and widely accessible scientific evidence to dispel persistent public conflict over policy-relevant facts to which that evidence directly speaks. As spectacular and admittedly consequential as instances of this problem are, states of entrenched public confusion about decision-relevant science are in fact quite rare. They are not a consequence of constraints on public science comprehension, a creeping “anti-science” sensibility in U.S. society, or the sinister acumen of professional misinformers. Rather they are the predictable result of a societal failure to integrate two bodies of scientific knowledge: that relating to the effective management of collective resources; and that relating to the effective management of the processes by which ordinary citizens reliably come to know what is known (Kahan 2010, 2012, 2013).

The study of public risk perception and risk communication dates back to the mid-1970s, when Paul Slovic, Sarah Lichtenstein, Daniel Kahneman, Amos Tversky, and Baruch Fischhoff began to apply the methods of cognitive psychology to investigate conflicts between lay and expert opinion on the safety of nuclear power generation and various other hazards (e.g., Slovic, Fischhoff & Lichtenstein 1977, 1979; Kahneman, Slovic & Tversky 1982).  In the decades since, these scholars and others building on their research have constructed a vast and integrated system of insights into the mechanisms by which ordinary individuals form their understandings of risk and related facts. This body of knowledge details not merely the vulnerability of human reason to recurring biases, but also the numerous and robust processes that ordinarily steer individuals away from such hazards, the identifiable and recurring influences that can disrupt these processes, and the means by which risk-communication professionals (from public health administrators to public interest groups, from conflict mediators to government regulators) can anticipate and avoid such threats and attack and dissipate them when such preemptive strategies fail (e.g., Fischhoff & Scheufele 2013; Slovic 2010, 2000; Pidgeon, Kasperson & Slovic 2003; Gregory, McDaniels & Field 2001; Gregory & Wellman 2001).

Astonishingly, however, the practice of science and science-informed policymaking has remained largely innocent of this work.  The persistently uneven success of resource-conservation stakeholder proceedings, the sluggish response of local and national governments to the challenges posed by climate-change, and the continuing emergence of new public controversies such as the one over fracking—all are testaments (as are myriad comparable misadventures in the domain of public health) to the persistent failure of government institutions, NGOs, and professional associations to incorporate the science of science communication into their efforts to promote constructive public engagement with the best available evidence on risk.

This disconnection can be attributed to two primary sources.  The first is cultural: the actors most responsible for promoting public acceptance of evidence-based conservation policymaking do not possess a mature comprehension of the necessity of evidence-based practices in their own work.  For many years, the work of conservation policymakers, analysts, and advocates has been distorted by the more general societal misconception that scientific truth is “manifest”—that because science treats empirical observation as the sole valid criterion for ascertaining truth, the truth (or validity) of insights gleaned by scientific methods is readily observable to all, making it unnecessary to acquire and use empirical methods to promote its public comprehension (Popper 1968).

Dispelled to some extent by the shock of persistent public conflict over climate change, this fallacy has now given way to a stubborn misapprehension about what it means for science communication to be truly evidence based.  In investigating the dynamics of public risk perception, the decision sciences have amassed a deep inventory of highly diverse mechanisms (“availability cascades,” “probability neglect,” “framing effects,” “fast/slow information processing,” etc.). Used as expositional templates, any reasonably thoughtful person can construct a plausible-sounding “scientific” account of the challenges that constrain the communication of decision-relevant science (e.g., XXXX 2007, 2006, 2005). But because more surmises about the science communication problem are plausible than are true, this form of story-telling cannot produce insight into its causes and cures. Only gathering and testing empirical evidence can.

Sadly, some empirical researchers have contributed to the failure of practical communicators to appreciate this point. These scholars purport to treat general opinion surveys and highly stylized lab experiments as sources of concrete guidance for actors involved in communicating science relevant to risk-regulation or related policy issues (e.g., XXX 2009). Such methods have yielded indispensable insight into general mechanisms of consequence to science communication. But they do not—because they cannot—furnish insight into how to engage these mechanisms in particular settings in which science must be communicated. The number of plausible surmises about how to reproduce in the field results that have been observed in the lab likewise exceeds the number that are true. Again, empirical observation and testing are necessary—in the field, for this purpose. The scarcity of researchers willing to engage in field-centered research, and the reluctance of others to acknowledge candidly the necessity of doing so, has stifled the emergence of a genuinely evidence-based approach to the promotion of public engagement with decision-relevant science (Kahan 2014).

The second source of the disconnect between the practice of science and science-informed policymaking, on the one hand, and the science of science communication, on the other, is practical: the integration of the two is constrained by a collective action problem.  The generation of information relevant to the effective communication of decision-relevant science—including not only empirical evidence of what works and what does not but also practical knowledge of the processes for adapting and extending it in particular circumstances—is a public good.  Its benefits are not confined to those who invest the time and resources to produce it but extend as well to any who thereafter have access to it.  Under these circumstances, it is predictable that producers, constrained by their own limited resources and attentive only to their own particular needs, will not invest as much in producing such information, and in a form amenable to the dissemination and exploitation of it by others, as would be socially desirable.  As a result, instead of progressively building on their successive efforts, each initiative that makes use of evidence-based methods to promote effective public engagement with conservation-relevant science will be constrained to struggle anew with the recurring problems.

This proposal would attack both of these sources of the persistent inattention to the science of science communication....

References

Fischhoff, B. & Scheufele, D.A. The science of science communication. Proceedings of the National Academy of Sciences 110, 14031-14032 (2013).

Gregory, R. & McDaniels, T. Improving Environmental Decision Processes. In Decision Making for the Environment: Social and Behavioral Science Research Priorities (eds. Brewer, G.D. & Stern, P.C.) 175-199 (National Academies Press, Washington, DC, 2005).

Gregory, R., McDaniels, T. & Fields, D. Decision aiding, not dispute resolution: Creating insights through structured environmental decisions. Journal of Policy Analysis and Management 20, 415-432 (2001).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D. Making Climate-Science Communication Evidence Based—All the Way Down. In Culture, Politics and Climate Change: How Information Shapes Our Common Future, eds. M. Boykoff & D. Crow. (Routledge Press, 2014).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahneman, D., Slovic, P. & Tversky, A. Judgment under Uncertainty: Heuristics and Biases (Cambridge University Press, Cambridge; New York, 1982).

Pidgeon, N.F., Kasperson, R.E. & Slovic, P. The Social Amplification of Risk (Cambridge University Press, Cambridge; New York, 2003).

Popper, K.R. Conjectures and Refutations: The Growth of Scientific Knowledge (Harper & Row, New York, 1968).

Slovic, P. The Feeling of Risk: New Perspectives on Risk Perception (Earthscan, London; Washington, DC, 2010).

Slovic, P. The Perception of Risk (Earthscan Publications, London; Sterling, VA, 2000).

Slovic, P., Fischhoff, B. & Lichtenstein, S. Behavioral decision theory. Annu Rev Psychol 28, 1-39 (1977).

Slovic, P., Fischhoff, B. & Lichtenstein, S. Rating the risks. Environment: Science and Policy for Sustainable Development 21, 14-39 (1979).

Monday
Mar172014

Science comprehension ("OSI") is a culturally random variable -- and don't let anyone experiencing motivated reasoning tell you otherwise!

Here I've simply plotted "science comprehension"-score histograms for the four segments of a general population sample whose members have been split at the means of the "hierarchy-egalitarian" & "individualism-communitarianism" cultural worldview scales.
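For those who want to draw the same sort of figure themselves, here's a minimal sketch (hypothetical column names, synthetic data in place of the real sample):

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# synthetic stand-ins for the two worldview scales & the OSI score
rng = np.random.default_rng(2)
df = pd.DataFrame({"hierarch": rng.normal(size=2000),
                   "individ": rng.normal(size=2000),
                   "osi": rng.normal(size=2000)})

# split the sample at the means of the two worldview scales
hi = df["hierarch"] >= df["hierarch"].mean()
ind = df["individ"] >= df["individ"].mean()
quadrants = {"hierarchical individualist": hi & ind,
             "hierarchical communitarian": hi & ~ind,
             "egalitarian individualist": ~hi & ind,
             "egalitarian communitarian": ~hi & ~ind}

fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
for ax, (label, mask) in zip(axes.ravel(), quadrants.items()):
    ax.hist(df.loc[mask, "osi"], bins=25, density=True)
    ax.set_title(label, fontsize=9)
fig.supxlabel("ordinary science intelligence (OSI)")
plt.show()
```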

I suppose the figure could itself be used to measure motivated reasoning: if you perceive that one of these groups varies meaningfully in the disposition or aptitude that this particular scale measures, you might well be experiencing it!

But that won't make you any different from anyone else. And if you manage to catch yourself displaying this tendency, then rather than being embarrassed you should be proud of yourself, for you'll be demonstrating a very unusual form of self-reflection--one much rarer than a "high" level of science comprehension.

The experience of catching yourself in this way will also likely fill you with apprehension over the number of times that you've no doubt experienced this pattern of thinking and did not catch yourself. Cultivating that sort of anxiety can't hurt either if you are trying to sharpen your powers of self-reflection -- or just trying to avoid becoming a boorish cultural sectarian whose interest in promoting public engagement with science is just a mask you don as you gear up for illiberal forms of status competition...

BTW, this figure features the same "ordinary science intelligence" measure (I prefer that phrasing to "science literacy," which to me connotes an inventory of substantive bits of knowledge divorced from comprehension of & facility with the form of inferential reasoning needed to recognize valid science) that I've been futzing with for a while (despite its propensity to lead me into Alice-in-Wonderland style misadventures).

It combines the 11-item NSF indicator battery plus a 10-item "long cognitive reflection test."  It has the qualities that one would expect in/demand of a valid science comprehension measure, & has been productive of some pretty interesting insights into when people who have opposing cultural identities but who share a demonstrable proficiency in critical reasoning are more likely to converge or instead more likely to disagree than are less "science comprehending" members of their groups about a fact that admits of scientific investigation (e.g., the natural history of human beings or the reality and causes of climate change or GM foods or fracking or childhood vaccines). 
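In case it's useful, here's the simplest version of that sort of combination: plain summation of the number of correct responses, standardized. The real OSI scoring may well differ (e.g., by using item response theory), so treat this strictly as a sketch with hypothetical item names:

```python
import numpy as np
import pandas as pd

nsf_items = [f"nsf_{i}" for i in range(1, 12)]   # 11 NSF indicator items
crt_items = [f"crt_{i}" for i in range(1, 11)]   # 10 "long CRT" items

# synthetic 0/1 right-wrong responses standing in for real answers
rng = np.random.default_rng(4)
df = pd.DataFrame(rng.integers(0, 2, size=(2000, 21)),
                  columns=nsf_items + crt_items)

raw = df.sum(axis=1)                        # number correct, 0-21
df["osi"] = (raw - raw.mean()) / raw.std()  # standardized OSI score
```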

Maybe I'll write more "tomorrow" about the interesting psychometric properties of this OSI measure....


Thursday
Mar132014

Fracking freaks me out

So I said in my post “yesterday” that I’d share a “freakout” experience I had where data didn’t seem to fit my expectations in an area in which I like to think I’m at least moderately well informed!

It occurred when I made a 3-day visit to the Ohio State University last week.

I had a great time!

I got to learn about the awesome convergence of interest across programs there in the science of science communication, reflected in the new initiative on Behavioral Decision Making.

I got to have lots of great conversations with amazing scholars, including (but not limited to!) my collaborator Ellen Peters, Hal Arkes, and Erik Nisbet.

And I got to do a workshop on “Motivated System 2 Reasoning,” in which I got a ton of good feedback from an audience that was as diverse in their backgrounds and perspectives as they were enthusiastic to engage (slides here).

Now, the freakout part occurred in connection with my participation in panel on fracking.

The panel was a “town meeting”-style event produced as part of the University’s “Health Science Frontiers” series. In the series, public-health and science issues are introduced by a panel discussion and then opened up for a broader discussion with audience members, all of whom are assembled in the studio of the University’s public television affiliate, which records the event for later broadcast.

Besides me, the panel for the fracking show included Mike Bisesi, a super-smart OSU environmental scientist, and Mark Somerson, a reporter for the Columbus Dispatch who has been doing very extensive, fine-grained coverage of public controversy over the expansion of fracking in Ohio.  The moderator, who displayed amazing craft!, was Mike Thompson, WOSU’s news director.

Not really sure what I could add to the discussion, I figured I’d at least be sure to make the point that most members of the general public don’t know what fracking is.

I mean this quite literally. 

A George Mason/Yale Climate Change Communication Project study found that 55% of the respondents in a nationally representative poll reported having heard “nothing” (39%) or only “a little” (16%) about fracking, and only 31% reported knowing either a “little” (22%) or a “lot” (9%).

These sorts of self-report measures, moreover, are known to overstate what people actually know about an emerging technology.

In a recent Pew survey, 51% were able to select the right answer to the question “what natural resource is extracted in a process known as ‘fracking’ ”—in a multiple-choice question in which the likelihood of getting the right answer by simply choosing randomly would have been 25%.  We can infer the percent who actually knew the answer, then, was lower than 50%--& surely no more than 46% (assuming, over generously, odds of 9:1 that any respondent who got the right answer knew rather than “guessed”).
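The arithmetic behind those two bounds fits in a few lines (a sketch using only the figures quoted above):

```python
p_correct, p_chance = 0.51, 0.25

# standard guessing correction: solve k + (1 - k) * p_chance = p_correct
# for k, the fraction who actually knew the answer
k = (p_correct - p_chance) / (1 - p_chance)   # ~0.35

# the deliberately over-generous bound: assume 9:1 odds that a correct
# answer reflects knowledge rather than a lucky guess
k_generous = 0.9 * p_correct                  # ~0.46
```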

The lack of familiarity with an emerging technology is a good thing to keep in mind when a group of people who are well-informed about and highly interested in a novel technology get together to talk about (among other things) “public attitudes” towards it.

Precisely because those people are well-informed and highly-interested, they will have been exposed to a very unrepresentative sample of opinions toward the technology, and are vulnerable for that reason to grossly overestimate the extent to which the risks it poses are a genuine matter of public dispute. 

This effect, moreover, will be magnified if those people, disregarding the biased nature of the samples that are the basis of their own observations, talk a lot among themselves and credit one another’s reports about who believes what and why about what is in fact a boutiquey issue on which most ordinary people don’t have views one way or the other.

This was one of the points I stressed in “yesterday’s” post, which noted the echo-chamber amplification of misimpressions about the extent and partisan nature of conflict over GM foods. People who know a lot about it—particularly ones who write about it for the media and on-line—take it as gospel that the public is “polarized,” when in fact it just isn’t.

Why would they be? Most of them have no idea what GM foods are either (not to mention that they are consuming platefuls of them at pretty much every meal).

I anticipated that people attending the fracking session would likely be under the impression that “fracking” is a controversial issue that has the public up in arms, and I’d be able to say, “well, wait a second . . . .” In fact, I wasn’t really sure I’d have anything more to say!

Okay.

So we arrive at the studio for the event and tell the receptionist we are here for the “fracking panel.” 

“Fracking?,” she says. “What’s that?”

“Score!,” I think to myself. This exchange will make for a nice, concrete illustration of my (solitary) point.

At this stage, Erik Nisbet, with whom I had arrived, answered, “It’s a technique by which high-pressure water mixed with various chemicals is used to fracture underground rock formations so that natural gas can be extracted.”

“Oh my god!,” the receptionist exclaimed. “That sounds terrifying! The chemicals—they’ll likely poison us. And surely there will be earthquakes!”

Seriously. 

And shit, I thought, now what am I going to say?

Actually, the receptionist’s response made things even better!

Because it turns out that even though people don’t know anything about fracking, there is reason to think that they -- or really about 50% of them-- will react the way she did as soon as they do.

That’s what is freaking me out!

Consider this snapshot of public opinion on climate change:

This is the “profile” of a “stage 3” science-communication pathology.

Not only is there intense political polarization (not just on “how serious” the risk of climate change is, btw, but also on more specific empirical issues like “whether the earth has been heating up” and “whether humans have caused it”; responses to the industrial strength risk perception measure will correlate very highly with responses to those more specific, “factual” issues).

The polarization is even more intense among individuals who know the most about science generally and who possess the aptitudes and critical reasoning skills most suited to understanding scientific evidence.

The reason “science comprehension” magnifies polarization, CCP research suggests, is that individuals of opposing cultural identities (ones you can often measure adequately with right-left political outlooks but can get an even more discerning glimpse of with the two-dimensional cultural worldview scales) are using their knowledge and reasoning proficiencies to fit all the evidence they see to the position that predominates in their group.

We see this not just on climate change, of course, but on other culturally contested issues like nuclear power and gun control.

But we don’t see it very often.  Indeed, the number of facts that are important for individual and collective decisionmaking that reflect this pattern is miniscule relative to the ones that don’t.

Consider medical x-rays and fluoridation of water:

 

No polarization, and as diverse individuals become more science comprehending, they tend to converge on the position that is most supported by the best (currently) available evidence.

And I could go on all day showing you graphics that look exactly like that! That pattern is the normal situation, the existence of which, for reasons similar to ones I’ve discussed already, tends to evade our notice & to result in wild overestimations of the degree to which there is conflict over science in our society.

In fact, consider GM foods:

Despite what people think, there's no polarization to speak of here. It’s true, science comprehension seems to have a bigger effect in reducing risk perception among those who are more right-wing than it does on those who are more left-leaning in their political outlooks. But since the effect on both is to reduce concern, it’s hard to believe that that sort of difference portends political conflict over whether GM foods are risky.

The perception that these issues are part of the politically or socially controversial set of risk issues in our society is a consequence of the selection bias and echo chamber effects I also discussed above.

I’ve talked about these things before (and talked before about how it seems like everything I ever talk about is something I’ve already talked about).  And when I talk about GM foods, I usually add, “Of course, there isn’t political polarization over GM foods—most people don’t even know what they are!!”

But now consider fracking . . . :

WTF?!!!!!

This is a “stage 3” pathology picture!  

How could this be? After all, polarization that increases conditional on science comprehension is not the norm! And most people haven’t even heard of this friggin’ fracking thing!

I know, I know: many of you will say, “of course, the answer is blah blah blah”—an answer that will in fact be perfectly plausible.  But if that’s your instinct, you should teach yourself to stifle it. 

“Everything is obvious once you know the answer!” Before you knew it, moreover, the opposite was just as plausible. If I’d shown you that fracking looked like medical x-rays or vaccines or GM foods, you would have said, “Of course—polarization that increases conditional on science comprehension is unusual, and no one even knows about GM foods, blah blah blah….”

More things are plausible than are true!

That’s why we look at evidence.

It’s why, too, it’s no embarrassment to learn that one’s plausible conjecture about a phenomenon is wrong! 

The only thing that would be embarrassing—and just plain wrong—would be the failure to adjust one’s previous, plausible views in light of what new and surprising information shows.

So what’s going on?

I can only conjecture—in anticipation of yet more study. But here’s what I’d say.

In measuring subjects’ perceptions with the “industrial strength measure,” I defined fracking, parenthetically, in terms very much like the ones that Eric used to describe it to the receptionist.

As was the case for her, that was enough for the participants in the study to experience the sort of affective reaction to this technology that assimilated it to the putative risk sources--like climate change, and guns, and nuclear power—with which they are more familiar and on which they are already strongly divided along cultural lines.

The experience was even more intense among those highest in critical reasoning dispositions. But that makes sense too—for contrary to the dominant “instant decision science” (take 2 cups of the “heuristics & biases” literature, add water & stir) story-telling account of polarization over decision-relevant science, that phenomenon is not a consequence of overreliance on “System 1” heuristic reasoning. Rather it is a form of information processing that rationally serves individuals’ interest in forming and persisting in perceptions of risk that express their stake in maintaining their status in affinity groups essential to their identities.

We might well infer from these data, then, that there is something pretty peculiar about fracking that makes it distinctively vulnerable—much more so, even, than GM foods, which after all have been around for decades and which advocacy groups incessantly try to transform into a culturally polarizing issue—to the pathology that characterizes climate change and other issues that display the “stage 3” pattern.

Indeed, one of the points of developing a science of science communication—one that tests conjectures as opposed to engaging in “instant decision science” —is to create forecasting tools that can spot public-deliberation disasters like the one over climate change or the HPV vaccine in advance and head them off.

But in that regard, we also shouldn’t assume that every novel technology that has this sort of special incitement quality will in fact become an object of reason-distorting cultural status competition.

Nanotechnology, for example, displayed a similar sensitivity in a CCP experiment a few years ago, and now I’m pretty much convinced that that issue is a dud.

So – I don’t know!

But I want to: I want to know more about fracking, and about the mechanisms and processes that comprise our science communication environment.

So I'll collect even more data.

And expect --indeed, eagerly and excitedly embrace--even more surprise.


Monday
Mar102014

Who fears what & why? Trust but verify!

Patrick Moore, aka "@EcoSenseNow," posed this question to me:

 

Probably Patrick & a friend were involved in a discussion about whether those who are (aren't) concerned about climate change are the "same" people who are (aren't) concerned about nuclear power and GM food risks.

A discussion/argument like that is pretty interesting, if you think about it.

We all know that risk perceptions tend to come in intriguing packages -- intriguing b/c the correlations between the factual understandings they comprise are more plausibly explained by the common cultural meanings they express than by any empirical premises they share. 

E.g., imagine you were to say to me, "Gee, I wonder whether crime rates can be expected to up or instead to go down if one of the 40 or so states that now automatically issue a permit to carry a concealed handgun to any adult w/o a criminal record or a history of mental illness enacted a ban on venturing out of the house with a loaded pistol tucked unobtrusively in one's coat pocket?"

If I answered, "Well, I'm not sure, but I do have some valid evidence that human activity has caused the temperature of the earth to increase in recent decades--surely you can deduce the answer from that," you'd think either I was being facetious or I was an idiot (maybe both; they can occur together--I don't know whether they are correlated).

But if I were to run up to you all excited & say, "hey, look--I found a correlation between believing that the temperature of the earth has not increased as a result of human causes in recent decades and believing that banning concealed handguns would cause crime to increase," you'd probably say, "So? Only a truly clueless dolt wouldn't have expected that."

You'd say that -- & be right, as the inset graphic, which correlates responses to the "industrial strength risk perception measure" as applied to "private ownership of guns" and "global warming," illustrates -- b/c "everyone knows" (they can just see) that our society is densely populated with "types of people" who form packages of related empirical beliefs in which the reality & consequences of human-caused climate change are inversely correlated with beliefs about the dangers posed by private ownership of handguns in the U.S.

The "types" are ones who share certain kinds of commitments relating to how society and other types of collective enterprises should be organized. We can all see our social world is like that, but because we can't directly observe people's "types" (they & the dispositions they impart are “latent variables”), we come up with observable indicators, like "cultural worldviews" &/or "political ideologies" & various demographic characteristics, that we can combine into valid scales or classifying instruments of one sort or another. We can use those to satisfy our curiosity about the nature of the types & the dynamics that generate the puzzling pattern of empirical beliefs that they form on certain types of disputed risk issues.

We can all readily think of indicators of the sorts of “types” whose perceptions of the risks of climate change & guns are likely to be highly convergent, e.g.

Those risks are "politicized" in right-left terms, so we could use "right-left" political outlooks to specify the "types" & do a pretty decent job (a walk or bunt single; hey, it’s spring training!).

We could do even better (stand-up double) if we used the cultural cognition "worldview" scales -- & if we tossed in race & gender as additional indicators (say, by including appropriate cross-product interaction variables in a regression model), we'd be hitting a homerun!

But here’s another interesting thing that Patrick’s query—and the argument I’m guessing was the motivation for his posing it—illustrates: our perceptions of the packages and the types aren’t always shared, and even when widely held, aren’t always right.

Not that surprising, actually, when you remember that the types can’t be directly observed. It helps too to realize that the source of our apprehension of these matters—the packages, the types—is based on a form of sampling rife with potential biases.  The “data,” as it were, that inform our perceptions are always skewed by the partiality of our social interactions, which reflect our propensity to engage with those who share our outlooks and interests. 

That sort of “selection bias” is a perfectly normal thing; only a lunatic would try to “stratify” his or her social interactions to assure “representativeness” in his or her personal observations of how risk perceptions are distributed across types of persons (I suppose one could try applying population weights to all of one's interactions, although that would be time consuming & a nuisance).

But it does mean that we’ll inevitably disagree with our associates now & again—and even when we don’t disagree, all be wrong—about who fears what & why.

E.g., many people think that concern over childhood immunizations is part of one or another risk-perception package held by one or another recognizable “type” of person. 

Some picture them as part of the package characteristic of the global-warming concerned, nuclear-power fearing tribe of “egalitarians, [who] oppose . . . big corporations and their products.”

When others grope at this particular elephant, they report feeling “the conservative don’t-tread-on-me crowd that distrusts all government recommendations”—i.e., the same “type” that is skeptical of climate-change and nuclear-power risks.

Well, one or the other could have been right, but it turns out that they are both just plain wrong.

As the CCP report on Vaccine Risk Perceptions and Ad Hoc Risk Communication documents, all the recognizable “types”—whether defined in political or cultural terms—support universal childhood immunization.

The perception that vaccines cause autism is not part of the same risk-perception package as global warming: climate-change skeptics and climate-change believers both overwhelmingly perceive the risks of childhood immunizations to be low and the benefits of them to be high.

The misunderstandings about who is afraid of vaccines and why reflect selection bias in an echo chamber, reinforced by the reciprocal recriminations and expressions of contempt that pervade climate change discourse and that fill members of each side with the motivation to see those on the other as harboring all sorts of noxious beliefs and being the source of myriad social ills. (Is this a new thing? Nope.)

So … back to Patrick’s question!

It’s not news—it’s a staple of the public study of risk perceptions and the cultural theory of risk in particular—that perceptions of climate-change and nuclear-power risks are part of a common “package” and are associated with distinctive types.

So my guess is that either Patrick or his friend (the one he was having an argument with; nothing inherently unfriendly about disagreeing!) was taking the position that GM-food risk perceptions were part of the same package as climate & nuclear ones.

Actually, the view that GM foods are “politically polarizing” is a common one. “Unreasoning, anti-science” stances toward GM foods, according to this view, are for “liberals” what “unreasoning, anti-science” stances toward climate are for “conservatives.”

But this is the toxic echo chamber once again.

As the 17.5 billion regular followers of this blog know (welcome, btw, to new readers!), GM foods get a big collective “enh,” at least in the view of the general public.  Most people have never really heard of GM foods, and happily consume humungous helpings of them at every meal.

Advocacy groups of a leftish orientation have been trying to generate concern—trying, moreover, by resort to exactly the “us-vs-them” incitement that is poisoning our science communication environment—but remarkably have been getting absolutely nowhere.

Here in the U.S., that is; matters are different in Europe. Why there but not here?! These things are truly mysterious—and if you don’t see that, you get a failing grade on the basic curiosity & imagination aptitude test.

Here are some data to illustrate that point and to answer Patrick’s question.

First, look at “packages”: 

Here gun-possession, nuclear, GM-foods, and childhood-vaccine risk perceptions are plotted in relation to climate change risk perceptions (the plotted lines reflect locally weighted regression -- they are "truer" to the raw data than a linear regression line, reflecting the correlation coefficient I've also reported for each, would be).
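For the curious, lines & correlations like these can be generated with off-the-shelf tools. A sketch, once more with synthetic data & hypothetical column names:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm

# synthetic stand-in for the survey: five 0-7 ISRPM scores
rng = np.random.default_rng(5)
df = pd.DataFrame(rng.integers(0, 8, size=(2000, 5)),
                  columns=["gw_isrpm", "gun_isrpm", "nuke_isrpm",
                           "gm_isrpm", "vax_isrpm"])

for col, label in [("gun_isrpm", "guns"), ("nuke_isrpm", "nuclear power"),
                   ("gm_isrpm", "GM foods"), ("vax_isrpm", "vaccines")]:
    # locally weighted regression of each ISRPM on the global warming one
    fit = sm.nonparametric.lowess(df[col], df["gw_isrpm"], frac=0.6)
    plt.plot(fit[:, 0], fit[:, 1], label=label)
    print(label, df[col].corr(df["gw_isrpm"]))  # Pearson r for each pair

plt.xlabel("global warming ISRPM")
plt.ylabel("ISRPM")
plt.legend()
plt.show()
```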

Yes, GM food risk perceptions are correlated with global warming ones.  But the effect is very modest. It’s nothing like correlation between guns and climate change or nuclear and climate change.  You’ll find plenty of people—ones without two heads and who don’t think contrails are a government plot—who think climate change is a joke but GM foods a serious threat, and vice versa.

It’s really not part of the “climate change risk perception family.” 

How about in terms of “type”?

Enlarging a bit on some data that I’ve reported before, here are various risk perceptions plotted in relation to conventional left-right political views (measured with a composite scale that combines responses to party-identification and liberal-conservative ideology items):

Pretty clear, I think, that GM foods is just not a left-right issue.

As regular readers know, I’ve also examined GM food risks in relation to other types of “type” indicators, including the cultural cognition worldview scales and “interpretive community” scales derived from environmental risk perceptions.  It just doesn’t connect in a practically meaningful way.

So what to say?

Well, for one thing, there’s certainly no reason for embarrassment in finding out that things aren’t exactly as one conjectured on these matters.

As I said,  “risk packages”—because they reflect unobservable or “latent” dispositions, and because we are constrained to rely on partial and skewed impressions when we observe them—definitely have fuzzy peripheries.

In addition, the packages breed dynamics of misinformation, including the echo chamber effect and strategic behavior by deliberate science-communication environment polluters.

Under these circumstances, we should all adopt a stance of conscious provisionality toward our impressions here. We shouldn’t “disbelieve” what our senses tell us, but we should expect evidence now & again that we have misperceived—and indeed, seek out such evidence before making decisions of consequence that turn on whether our perceptions are correct.

As a famous scholar of risk perception—I can’t remember his name; early sign of senility? nah, couldn’t be!—said (in some other language, but this is a rough translation), “trust but verify!”

Maybe it’s just me, but I actually love it when evidence bites me in the ass on something like this.

Not just because I want to be sure the beliefs I hold are free of error, although of course I do feel that way.

But because every time the evidence surprises me I experience anew the sense of wonder at this phenomenon.

What is going on here?!  Why are there packages? Who are the “types”?

Why do some “risks” but not others become entangled in conflicts between diverse groups—all of which are amply stocked with individuals who are high in science comprehension and all of which have intact practices for transmitting what’s collectively known to their members?

I really want to know the answers—and I know that I still just don’t!

“Tomorrow,” in fact, I’ll show you something that is definitely freaking me out.

Sunday
Mar092014

If you think GM food & vaccine risk perceptions have any connection to the "climate change risk perception" family, think again

Still another riff on the "GM food risks aren't polarizing" & "there's no cultural conflict over vaccine risks!" themes.

We all know that risk perceptions come in interesting -- indeed, downright mysterious-- packages.  But sometimes we get confused about what exactly is in them.

 

Friday
Mar072014

Q. Where do cultural predispositions come from in the cultural cognition theory? A. They are exogenous -- descriptively & *normatively*!

A thoughtful friend & correspondent asks:

The question that you must have been asked many times is, ultimately, how do people obtain their cultural orientations?


If I read between the lines, part of the answer seems to be that these orientations are seeded by the people we associate with and the authorities we seek — perhaps by chance. After that seed is planted, then it becomes a self-reinforcing process: We continue to seek like-minded company and authorities, which strengthens the orientations, and the cycle continues. But there must be more to it than that. Genetics? Some social or cultural adaptive process? I'd like to say something about how we arrive at our cultural orientations.

My response:

I think the model/process you describe is pretty much right. I'd say, though, that the cycle -- the seeking out, the reinforcement -- is not the problem; indeed, it's part of the solution to the puzzle of what makes it possible for people (diverse ones, ones who can't just be told what's what by some central authority) reliably to identify what's collectively known. They immerse themselves in networks of people who they can understand and are motivated to engage and cooperate with, and use their rational faculties to discern inside of those affinity groups who genuinely knows what about what (that is, who knows what's known to science). When this process short circuits & becomes a source of self-reinforcing dissensus, that is a sign not that the process is pathological but that a pathology has infected the process, disabling our normal and normally reliable rational capacity to figure out what's known.

However, we notice the cultural insularity of our process for figuring things out only at moments like that, & we infer "there's a problem w/ the insularity & self-reinforcement!" But that's a kind of selection bias in our attention to such things: we are observing the process only when it is failing in a spectacular way; if we paid attention to the billions of boring cases where diverse people agree, we'd see the same insularity in the process by which diverse people figure things out.  Then we'd properly infer that the problem is not the process but some external condition that corrupts it. At that point, we would focus our reason, guided by the methods of empirical inquiry, to figure out the dynamics of the pathology -- and ultimately to control them...

You then ask me -- where do these affinities that are the source of the predispositions (the environment in which we figure out what's what) come from? I don't know!

Or I think likely I more or less know & the answer isn't *that* interesting: we are socialized into them by the accident of who our parents are & where we live.  That's the uninspiring "descriptive" account.

A more inspiring normative answer (maybe it's just a story? but it has the right moral, morally speaking) is this: we are autonomous, reasoning agents in a free society; it is inevitable that we will form a plurality of understandings of the best way to live.  That isn't the problem; it's the political way of life to be protected.  So let's take our cultural plurality as given & "solve" the "science communication problem" by removing the influences that conduce to dissensus & polarization, & that disrupt the usual consensus & convergence of free & reasoning citizens on the best (currently) available evidence....

Some perhaps relevant posts (best I can do, until you help me): 

But I will invite others readers of this blog to comment--likely they can do better!


Thursday
Mar062014

Developing effective vaccine-risk communication strategies: *Definitely* measure, but measure what *counts*

Now that the important Nyhan et al. study on vaccine-risk communications has gotten people's attention on the hazards of empirically uninformed vaccine-risk communication, it's important to reflect on exactly what it means for risk communication to be genuinely evidence based. Here's a contribution toward the discussion, excerpted from the "Recommendations" section of the CCP Vaccine Risk Perceptions and Ad Hoc Risk Communication study.

5. Reject story-telling alternatives to valid empirical analysis of public perceptions of vaccine risk.
 

Decision science has established a rich stock of mechanisms, from “anchoring” to “availability,” from “probability neglect” to “hyperbolic discounting,” from “overconfidence bias” to “pluralistic ignorance.” Treating them as a collection of story-telling templates, a person of even modest intelligence can easily use these dynamics to fabricate plausible “scientific” “explanations” for any observed social phenomenon (e.g., Brooks 2012). But the narrative coherence of such syntheses furnishes no grounds for crediting them as true. They are at best conjectures—fuel for the empirical-testing engine that alone propels genuine insight (Kahan 2014)—and when not acknowledged as such suggest either the expositor’s ignorance of the difference between story-telling and science or his or her intention to exploit the lack of such understanding on the part of others (Rachlinski 2001).

The case of vaccine-risk perceptions supplies a compelling example of the dangers of treating this genre of writing as a source of reliable guidance for practical decisions. In a compelling proof of the utility of decision science as a grab-bag of prefabricated story-telling templates, numerous commentators, popular and even scholarly, have used the inventory of mechanisms it comprises to “explain” a nonexistent phenomenon—namely, a “growing public distrust” of the safety of vaccinations (e.g., MacDonald, Smith & Appleton 2012).

Risk communication is a critical element of public health policy. It is a mistake for public health officials and professionals to exempt it from their field’s norm of evidence-based practice.

The number of genuine and valid empirical examinations of the general public’s perceptions of childhood vaccines is regrettably smaller than it should be. But both to promote the enlargement of it and to protect public health policy from the potentially deleterious consequences of seeking guidance from faux-empirical substitutes, those committed to conserving the high existing level of public support for universal immunization should base their risk-communication strategies on empirically informed assessments of who fears what and why in the domain of childhood vaccines.

6. Use behavioral measures to assess behavior; use fine-grained, field research & not surveys/polls to understand dynamics of resistance.
 

This study combined an attitudinal survey and an experiment. When administered to a diverse and appropriately recruited sample, attitudinal surveys enable measurement of the impact of affective and group-affinities on societal risk perceptions and information processing. These dynamics are important, because they reflect the quality of the science communication environment in which individuals evaluate risk information relevant to personal and collective decisionmaking.

But as stressed at the outset, survey methods alone are not valid for assessing the impact of vaccine-risk perceptions on the actual decisions of parents to permit their children to be vaccinated. Parents’ self-reports are not a reliable or valid measure of their children’s vaccination status; only behavioral measures akin to those reflected in the NIS are. Accordingly, researchers who use observational methods to investigate variance in vaccination coverage should rely on the NIS or on other valid behavioral measures of vaccination status (Opel et al. 2011b, 2013b).

The study results also suggest two other important limitations on survey methods. First, survey measures are unlikely to support valid inferences about the proportion of the public that holds beliefs or opinions on specific issues relating to vaccines, including the likelihood that vaccines cause autism or other diseases.

Because members of the public often have not formed opinions on or given meaningful attention to specific public policy issues (e.g., stem cell research), it is a mistake in general to treat specifically worded survey items (“Based on what you have read or heard, do you think the federal government should or should not fund federal stem cell research?”) as genuinely measuring positions on those matters (Bishop 2005; Schuman 1998). If such items are reliably measuring anything, it is an expression of a more generally pro- or con-attitude that is evoked by the item (in the case of stem cells, positions on “government spending” or possibly “abortion” or even simple partisan affiliation). What that attitude consists in cannot be reliably analyzed unless responses to the item in question are compared with responses to other items that would help to pin down the latent disposition that they are measuring (Berinsky & Druckman 2007).

The coherence of the responses to the items that made up the PUBLIC_HEALTH scale—and in particular, the high inverse correlation between the perceived risks of vaccines and the perceived benefits of them—suggests that what those items are measuring is an affective orientation (Slovic 2010) toward childhood vaccines. Under these circumstances, reliable inferences can be drawn from vaccine-risk/benefit items only about the valence of individuals’ affective orientation. But no single item can reliably be treated as revealing anything more specific—or more edifying—than that.

This point was illustrated by responses to the item on “postnatal isoerythrolysis.” Survey participants’ beliefs that childhood vaccination caused this fictional disease—one they necessarily had not heard of before—were highly correlated with their responses to every one of the other diverse risk-benefit items used to form the PUBLIC_HEALTH scale. Rather than reflecting a specific belief formed on the basis of exposure to information on vaccine risks, the affective orientation measured by “vaccine disease risk” items should be interpreted as an emotional predisposition to credit or dismiss propositions conditional on their perceived conformity to one’s orientation (Loewenstein et al. 2001; Slovic et al. 2004).
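To make the scale logic concrete, here is a minimal sketch, in Python with simulated data and hypothetical item names (not the study’s actual items or data), of how reverse-coding, inter-item correlation, and scale reliability play out when a single affective orientation drives every response, including a fictional-disease item:

    import numpy as np
    import pandas as pd

    # Simulated responses standing in for survey data: one latent pro-vaccine
    # affect drives every item, including a fictional-disease item (the analog
    # of the "postnatal isoerythrolysis" question).
    rng = np.random.default_rng(0)
    n = 500
    affect = rng.normal(size=n)
    df = pd.DataFrame({
        "benefit_protection": affect + rng.normal(scale=0.8, size=n),
        "benefit_community":  affect + rng.normal(scale=0.8, size=n),
        "risk_side_effects": -affect + rng.normal(scale=0.8, size=n),
        "risk_fictional":    -affect + rng.normal(scale=0.8, size=n),
    })

    # Reverse-code the risk items so all items point in the pro-vaccine direction.
    items = df.copy()
    items[["risk_side_effects", "risk_fictional"]] *= -1

    print(items.corr().round(2))  # high positive inter-item correlations throughout

    def cronbach_alpha(x: pd.DataFrame) -> float:
        """k/(k-1) * (1 - sum of item variances / variance of the summed score)."""
        k = x.shape[1]
        return k / (k - 1) * (1 - x.var(ddof=1).sum() / x.sum(axis=1).var(ddof=1))

    print(f"alpha = {cronbach_alpha(items):.2f}")

Because a single latent disposition generates all the responses in this simulation, even the never-heard-of disease item lines up with the rest of the scale, which is exactly the pattern the “postnatal isoerythrolysis” item displayed.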

Researchers might well have good reason to assess public knowledge about specific issues such as the impact that vaccines have on the risk of contracting autism or other diseases. But to do so, they will need to follow the steps necessary to form valid measures of such knowledge. Or in other words, they will need to use the psychometric tools that distinguish scholarly opinion research from popular opinion polling (Bishop 2005).

Second, general-population survey measures cannot be expected to generate insight into the identity or motivations of that portion of the population that is genuinely hostile to childhood vaccination. As the analysis of sources of variation in the PUBLIC_HEALTH scale revealed, none of the familiar cultural styles divided over other societal risks (such as those associated with climate change or nuclear power) has a negative affective orientation toward vaccines. To the extent that they explain any variance at all, these styles are associated only with differences in the intensity of the positive affective orientation toward vaccines that prevails in all these groups. Accordingly, none of the demographic or attitudinal indicators used to identify members of those groups can be expected to identify the characteristics that indicate the presence of whatever group identity might be shared by members of the “anti-vaccine” fringe.
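For readers who want to see what “explaining variance” in such a scale looks like in practice, here is a hedged sketch with simulated data; the predictors “hierarchy” and “individualism” are placeholders for worldview measures of the sort used in this literature, not the study’s actual variables:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for survey data: positive mean affect toward vaccines
    # in every group, with cultural outlooks shifting only its intensity.
    rng = np.random.default_rng(1)
    n = 800
    d = pd.DataFrame({
        "hierarchy": rng.normal(size=n),      # placeholder worldview measures
        "individualism": rng.normal(size=n),
    })
    d["public_health_scale"] = (
        0.6                                   # positive affect in all groups
        - 0.05 * d["hierarchy"]               # small difference in intensity only
        + rng.normal(scale=1.0, size=n)
    )

    fit = smf.ols("public_health_scale ~ hierarchy + individualism", data=d).fit()
    print(fit.params.round(3))
    print(f"R^2 = {fit.rsquared:.3f}")        # tiny: worldviews explain little variance

A positive intercept with small coefficients and negligible R-squared is the signature of the result described above: differences in the intensity of a positive orientation, not a negative orientation in any group.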

There are without question groups of individuals, some in geographically concentrated areas, who are hostile to childhood vaccines (Mnookin 2011). Who they are and why they feel the way they do are questions that merit serious study. But to answer these questions, researchers will need to use measures that are more fine-grained and discerning than the ones that can profitably be made use of in studying the small class of risk issues on which there is genuine cultural contestation.

Such research is now emerging. In a series of studies, Opel and his collaborators (2011a, 2011b, 2013b) have devised a “vaccine hesitancy” scale for new parents that predicts delay or avoidance of vaccination. Such a screening device would be comparable to ones used in diverse fields from credit assessment (e.g., Klinger, Khwaja & LaMonte 2013) to organizational staffing (e.g., Ones et al. 2007), not to mention ones used to predict or diagnose disease vulnerability (e.g., Wilkins et al. 2013). If perfected, it could be used by researchers to guide their investigation of who fears vaccines and why and to focus their testing of risk communication materials.
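To illustrate what behavioral validation of a screening instrument involves, here is a generic sketch with simulated data (not Opel et al.’s actual instrument, items, or results): score a short battery and test how well it discriminates a later behavioral outcome.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Simulated stand-in data: ten 5-point hesitancy items per parent, plus a
    # later behavioral outcome (1 = child under-vaccinated per records such as
    # the NIS), generated so that higher item scores raise the probability.
    rng = np.random.default_rng(2)
    n = 1000
    items = rng.integers(1, 6, size=(n, 10)).astype(float)
    hesitancy = items.mean(axis=1)
    p_undervax = 1 / (1 + np.exp(-2 * (hesitancy - 3.5)))
    outcome = (rng.random(n) < p_undervax).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(items, outcome, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"out-of-sample AUC = {auc:.2f}")   # discrimination of the screener

The out-of-sample AUC is the kind of figure a validation study reports: how well the screener separates parents who later under-vaccinate from those who do not.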

Resources—financial, intellectual, and social—should be devoted to the extension and refinement of these methods rather than ones that focus on attitudinal correlates of vaccine risk perceptions in more diffuse elements of the general population. In order for vaccine-risk communication to be empirically informed, it is essential not only to measure but to measure what counts.

7. Empirical study should be used to develop appropriately targeted risk communication strategies that are themselves appropriately responsive to empirically identified risk-perception concerns.
 

Anyone who dismisses the existence or seriousness of unfounded fears of childhood vaccines would be behaving foolishly. Skilled journalists and others have vividly documented enclaves of concerted resistance to universal immunization programs. Experienced practitioners furnish credible reports of higher numbers of parents seeking counsel and assurance of vaccine safety. And valid measures of vaccination coverage and childhood disease outbreaks confirm that the incidence of such outbreaks is higher in the enclaves in which vaccine coverage falls dangerously short of the high rates of vaccination prevailing at the national level (Atwell et al. 2013; Glanz et al. 2013; Omer et al. 2008).

At the same time, only someone insufficiently attuned to the insights and methods of the science of science communication would infer that this threat to public health warrants a large-scale, sweeping “education” or “marketing” campaign aimed at parents generally or at the public at large. The potentially negative consequences of such a campaign would not be limited to the waste of furnishing assurances of safety to large numbers of people who are in no need of it. High-profile, emphatic assurances of safety themselves tend to generate concern (Kahan 2013a; Kasperson et al. 1988). A broad-scale and indiscriminate campaign to communicate vaccine safety—particularly if understood to be motivated by a general decline in vaccination rates—could also furnish a cue that cooperation with universal immunization programs is low, potentially undermining reciprocal motivations to contribute to the public good of herd immunity. Lastly, such a campaign would create an advocacy climate ripe for the introduction of cultural partisanship and recrimination of the sort known to disable citizens’ capacity to recognize valid decision-relevant science generally (Bolsen & Druckman 2013; Kahan 2012), and valid science relevant to vaccines in particular (Gollust, Dempsey, Lantz, Ubel & Fowler 2010; Kahan, Braman, Cohen, Gastil & Slovic 2010).

The right response to dynamics productive of excess concern over risk is empirically informed risk communication strategies tailored to those specific dynamics. Relevant dynamics in this setting include not only those that motivate enclaves of resistance to universal immunization but also those that figure in the concerns of individual parents seeking counsel, as they ought to, from their families’ pediatricians. Risk communication strategies specifically responsive to those dynamics should be formulated (e.g., NCIRS 2013)—and they should be tested, both in the course of their development and in their administration (Shourie et al. 2013), so that those engaged in carrying them out can be confident that they are taking steps that are likely to work and can calibrate their approach as they learn more (Sadaf et al. 2013; Opel et al. 2012).

Again, preliminary research of this sort has commenced. Perfection of behavioral-prediction profiles of the sort featured in Opel et al. (2011a, 2011b, 2013b) would enable researchers not only to extend understanding of the sources and consequences of genuine vaccine hesitancy but also to test focused risk-communication strategies on appropriate message recipients. If made sufficiently precise, screening protocols of this sort would also enable practitioners to accurately identify parents in need of counseling, and public health officials to identify regions where the extent of hesitancy warrants intervention.

The public health establishment should exercise leadership to help health professionals and other concerned individuals and groups appreciate the distinction between targeted strategies of this sort and the ad hoc forms of risk communication that were the focus of this study. It should also help such groups understand that support for the former does not justify either encouragement or tolerance of the latter.

Refs

Atwell, J.E., et al. Nonmedical Vaccine Exemptions and Pertussis in California, 2010. Pediatrics 132, 624-630 (2013).

Berinsky, A.J. & Druckman, J.N. The Polls—Review: Public Opinion Research and Support for the Iraq War. Public Opin Quart 71, 126-141 (2007).

Bishop, G.F. The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, Lanham, MD, 2005).

Bolsen, T., Druckman, J. & Cook, F.L. The Effects of the Politicization of Science on Public Support for Emergent Technologies. Institute for Policy Research Northwestern University Working Paper Series (2013). Available at http://www.ipr.northwestern.edu/publications/papers/2013/ipr-wp-13-11.html

Bowles, S. & Gintis, H. A Cooperative Species: Human Reciprocity and Its Evolution (Princeton University Press, Princeton, 2013).

Brooks, D. The Social Animal: The Hidden Sources of Love, Character, and Achievement (Random House Trade Paperbacks, New York, 2012).

Glanz, J.M., et al. A Population-Based Cohort Study of Undervaccination in 8 Managed Care Organizations across the United States. JAMA Pediatrics 167, 274-281 (2013).

Gollust, S.E., Dempsey, A.F., Lantz, P.M., Ubel, P.A. & Fowler, E.F. Controversy Undermines Support for State Mandates on the Human Papillomavirus Vaccine. Health Affair 29, 2041-2046 (2010).

Kahan, D. Making Climate-Science Communication Evidence Based—All the Way Down. In Culture, Politics and Climate Change, eds. M. Boykoff & D. Crow. (Routledge Press, 2014).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013a).

Kasperson, R.E., et al. The Social Amplification of Risk: A Conceptual Framework. Risk Analysis 8, 177-187 (1988).

Klinger, B., Khwaja, A. & LaMonte, J. Improving Credit Risk Analysis with Psychometrics in Peru (Inter-American Development Bank, 2013).

Loewenstein, G.F., Weber, E.U., Hsee, C.K. & Welch, N. Risk as Feelings. Psychological Bulletin 127, 267-287 (2001).

MacDonald, N.E., Smith, J. & Appleton, M. Risk Perception, Risk Management and Safety Assessment: What Can Governments Do to Increase Public Confidence in Their Vaccine System? Biologicals 40, 384-388 (2012).

Mnookin, S. The Panic Virus: A True Story of Medicine, Science, and Fear (Simon & Schuster, New York, 2011).

NCIRS, MMR Decision Aid (2013). Available at http://www.ncirs.edu.au/immunisation/education/mmr-decision/index.php.

Omer, S.B., et al. Geographic Clustering of Nonmedical Exemptions to School Immunization Requirements and Associations with Geographic Clustering of Pertussis. American Journal of Epidemiology 168, 1389-1396 (2008).

Ones, D.S., Dilchert, S., Viswesvaran, C. & Judge, T.A. In support of personality assessment in organizational settings. Personnel Psychology 60, 995-1027 (2007).

Opel, D.J., et al. Characterizing Providers’ Immunization Communication Practices During Health Supervision Visits with Vaccine-Hesitant Parents: A Pilot Study. Vaccine 30, 1269-1275 (2012).

Opel, D.J., et al. Development of a survey to identify vaccine-hesitant parents: The parent attitudes about childhood vaccines survey. Human Vaccines 7, 419-425 (2011a).

Opel, D.J., et al. The Relationship between Parent Attitudes About Childhood Vaccines Survey Scores and Future Child Immunization Status: A Validation Study. JAMA Pediatrics 167, 1065-1071 (2013b).

Opel, D.J., et al. Validity and reliability of a survey to identify vaccine-hesitant parents. Vaccine 29, 6598-6605 (2011b).

Otto, S. Antiscience Beliefs Jeopardize U.S. Democracy. Scientific American (Oct. 16, 2012a). Available at http://www.scientificamerican.com/article.cfm?id=antiscience-beliefs-jeopardize-us-democracy.

Otto, S.L. One Way to Help Science: Become Republican. Nature Medicine 18, 17 (2012b).

Rachlinski, J.J. Comment: Is Evolutionary Analysis of Law Science or Storytelling. Jurimetrics 41, 365-370 (2001).

Sadaf, A., Richards, J.L., Glanz, J., Salmon, D.A. & Omer, S.B. A Systematic Review of Interventions for Reducing Parental Vaccine Refusal and Vaccine Hesitancy. Vaccine 31, 4293-4304 (2013).

Shourie, S., Jackson, C., Cheater, F., Bekker, H., Edlin, R., Tubeuf, S., Harrison, W., McAleese, E., Schweiger, M. & Bleasby, B. A cluster randomised controlled trial of a web based decision aid to support parents’ decisions about their child's Measles Mumps and Rubella (MMR) vaccination. Vaccine 31, 6003-6010 (2013).

Slovic, P. The Feeling of Risk: New Perspectives on Risk Perception (Earthscan, London; Washington, DC, 2010).

Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004).

Slovic, P., Peters, E., Finucane, M.L. & MacGregor, D.G. Affect, Risk, and Decision Making. Health Psychology 24, S35-S40 (2005).

Wilkins, C.H., Roe, C.M., Morris, J.C. & Galvin, J.E. Mild physical impairment predicts future diagnosis of dementia of the Alzheimer’s type. Journal of the American Geriatrics Society 61, 1055-1059 (2013).

Tuesday
Mar042014

A nice empirical study of vaccine risk communication--and an unfortunate, empirically uninformed reaction to it

Pediatrics published (in “advance on-line” form) an important study yesterday on the effect of childhood-vaccine risk communication. 

The study was conducted by a team of researchers including Brendan Nyhan and Jason Reifler, both of whom have done excellent studies on public-health risk communication in the past.

NR et al. conducted an experiment in which they showed a large sample of U.S. parents with children age 17 or under communications on the risks and benefits of childhood vaccinations.  

Exposure to the communications, they report, produced one or another perverse effect, including greater concern over vaccine risks and, among a segment of respondents with negative attitudes toward vaccines, a lower self-reported intent to vaccinate any “future child” for MMR (measles, mumps, rubella).

The media/internet reacted with considerable alarm: “Parents Less Likely to Vaccinate Kids After Hearing Government’s Safety Assurance”; “Trying To Convince Parents To Vaccinate Their Kids Just Makes The Problem Worse”; “Pro-vaccination efforts, debunking autism myths may be scaring wary parents from shots”. Etc.

Actually, I think this is a serious misinterpretation of NR et al.

The study does furnish reason for concern. 

But what we should be anxious about, the NR et al. experiment shows, is precisely the simplistic, empirically uninformed style of risk communication that many (not all!) of the media reports on the study reflect.

To appreciate the significance of the study, it’s useful to start with the distressing lack of connection between fact, on the one hand, and the sort of representations that media and internet commentators constantly make about the public’s attitude toward childhood immunizations, on the other.

The message of these ad hoc risk communicators consists of a collection of dire (also trite & formulaic) pronouncements: a “growing crisis of public confidence”—an “epidemic of fear” among a “large and growing number” of “otherwise mainstream parents”—has generated an “erosion in immunization rates,” leading “predictably” to the resurgence of diseases considered vanquished long ago. “From Taliban fighters to California soccer moms, those who choose not to vaccinate their children against preventable diseases are causing a public health crisis.”

According to the best available evidence, as collected and interpreted by the nation’s most authoritative public health experts, this story is simply false.

Childhood vaccine rates are not “eroding” in the U.S. 

Coverage for MMR, for pertussis (“whooping cough”), for polio, for hepatitis B—all have been over 90%, the national public health target, for over a decade. The percentage of children whose parents refuse to permit them to receive any of the recommended childhood vaccines has remained under 1% during this time.

Every year, with the release of the latest results of the National Immunization Survey, the CDC issues a press release to announce the “reassuring” news that childhood immunization rates either “remain high” or are “increasing.” “Nearly all parents are choosing to have their children protected against dangerous childhood diseases,” the officials announce.

There’s definitely been a spike in whooping cough cases in recent years. 

But “[p]arents refusing to get their children vaccinated,” according to the CDC, are “not the driving force behind the[se] large scale outbreaks.” In addition to “increased awareness, improved diagnostic tests, better reporting, [and] more circulation of the bacteria,” the CDC has identified “waning immunity” from an ineffective booster shot as one of the principal causes.

Measles has been deemed eliminated in the United States but can be introduced into U.S. communities by individuals infected during travel abroad.

Fortunately, “[h]igh MMR vaccine coverage in the United States (91% among children aged 19–35 months),” the CDC states, “limits the size of [such] outbreaks.” “[D]uring 2001–2012, the median annual number of measles cases reported in the United States was 60 (range: 37–220).”

The “public health crisis” theme that pervades U.S. media and internet commentary dates to the 1998 publication in the British medical journal Lancet of a bogus and since-retracted study that purported to find a link between the MMR vaccine and autism.

The study initiated a genuine panic, and a demonstrable decline in vaccine rates, in the U.K.

Public health officials were eager to head off the same in the U.S., and advocacy groups and the media were—appropriately!—eager to pitch in to help.

Fortunately, the flap over the bogus study had no effect on U.S. vaccination rates, which have historically been very high, or on the attitudes of the general public, which have always been and remain overwhelmingly positive toward universal immunization.

But through an echo-chamber effect, the “public health crisis” warning bells have continued to clang—all the louder, in fact, over time.

One might think—likely some of those who are continuing to sound this alarm do—that the persistent “red alert” status can’t really do any harm.

But that’s where the public-health risk of not having a coordinated, empirically informed, evidence-based system of risk communication comes in.

It’s a well-established finding in the empirical study of public risk perceptions that emphatically reassuring people that a technology poses no serious risk in fact amplifies concern.

How other people in their situation are reacting is an important cue that ordinary members of the public rely on to gauge risk. The message that “many people like you are afraid” thus excites apprehension, even if it is embodied in an admonition that there’s nothing to worry about.

This anxiety-amplification effect doesn’t mean that one shouldn’t try to reassure genuinely worried people whose concerns are in fact not well founded. In that case, the benefits of accurate risk information, if communicated effectively, can be expected to outweigh any marginal increase in apprehension, which is likely to be small if people are already afraid.

But the anxiety-amplification effect of risk reassurance does mean that it is a mistake to misleadingly communicate to unworried people that people in their situation—a “large and growing number” of “otherwise mainstream parents”; “California soccer moms” (etc. etc., blah blah)—are worried when they aren’t! In that situation, the message “all of you foolish people are needlessly worried—JUST CALM DOWN!” generates a real risk of inducing fear without creating any benefit.

The excellent NR et al. study furnishes evidence to be concerned that ad hoc, empirically uninformed vaccine-risk communication could have exactly this effect.

The NR et al. study featured a variety of “risk-benefit” communications. One was a fairly straightforward report that rebutted the claim that vaccines cause autism. Two others stressed the health benefits of vaccination, one in fairly analytic terms and the other in a vivid narrative in which a parent described the terrifying consequences when her unvaccinated child contracted measles.

The result?

Consistent with the anxiety-amplification effect, subjects who received the vivid narrative communication became more concerned about the side effects of getting the MMR vaccine.

The impact of the blander communication that refuted the MMR-autism link was mixed.

Overall, the subjects in that condition were in fact less likely to agree that vaccines cause autism than parents in a control condition.

They were no less likely than parents in the control to believe that the MMR vaccine has “serious side effects.”  But they weren’t any more likely to believe that either.

The MMR-autism refutation communication did have a perverse effect on one set of subjects, however.

NR et al. measured the study participants’ “vaccine attitudes” with a scale that assessed their agreement or disagreement with items relating to the risks and benefits of vaccines (e.g., “I am concerned about serious adverse effects of vaccines”).  The majority of parents expressed positive attitudes.

But among those who held the most negative attitudes, the self-reported intention to vaccinate any “future child” for MMR was actually lower in the group exposed to the communication that refuted the MMR-autism link than it was among their counterparts in the control condition.
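To see the shape of the comparison behind this subgroup result, here is a minimal sketch with simulated data only (not NR et al.’s data or estimates): contrast self-reported intent between treated and control respondents within the most negative attitude quartile.

    import numpy as np
    from scipy import stats

    # Simulated stand-in for the design: attitude scores (higher = more negative
    # toward vaccines), random assignment to the correction message, and
    # self-reported intent, built with a backlash only in the top quartile.
    rng = np.random.default_rng(3)
    n = 1200
    attitude = rng.normal(size=n)
    treated = rng.integers(0, 2, size=n)
    most_negative = attitude > np.quantile(attitude, 0.75)
    intent = (5.0
              - 0.5 * attitude
              - 0.8 * treated * most_negative  # perverse effect in the subgroup
              + rng.normal(scale=1.0, size=n))

    t, p = stats.ttest_ind(intent[most_negative & (treated == 1)],
                           intent[most_negative & (treated == 0)])
    print(f"most-negative quartile, treated vs. control: t = {t:.2f}, p = {p:.3f}")

Note that nothing in a comparison like this speaks to behavior; the outcome is still a self-report.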

What should we make of this?

I don’t think it would be correct to infer from the experiment that vaccine-safety “education” will always “backfire” or that trying to “assure” anxious parents will make them “less likely to vaccinate” their children.

In fact, that interpretation would itself be empirically uninformed.

For one thing, NR et al. used “self-report” measures, which are well known not to be valid indicators of vaccination behavior.  Indeed, parents’ responses to survey questions grossly overstate the extent to which their children are not immunized.

Great work is being done to develop a behaviorally validated attitudinal screening instrument for identifying parents who are genuinely likely not to vaccinate their children. 

But that research itself confirms that many, many more parents say “yes” when asked if they are concerned that vaccines might have “serious side effects”—the sort of item featured in the NR et al. scale—than actually refrain from vaccinating their children.

What’s more, the NR et al. sample was not genuinely tailored to parents who have children in the age range for the MMR vaccine. 

The first MMR dose is administered at one year of age, and the second before age 4 or 5.

The NR et al. parents had children “17 or younger.”

The mean age of the study respondents is not reported, but 80% were over 30, and 40% over 40.  So no doubt many were past the stage in life where they’d be making decisions about whether any “future” child should get the MMR vaccine.

What are survey respondents who aren’t genuinely reflecting on whether to vaccinate their children telling us when they say they “won’t”?

This is a question that CCP’s recent Vaccine Risk and Ad Hoc Risk Communication Study helps to answer.

When scales like the one featured in NR et al. are administered to members of the general public, they measure a more generic affective attitude toward vaccination.

The vast majority of the U.S. public has a very positive affective orientation toward vaccines.

An experiment like the one NR et al. conducted is instructive on how risk communication might influence that sort of general affective orientation. And what their experiment found is that there’s good reason to be concerned that the dominant, ad hoc, empirically uninformed style of risk communication (on display in coverage of their study) can in fact adversely affect that attitude.

That finding is consistent with the ones reported in the CCP study, which found that stories emphasizing the “public health crisis” trope cause people to grossly overestimate the extent to which parents in the U.S. are resisting vaccination of their children.

The CCP study also found that the equation of “vaccine hesitancy” with disbelief in evolution and skepticism about climate change—another popular trope—can create cultural polarization over vaccine safety among diverse people who otherwise all agree that vaccine benefits are high and their risks low.

That finding is closely related, I suspect, to the perverse effect that the NR et al. experiment produced in the self-reported “intent to vaccinate” responses of the small group of respondents in their sample who had a negative attitude toward vaccines.

The dynamic of motivated reasoning predicts that individuals will “push back” when presented information that challenges an identity-defining belief. 

There aren’t many individuals in U.S. society whose identity includes hostility to universal vaccination—they are outliers in every recognizable cultural group.

But it’s not surprising that they would express that belief with all the more vehemence when shown information asserting that vaccines are safe and effective and then immediately asked whether they’d vaccinate any “future children.”

The NR et al. study is superbly well done and very important.

But the lesson it teaches is not that it is “futile” to try to communicate with concerned parents.

It’s that it is a bad idea to flood public discourse in a blunderbuss fashion with communications that state or imply that there is a “growing crisis of confidence” in vaccines that is “eroding” immunization rates.

It’s a good idea instead to use valid empirical means to formulate targeted and effective vaccine-safety communication strategies.

As indicated, there is in fact an effort underway to develop behaviorally validated measures for identifying parents who are most at risk of vaccine hesitancy (who make up a much smaller portion of the already relatively small portion of the population who express a “negative attitude” toward vaccines when responding to public opinion survey measures). With that sort of measure in hand, researchers can test counseling strategies (ones informed, of course, by existing research on what works in comparable areas) aimed precisely at the parents who would benefit from information.

The public health establishment needs to make clear that that sort of research merits continued and expanded support.

In addition, the public health establishment needs to play a leadership role in creating a shared cultural understanding—among journalists, advocates, and individual health professionals—that risk communication, like all other elements of public-health policy, must be empirically informed.

The NR et al. study furnishes an inspiring glimpse of how much value can be obtained from evidence-based methods of risk communication.

The reaction to the study underscores how much risk we face if we continue to rely on an ad hoc, evidence-free style of risk communication instead.
