
A nice empirical study of vaccine risk communication--and an unfortunate, empirically uninformed reaction to it

Pediatrics published (in “advance on-line” form) an important study yesterday on the effect of childhood-vaccine risk communication. 

The study was conducted by a team of researchers including Brendan Nyhan and Jason Reifler, both of whom have done excellent studies on public-health risk communication in the past.

NR et al. conducted an experiment in which they showed a large sample of U.S. parents with children age 17 or under communications on the risks and benefits of childhood vaccinations.  

Exposure to the communications, they report, produced one or another perverse effect, including greater concern over vaccine risks and, among a segment of respondents with negative attitudes toward vaccines, a lower self-reported intent to vaccinate any “future child” for MMR (measles, mumps, rubella).

The media/internet reacted with considerable alarm: “Parents Less Likely to Vaccinate Kids After Hearing Government’s Safety Assurance”; “Trying To Convince Parents To Vaccinate Their Kids Just Makes The Problem Worse”; “Pro-vaccination efforts, debunking autism myths may be scaring wary parents from shots”. Etc.

Actually, I think this is a serious misinterpretation of NR et al.

The study does furnish reason for concern. 

But what we should be anxious about, the NR et al. experiment shows, is precisely the simplistic, empirically uninformed style of risk communication that many (not all!) of the media reports on the study reflect.

To appreciate the significance of the study, it’s useful to start with the distressing lack of connection between fact, on the one hand, and the sort of representations that media and internet commentators constantly make about the public’s attitude toward childhood immunizations, on the other.

The message of these ad hoc risk communicators consists of a collection of dire (also trite & formulaic) pronouncements: a “growing crisis of public confidence”—an “epidemic of fear” among a “large and growing number” of “otherwise mainstream parents”—has generated an “erosion in immunization rates,” leading “predictably to the resurgence of diseases considered vanquished long ago.” “From Taliban fighters to California soccer moms, those who choose not to vaccinate their children against preventable diseases are causing a public health crisis.”

According to the best available evidence, as collected and interpreted by the nation’s most authoritative public health experts, this story is simply false.

Childhood vaccine rates are not “eroding” in the U.S. 

Coverage for MMR, for pertussis (“whooping cough”), for polio, for hepatitis B—all have been over 90%, the national public health target, for over a decade. The percentage of children whose parents refuse to permit them to receive any of the recommended childhood vaccines has remained under 1% during this time.

Every year, with the release of the latest results of the National Immunization Survey, the CDC issues a press release to announce the “reassuring” news that childhood immunization rates either “remain high” or are “increasing.” “ ‘Nearly all parents are choosing to have their children protected against dangerous childhood diseases,’ ” the officials announce.

There’s definitely been a spike in whooping cough cases in recent years. 

But “[p]arents refusing to get their children vaccinated,” according to the CDC, are “not the driving force behind the[se] large scale outbreaks.” In addition to “increased awareness, improved diagnostic tests, better reporting, [and] more circulation of the bacteria,” the CDC has identified “waning immunity” from an ineffective booster shot as one of the principal causes.

Measles has been declared eliminated in the United States but can be reintroduced into U.S. communities by individuals infected during travel abroad.

Fortunately, “[h]igh MMR vaccine coverage in the United States (91% among children aged 19–35 months),” the CDC states, “limits the size of [such] outbreaks.” “[D]uring 2001–2012, the median annual number of measles cases reported in the United States was 60 (range: 37–220).”

The “public health crisis” theme that pervades U.S. media and internet commentary dates to the 1998 publication in the British medical journal Lancet of a bogus and since-retracted study that purported to find a link between the MMR vaccine and autism.

The study initiated a genuine panic, and a demonstrable decline in vaccine rates, in the U.K.

Public health officials were eager to head off the same in the U.S., and advocacy groups and the media were—appropriately!—eager to pitch in to help.

Fortunately, the flap over the bogus study had no effect on U.S. vaccination rates, which have historically been very high, or on the attitudes of the general public, which have always been and remain overwhelmingly positive toward universal immunization.

But through an echo-chamber effect, the “public health crisis” warning bells have continued to clang—all the louder, in fact, over time.

One might think—likely some of those who are continuing to sound this alarm do—that the persistent “red alert” status can’t really do any harm.

But that’s where the public-health risk of not having a coordinated, empirically informed, evidence-based system of risk communication comes in.

It’s a well-established finding in the empirical study of public risk perceptions that emphatically reassuring people that a technology poses no serious risk in fact amplifies concern.

How other people in their situation are reacting is an important cue that ordinary members of the public rely on to gauge risk. The message that “many people like you” are afraid thus excites apprehension, even if the message is embodied in an admonition that there’s nothing to worry about.

This anxiety-amplification effect doesn’t mean that one shouldn’t try to reassure genuinely worried people when their concerns are in fact not well founded. In that case, the benefits of accurate risk information, if communicated effectively, will likely outweigh any marginal increase in apprehension, which should be small if people are already afraid.

But the anxiety-amplification effect of risk reassurance does mean that it is a mistake to misleadingly communicate to unworried people that people in their situation—a “large and growing number” of “otherwise mainstream parents”; “California soccer moms” (etc. etc., blah blah)—are worried when they aren’t! In that situation, the message “all of you foolish people are needlessly worried—JUST CALM DOWN!” generates a real risk of inducing fear without creating any benefit.

The excellent NR et al. study furnishes evidence to be concerned that ad hoc, empirically uninformed vaccine-risk communication could have exactly this effect.

The NR et al. study featured a variety of “risk-benefit” communications. One was a fairly straightforward report that rebutted the claim that vaccines cause autism. Two others stressed the health benefits of vaccination, one in fairly analytic terms and the other in a vivid narrative in which a parent described the terrifying consequences when her unvaccinated child contracted measles.

The result?

Consistent with the anxiety-amplification effect, subjects who received the vivid narrative communication became more concerned about the side effects of getting the MMR vaccine.

The impact of the blander communication that refuted the MMR-autism link was mixed.

Overall, the subjects in that condition were in fact less likely to agree that vaccines cause autism than parents in a control condition.

They were no less likely than parents in the control to believe that the MMR vaccine has “serious side effects.”  But they weren’t any more likely to believe that either.

The MMR-autism refutation communication did have a perverse effect on one set of subjects, however.

NR et al. measured the study participants’ “vaccine attitudes” with a scale that assessed their agreement or disagreement with items relating to the risks and benefits of vaccines (e.g., “I am concerned about serious adverse effects of vaccines”).  The majority of parents expressed positive attitudes.

But among those who held the most negative attitudes, the self-reported intention to vaccinate any “future child” for MMR was actually lower in the group exposed to the communication that refuted the MMR-autism link than it was among their counterparts in the control condition.

What should we make of this?

I don’t think it would be correct to infer from the experiment that vaccine-safety “education” will always “backfire” or that trying to “assure” anxious parents will make them “less likely to vaccinate” their children.

In fact, that interpretation would itself be empirically uninformed.

For one thing, NR et al. used “self-report” measures, which are well known not to be valid indicators of vaccination behavior.  Indeed, parents’ responses to survey questions grossly overstate the extent to which their children are not immunized.

Great work is being done to develop a behaviorally validated attitudinal screening instrument for identifying parents who are genuinely likely not to vaccinate their children. 

But that research itself confirms that many, many more parents say “yes” when asked if they are concerned that vaccines might have “serious side effects”—the sort of item featured in the NR et al. scale—than actually refrain from vaccinating their children.

What’s more, the NR et al. sample was not genuinely tailored to parents who have children in the age range for the MMR vaccine. 

That first MMR dose is administered at one year of age, and the second before age 4 or 5. 

The NR et al. parents had children “17 or younger.”

The mean age of the study respondents is not reported, but 80% were over 30, and 40% over 40.  So no doubt many were past the stage in life where they’d be making decisions about whether any “future” child should get the MMR vaccine.

What are survey respondents who aren’t genuinely reflecting on whether to vaccinate their children telling us when they say they “won’t”?

This is a question that CCP’s recent Vaccine Risk and Ad Hoc Risk Communication Study helps to answer.

When scales like the one featured in NR et al. are administered to members of the general public, they measure a more generic affective attitude toward vaccination.

The vast majority of the U.S. public has a very positive affective orientation toward vaccines.

An experiment like the one NR et al. conducted is instructive on how risk communication might influence that sort of general affective orientation. And what their experiment found is that there’s good reason to be concerned that the dominant ad hoc, empirically uninformed style of risk communication (on display in coverage of their study) can in fact adversely affect that attitude.

That finding is consistent with the ones reported in the CCP study, which found that stories emphasizing the “public health crisis” trope cause people to grossly overestimate the extent to which parents in the U.S. are resisting vaccination of their children.

The CCP study also found that the equation of “vaccine hesitancy” with disbelief in evolution and skepticism about climate change—another popular trope—can create cultural polarization over vaccine safety among diverse people who otherwise all agree that vaccine benefits are high and their risks low.

That finding is closely related, I suspect, to the perverse effect that the NR et al. experiment produced in the self-reported “intent to vaccinate” responses of the small group of respondents in their sample who held negative attitudes toward vaccines.

The dynamic of motivated reasoning predicts that individuals will “push back” when presented with information that challenges an identity-defining belief.

There aren’t many individuals in U.S. society whose identity includes hostility to universal vaccination—they are outliers in every recognizable cultural group.

But it’s not surprising that they would express that belief all the more vehemently when shown information asserting that vaccines are safe and effective and then immediately asked whether they’d vaccinate “future children.”

The NR et al. study is superbly well done and very important.

But the lesson it teaches is not that it is “futile” to try to communicate with concerned parents.

It’s that it is a bad idea to flood public discourse in a blunderbuss fashion with communications that state or imply that there is a “growing crisis of confidence” in vaccines that is “eroding” immunization rates.

It’s a good idea instead to use valid empirical means to formulate targeted and effective vaccine-safety communication strategies.

As indicated, there is in fact an effort underway to develop behaviorally validated measures for identifying the parents most at risk of vaccine hesitancy (who make up a much smaller portion of the already relatively small portion of the population that expresses a “negative attitude” toward vaccines when responding to public opinion survey measures). With that sort of measure in hand, researchers can test counseling strategies (informed, of course, by existing research on what works in comparable areas) aimed precisely at the parents who would benefit from information.

The public health establishment needs to make clear that that sort of research merits continued and expanded support.

In addition, the public health establishment needs to play a leadership role in creating a shared cultural understanding—among journalists, advocates, and individual health professionals—that risk communication, like all other elements of public-health policy, must be empirically informed.

The NR et al. study furnishes an inspiring glimpse of how much value can be obtained from evidence-based methods of risk communication.

The reaction to the study underscores how much risk we face if we continue to rely on an ad hoc, evidence-free style of risk communication instead.


Reader Comments (5)

Thank you for this. Sadly, the sort of fear-inducing headlines that you cite are designed to drive page-views, and will be difficult to eradicate.

March 6, 2014 | Unregistered CommenterJulia


I'm not as pessimistic—or at least 50% of the time I'm not; maybe the same for you—as your comment seems to be.

At least some of the problem here is that journalists who are trying, very appropriately, to contribute to the public good are acting on the basis of a misunderstanding of fact. That misunderstanding exists & persists b/c of the absence in our public health culture—and in our science-informed public policymaking culture generally—of norms, practices, and procedures that reflect the importance of acquiring & using empirical evidence to guide transmission of decision-relevant science to those whose welfare can be benefited by it.


March 8, 2014 | Unregistered Commenterdmk38

Well, I'm a pediatrician in Portland, so I'm running into amazing levels of misinformation on a weekly if not daily basis. That probably colors my perception (and pessimism).

March 8, 2014 | Unregistered CommenterJulia


Ah, I see.

Well, you deserve effective, evidence-based guidance on how to help concerned parents make sense of information that confuses or frightens, whatever its source.

I feel 100% optimistic that you will get that -- there are some very talented & dedicated researchers in this area!

March 8, 2014 | Unregistered Commenterdmk38

Omitted from this analysis is the context of government mendacity that justifies public skepticism of any reassuring pronouncement that they hear. We have been reassured about Fukushima and GMO crops. In previous generations, we were reassured about the safety of asbestos and cigarettes. But the FDA is far and away the worst offender, with no ability to resist the profit hunger of Big Pharma. Vioxx and HRT were only the most famous cases. So many of our pharmaceutical products are prescribed because they alleviate symptoms in the short run, and we have no idea what their metabolic effect is in the long run.

Many consumers may not be able to compute the standard deviation of a statistical sample, but they are smart enough to detect when an article seems a little too eager to reassure them, and suspect that someone's profit is being put ahead of their safety.

March 10, 2014 | Unregistered CommenterJosh Mitteldorf
