Thursday
March 6, 2014

Developing effective vaccine-risk communication strategies: *Definitely* measure, but measure what *counts*

Now that the important Nyhan et al. study on vaccine-risk communications has focused people’s attention on the hazards of empirically uninformed vaccine-risk communication, it’s important to reflect on exactly what it means for risk communication to be genuinely evidence based. Here’s a contribution toward the discussion, excerpted from the “Recommendations” section of the CCP Vaccine Risk Perceptions and Ad Hoc Risk Communication study.

5. Reject story-telling alternatives to valid empirical analysis of public perceptions of vaccine risk.
 

Decision science has established a rich stock of mechanisms, from “anchoring” to “availability,” from “probability neglect” to “hyperbolic discounting,” from “overconfidence bias” to “pluralistic ignorance.” Treating them as a collection of story-telling templates, a person of even modest intelligence can easily use these dynamics to fabricate plausible “scientific” “explanations” for any observed social phenomenon (e.g., Brooks 2012). But the narrative coherence of such syntheses furnishes no grounds for crediting them as true. They are at best conjectures—fuel for the empirical-testing engine that alone propels genuine insight (Kahan 2014)—and when not acknowledged as such suggest either the expositor’s ignorance of the difference between story-telling and science or his or her intention to exploit the lack of such understanding on the part of others (Rachlinski 2001).

The case of vaccine-risk perceptions supplies a compelling example of the dangers of treating this genre of writing as a source of reliable guidance for practical decisions. In a compelling proof of the utility of decision science as a grab-bag of prefabricated story-telling templates, numerous commentators, popular and even scholarly, have used the inventory of mechanisms it comprises to “explain” a nonexistent phenomenon—namely, a “growing public distrust” of the safety of vaccinations (e.g., MacDonald, Smith & Appleton 2012).

Risk communication is a critical element of public health policy. It is a mistake for public health officials and professionals to exempt it from their field’s norm of evidence-based practice.

The number of genuine and valid empirical examinations of the general public’s perceptions of childhood vaccines is regrettably smaller than it should be. But both to promote its enlargement and to protect public health policy from the potentially deleterious consequences of seeking guidance from faux-empirical substitutes, those committed to conserving the high existing level of public support for universal immunization should base their risk-communication strategies on empirically informed assessments of who fears what and why in the domain of childhood vaccines.

6. Use behavioral measures to assess behavior; use fine-grained field research, not surveys/polls, to understand dynamics of resistance.
 

This study combined an attitudinal survey and an experiment. When administered to a diverse and appropriately recruited sample, attitudinal surveys enable measurement of the impact of affective orientations and group affinities on societal risk perceptions and information processing. These dynamics are important because they reflect the quality of the science communication environment in which individuals evaluate risk information relevant to personal and collective decisionmaking.

But as stressed at the outset, survey methods alone are not valid for assessing the impact of vaccine-risk perceptions on the actual decisions of parents to permit their children to be vaccinated. Parents’ self-reports are not a reliable or valid measure of their children’s vaccination status; only behavioral measures akin to those reflected in the National Immunization Survey (NIS) are. Accordingly, researchers who use observational methods to investigate variance in vaccination coverage should rely on the NIS or on other valid behavioral measures of vaccination status (Opel et al. 2011b, 2013b).

The study results also suggest two other important limitations on survey methods. First, survey measures are unlikely to support valid inferences about the proportion of the public that holds beliefs or opinions on specific issues relating to vaccines, including the likelihood that vaccines cause autism or other diseases.

Because members of the public often have not formed opinions on or given meaningful attention to specific public policy issues (e.g., stem cell research), it is a mistake in general to treat specifically worded survey items (“Based on what you have read or heard, do you think the federal government should or should not fund federal stem cell research?”) as genuinely measuring positions on those matters (Bishop 2005; Schuman 1998). If such items are reliably measuring anything, it is an expression of a more general pro- or con-attitude that is evoked by the item (in the case of stem cells, positions on “government spending” or possibly “abortion” or even simple partisan affiliation). What that attitude consists in cannot be reliably analyzed unless responses to the item in question are compared with responses to other items that would help to pin down the latent disposition that they are measuring (Berinsky & Druckman 2007).

The coherence of the responses to the items that made up the PUBLIC_HEALTH scale—and in particular, the high, inverse correlation between the perceived risks of vaccines and the perceived benefits of them—suggests that what those items are measuring is an affective orientation (Slovic 2010) toward childhood vaccines. Under these circumstances, reliable inferences can be drawn from vaccine-risk/benefit items only about the valence of individuals’ affective orientation. But no single item can reliably be treated as revealing anything more specific—or more edifying—than that.

This point was illustrated by responses to the item on “postnatal isoerythrolysis.” Survey participants’ beliefs that childhood vaccination caused this fictional disease—one they necessarily had not heard of before—were highly correlated with their responses to every one of the other diverse risk-benefit items used to form the PUBLIC_HEALTH scale. Rather than reflecting a specific belief formed on the basis of exposure to information on vaccine risks, the affective orientation measured by “vaccine disease risk” items should be interpreted as an emotional predisposition to credit or dismiss propositions conditional on their perceived conformity to one’s orientation (Loewenstein et al. 2001; Slovic et al. 2004).
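
To make the scale logic concrete, here is a minimal sketch in Python, on simulated data, of how one can check whether a battery of risk/benefit items coheres into a single affective scale. Nothing here is the CCP study’s actual data or code; the item loadings, sample size, and number of items are illustrative assumptions.

```python
# Minimal sketch (simulated data; NOT the CCP study's items or code) of how
# risk/benefit survey items can be checked for coherence as a single
# affective scale: inspect inter-item correlations, reverse-code the
# benefit items, and compute Cronbach's alpha.
import numpy as np

rng = np.random.default_rng(0)

# Assume 1,000 respondents answer six 7-point Likert items, all driven by
# one latent affective orientation toward vaccines (plus noise). The last
# three are "benefit" items, so they load negatively on the latent trait.
n = 1000
latent = rng.normal(0, 1, n)
loadings = np.array([0.8, 0.7, 0.75, -0.8, -0.7, -0.75])
items = np.clip(np.round(4 + 1.2 * np.outer(latent, loadings) + rng.normal(0, 1, (n, 6))), 1, 7)

# A "risk" item and a "benefit" item should correlate inversely...
print("risk-benefit correlation:", round(np.corrcoef(items[:, 0], items[:, 3])[0, 1], 2))

# ...so reverse-code the benefit items before forming the composite scale.
items[:, 3:] = 8 - items[:, 3:]

def cronbach_alpha(x):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of summed scale)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
```

A strongly negative risk-benefit correlation plus a high alpha after reverse-coding is exactly the pattern described above: the items behave as indicators of a single affective orientation, not as independent beliefs about distinct propositions.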

Researchers might well have good reason to assess public knowledge about specific issues such as the impact that vaccines have on the risk of contracting autism or other diseases. But to do so, they will need to follow the steps necessary to form valid measures of such knowledge. Or in other words, they will need to use the psychometric tools that distinguish scholarly opinion research from popular opinion polling (Bishop 2005).

Second, general-population survey measures cannot be expected to generate insight into the identity or motivations of that portion of the population that is genuinely hostile to childhood vaccination. As the analysis of sources of variation in the PUBLIC_HEALTH scale revealed, none of the familiar cultural styles divided over other societal risks (such as those associated with climate change or nuclear power) has a negative affective orientation toward vaccines. To the extent that they explain any variance at all, these styles are associated only with differences in the intensity of the positive affective orientation toward vaccines that prevails in all these groups. Accordingly, none of the demographic or attitudinal indicators used to identify members of those groups can be expected to identify the characteristics that indicate the presence of whatever group identity might be shared by members of the “anti-vaccine” fringe.

There are without question groups of individuals, some in geographically concentrated areas, who are hostile to childhood vaccines (Mnookin 2011). Who they are and why they feel the way they do are questions that merit serious study. But to answer these questions, researchers will need to use measures that are more fine-grained and discerning than the ones that can profitably be made use of in studying the small class of risk issues on which there is genuine cultural contestation.

Such research is now emerging. In a series of studies, Opel and his collaborators (2011a, 2011b, 2013b) have devised a “vaccine hesitancy” scale for new parents that predicts delay or avoidance of vaccination. Such a screening device is comparable to ones used in diverse fields, from credit assessment (e.g., Klinger, Khwaja & LaMonte 2013) to organizational staffing (e.g., Ones et al. 2007), not to mention ones used to predict or diagnose disease vulnerability (e.g., Wilkins et al. 2013). If perfected, it could be used by researchers to guide their investigation of who fears vaccines and why and to focus their testing of risk-communication materials.
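
To see the generic logic of such a screening device, consider the following hedged sketch (Python, simulated data). The score range, link function, cutoff, and outcome variable are all invented for illustration; this is not the Opel et al. instrument, just the shape of the validation exercise: relate survey scores to an observed behavioral outcome, then evaluate the score as one would any screening test.

```python
# Generic sketch of a behaviorally validated screening scale (simulated
# data; hypothetical score range, link function, and cutoff).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

n = 2000
hesitancy_score = rng.uniform(0, 100, n)                     # summed survey-scale score
p_undervax = 1 / (1 + np.exp(-(hesitancy_score - 80) / 8))   # assumed dose-response link
undervaccinated = rng.binomial(1, p_undervax)                # behavioral outcome (e.g., records)

# Validate: does the survey score predict the behavioral outcome?
X = hesitancy_score.reshape(-1, 1)
model = LogisticRegression().fit(X, undervaccinated)
pred = model.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(undervaccinated, pred), 2))

# Screening use: flag parents above a cutoff for follow-up counseling, and
# report the cutoff's sensitivity/specificity against actual behavior.
flag = hesitancy_score > 70
sens = (flag & (undervaccinated == 1)).sum() / (undervaccinated == 1).sum()
spec = (~flag & (undervaccinated == 0)).sum() / (undervaccinated == 0).sum()
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

The crucial design choice is that sensitivity and specificity are defined against observed vaccination behavior, not against other survey answers; that is what distinguishes a behaviorally validated screening instrument from an ordinary attitude poll.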

Resources—financial, intellectual, and social—should be devoted to the extension and refinement of these methods rather than ones that focus on attitudinal correlates of vaccine risk perceptions in more diffuse elements of the general population. In order for vaccine-risk communication to be empirically informed, it is essential not only to measure but to measure what counts.

7. Empirical study should be used to develop appropriately targeted risk communication strategies that are themselves appropriately responsive to empirically identified risk-perception concerns.
 

Anyone who dismisses the existence or seriousness of unfounded fears of childhood vaccines would be behaving foolishly. Skilled journalists and others have vividly documented enclaves of concerted resistance to universal immunization programs. Experienced practitioners furnish credible reports of growing numbers of parents seeking counsel and assurance about vaccine safety. And valid measures of vaccination coverage and childhood disease outbreaks confirm that the incidence of such outbreaks is higher in enclaves in which vaccine coverage falls dangerously short of the high rates of vaccination prevailing at the national level (Atwell et al. 2013; Glanz et al. 2013; Omer et al. 2008).

At the same time, only someone insufficiently attuned to the insights and methods of the science of science communication would infer that this threat to public health warrants a large-scale, sweeping “education” or “marketing” campaign aimed at parents generally or at the public at large. The potentially negative consequences of such a campaign would not be limited to the waste of furnishing assurances of safety to large numbers of people who are in no need of it. High-profile, emphatic assurances of safety themselves tend to generate concern (Kahan 2013a; Kasperson et al. 1988). A broad-scale and indiscriminate campaign to communicate vaccine safety—particularly if understood to be motivated by a general decline in vaccination rates—could also furnish a cue that cooperation with universal immunization programs is low, potentially undermining reciprocal motivations to contribute to the public good of herd immunity. Lastly, such a campaign would create an advocacy climate ripe for the introduction of cultural partisanship and recrimination of the sort known to disable citizens’ capacity to recognize valid decision-relevant science generally (Bolsen & Druckman 2013; Kahan 2012), and valid science relevant to vaccines in particular (Gollust, Dempsey, Lantz, Ubel & Fowler 2010; Kahan, Braman, Cohen, Gastil & Slovic 2010).

The right response to dynamics productive of excess concern over risk is empirically informed risk communication strategies tailored to those specific dynamics. Relevant dynamics in this setting include not only those that motivate enclaves of resistance to universal immunization but also those that figure in the concerns of individual parents seeking counsel, as they ought to, from their families’ pediatricians. Risk communication strategies specifically responsive to those dynamics should be formulated (e.g., NCIRS 2013)—and they should be tested, both in the course of their development and in their administration (Shourie et al. 2013), so that those engaged in carrying them out can be confident that they are taking steps that are likely to work and can calibrate their approach as they learn more (Sadaf et al. 2013; Opel et al. 2012).

Again, preliminary research of this sort has commenced. Perfection of behavioral-prediction profiles of the sort featured in Opel et al. (2011a, 2011b, 2013b) would enable researchers not only to extend understanding of the sources and consequences of genuine vaccine hesitancy but also to test focused risk-communication strategies on appropriate message recipients. If made sufficiently precise, screening protocols of this sort would also enable practitioners to accurately identify parents in need of counseling, and public health officials to identify regions where the extent of hesitancy warrants intervention.

The public health establishment should exercise leadership to make health professionals and other concerned individuals and groups appreciate the distinction between targeted strategies of this sort and the ad hoc forms of risk communication that were the focus of this study. It should also help such groups understand that support for the former does not justify either encouragement or tolerance of the latter.

Refs

Atwell, J.E., et al. Nonmedical Vaccine Exemptions and Pertussis in California, 2010. Pediatrics 132, 624-630 (2013).

Berinsky, A.J. & Druckman, J.N. The Polls—Review: Public Opinion Research and Support for the Iraq War. Public Opin Quart 71, 126-141 (2007).

Bishop, G.F. The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, Lanham, MD, 2005).

Bolsen, T., Druckman, J. & Cook, F.L. The Effects of the Politicization of Science on Public Support for Emergent Technologies. Institute for Policy Research Northwestern University Working Paper Series (2013). Available at http://www.ipr.northwestern.edu/publications/papers/2013/ipr-wp-13-11.html

Bowles, S. & Gintis, H. A Cooperative Species: Human Reciprocity and Its Evolution (Princeton University Press, Princeton, 2013).

Brooks, D. The Social Animal: The Hidden Sources of Love, Character, and Achievement (Random House Trade Paperbacks, New York, 2012).

Glanz, J.M., et al. A Population-Based Cohort Study of Undervaccination in 8 Managed Care Organizations across the United States. JAMA Pediatrics 167, 274-281 (2013).

Gollust, S.E., Dempsey, A.F., Lantz, P.M., Ubel, P.A. & Fowler, E.F. Controversy Undermines Support for State Mandates on the Human Papillomavirus Vaccine. Health Affair 29, 2041-2046 (2010).

Kahan, D. Making Climate-Science Communication Evidence Based—All the Way Down. In Culture, Politics and Climate Change, eds. M. Boykoff & D. Crow. (Routledge Press, 2014).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013a).

Kasperson, R.E., et al. The Social Amplification of Risk: A Conceptual Framework. Risk Analysis 8, 177-187 (1988).

Klinger, B., Khwaja, A. & LaMonte, J. Improving credit risk analysis with psychometrics in Peru (Inter-American Development Bank, 2013).

Loewenstein, G.F., Weber, E.U., Hsee, C.K. & Welch, N. Risk as Feelings. Psychological Bulletin 127, 267-287 (2001).

MacDonald, N.E., Smith, J. & Appleton, M. Risk Perception, Risk Management and Safety Assessment: What Can Governments Do to Increase Public Confidence in Their Vaccine System? Biologicals 40, 384-388 (2012).

Mnookin, S. The Panic Virus: A True Story of Medicine, Science, and Fear (Simon & Schuster, New York, 2011).

NCIRS, MMR Decision Aid (2013). Available at http://www.ncirs.edu.au/immunisation/education/mmr-decision/index.php.

Omer, S.B., et al. Geographic Clustering of Nonmedical Exemptions to School Immunization Requirements and Associations with Geographic Clustering of Pertussis. American Journal of Epidemiology 168, 1389-1396 (2008).

Ones, D.S., Dilchert, S., Viswesvaran, C. & Judge, T.A. In support of personality assessment in organizational settings. Personnel Psychology 60, 995-1027 (2007).

Opel, D.J., et al. Characterizing Providers’ Immunization Communication Practices During Health Supervision Visits with Vaccine-Hesitant Parents: A Pilot Study. Vaccine 30, 1269-1275 (2012).

Opel, D.J., et al. Development of a survey to identify vaccine-hesitant parents: The parent attitudes about childhood vaccines survey. Human Vaccines 7, 419-425 (2011a).

Opel, D.J., et al. The Relationship between Parent Attitudes About Childhood Vaccines Survey Scores and Future Child Immunization Status: A Validation Study. JAMA Pediatrics 167, 1065-1071 (2013b).

Opel, D.J., et al. Validity and reliability of a survey to identify vaccine-hesitant parents. Vaccine 29, 6598-6605 (2011b).

Otto, S. Antiscience Beliefs Jeopardize U.S. Democracy. Scientific American (Oct. 16, 2012a), available at http://www.scientificamerican.com/article.cfm?id=antiscience-beliefs-jeopardize-us-democracy.

Otto, S.L. One Way to Help Science: Become Republican. Nature Medicine 18, 17 (2012b).

Rachlinski, J.J. Comment: Is Evolutionary Analysis of Law Science or Storytelling? Jurimetrics 41, 365-370 (2001).

Sadaf, A., Richards, J.L., Glanz, J., Salmon, D.A. & Omer, S.B. A Systematic Review of Interventions for Reducing Parental Vaccine Refusal and Vaccine Hesitancy. Vaccine 31, 4293-4304 (2013).

Shourie, S., Jackson, C., Cheater, F., Bekker, H., Edlin, R., Tubeuf, S., Harrison, W., McAleese, E., Schweiger, M. & Bleasby, B. A cluster randomised controlled trial of a web based decision aid to support parents’ decisions about their child's Measles Mumps and Rubella (MMR) vaccination. Vaccine 31, 6003-6010 (2013).

Slovic, P. The Feeling of Risk: New Perspectives on Risk Perception (Earthscan, London; Washington, DC, 2010).

Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004).

Slovic, P., Peters, E., Finucane, M.L. & MacGregor, D.G. Affect, Risk, and Decision Making. Health Psychology 24, S35-S40 (2005).

Wilkins, C.H., Roe, C.M., Morris, J.C. & Galvin, J.E. Mild physical impairment predicts future diagnosis of dementia of the Alzheimer’s type. Journal of the American Geriatrics Society 61, 1055-1059 (2013).

Tuesday
March 4, 2014

A nice empirical study of vaccine risk communication--and an unfortunate, empirically uninformed reaction to it

Pediatrics published (in “advance on-line” form) an important study yesterday on the effect of childhood-vaccine risk communication. 

The study was conducted by a team of researchers including Brendan Nyhan and Jason Reifler, both of whom have done excellent studies on public-health risk communication in the past.

NR et al. conducted an experiment in which they showed a large sample of U.S. parents with children aged 17 or under communications on the risks and benefits of childhood vaccinations.

Exposure to the communications, they report, produced one or another perverse effect, including greater concern over vaccine risks and, among a segment of respondents with negative attitudes toward vaccines, a lower self-reported intent to vaccinate any “future child” for MMR (measles, mumps, rubella).

The media/internet reacted with considerable alarm: “Parents Less Likely to Vaccinate Kids After Hearing Government’s Safety Assurance”; “Trying To Convince Parents To Vaccinate Their Kids Just Makes The Problem Worse”; “Pro-vaccination efforts, debunking autism myths may be scaring wary parents from shots”. Etc.

Actually, I think this is a serious misinterpretation of NR et al.

The study does furnish reason for concern. 

But what we should be anxious about, the NR et al. experiment shows, is precisely the simplistic, empirically uninformed style of risk communication that many (not all!) of the media reports on the study reflect.

To appreciate the significance of the study, it’s useful to start with the distressing lack of connection between fact, on the one hand, and the sort of representations that media and internet commentators constantly make about the public’s attitude toward childhood immunizations, on the other.

The message of these ad hoc risk communicators consists of a collection of dire (also trite and formulaic) pronouncements: a “growing crisis of public confidence”—an “epidemic of fear” among a “large and growing number” of “otherwise mainstream parents”—has generated an “erosion in immunization rates,” leading “predictably” to the resurgence of diseases considered vanquished long ago. “From Taliban fighters to California soccer moms, those who choose not to vaccinate their children against preventable diseases are causing a public health crisis.”

According to the best available evidence, as collected and interpreted by the nation’s most authoritative public health experts, this story is simply false.

Childhood vaccine rates are not “eroding” in the U.S. 

Coverage for MMR, for pertussis (“whooping cough”), for polio, for hepatitis B—all have been over 90%, the national public health target, for over a decade. The percentage of children whose parents refuse to permit them to receive any of the recommended childhood vaccines has remained under 1% during this time.

Every year, with the release of the latest results of the National Immunization Survey, the CDC issues a press release to announce the “reassuring” news that childhood immunization rates either “remain high” or are “increasing.” “Nearly all parents are choosing to have their children protected against dangerous childhood diseases,” the officials announce.

There’s definitely been a spike in whooping cough cases in recent years. 

But “[p]arents refusing to get their children vaccinated,” according to the CDC, are “not the driving force behind the[se] large scale outbreaks.” In addition to “increased awareness, improved diagnostic tests, better reporting, [and] more circulation of the bacteria,” the CDC has identified “waning immunity” from an ineffective booster shot as one of the principal causes.

Measles has been deemed eliminated in the United States but can be reintroduced into U.S. communities by individuals infected during travel abroad.

Fortunately, “[h]igh MMR vaccine coverage in the United States (91% among children aged 19–35 months),” the CDC states, “limits the size of [such] outbreaks.” “[D]uring 2001–2012, the median annual number of measles cases reported in the United States was 60 (range: 37–220).”

The “public health crisis” theme that pervades U.S. media and internet commentary dates to the 1998 publication, in the British medical journal The Lancet, of a bogus and since-retracted study that purported to find a link between the MMR vaccine and autism.

The study initiated a genuine panic, and a demonstrable decline in vaccine rates, in the U.K.

Public health officials were eager to head off the same in the U.S., and advocacy groups and the media were—appropriately!—eager to pitch in to help.

Fortunately, the flap over the bogus study had no effect on U.S. vaccination rates, which have historically been very high, or on the attitudes of the general public, which have always been and remain overwhelmingly positive toward universal immunization.

But through an echo-chamber effect, the “public health crisis” warning bells have continued to clang—all the louder, in fact, over time.

One might think—likely some of those who are continuing to sound this alarm do—that the persistent “red alert” status can’t really do any harm.

But that’s where the public-health risk of not having a coordinated, empirically informed, evidence-based system of risk communication comes in.

It’s a well-established finding in the empirical study of public risk perceptions that emphatically reassuring people that a technology poses no serious risk in fact amplifies concern.

How other people in their situation are reacting is an important cue that ordinary members of the public rely on to gauge risk. The message that “many people like you are afraid” thus excites apprehension, even if it is embodied in an admonition that there’s nothing to worry about.

This anxiety-amplification effect doesn’t mean that one shouldn’t try to reassure genuinely worried people when their concerns are in fact not well founded. In that case, the benefits of accurate risk information, if communicated effectively, will hopefully outweigh any marginal increase in apprehension, which is likely to be small if people are already afraid.

But the anxiety-amplification effect of risk reassurance does mean that it is a mistake to misleadingly communicate to unworried people that people in their situation—a “large and growing number” of “otherwise mainstream parents”; “California soccer moms” (etc. etc., blah blah)—are worried when they aren’t! In that situation, the message “all of you foolish people are needlessly worried—JUST CALM DOWN!” generates a real risk of inducing fear without creating any benefit.

The excellent NR et al. study furnishes evidence to be concerned that ad hoc, empirically uninformed vaccine-risk communication could have exactly this effect.

The NR et al. study featured a variety of “risk-benefit” communications. One was a fairly straightforward report that rebutted the claim that vaccines cause autism. Two others stressed the health benefits of vaccination, one in fairly analytic terms and the other in a vivid narrative in which a parent described the terrifying consequences when her unvaccinated child contracted measles.

The result?

Consistent with the anxiety-amplification effect, subjects who received the vivid narrative communication became more concerned about the side effects of getting the MMR vaccine.

The impact of the blander communication that refuted the MMR-autism link was mixed.

Overall, the subjects in that condition were in fact less likely to agree that vaccines cause autism than parents in a control condition.

They were no less likely than parents in the control to believe that the MMR vaccine has “serious side effects.”  But they weren’t any more likely to believe that either.

The MMR-autism refutation communication did have a perverse effect on one set of subjects, however.

NR et al. measured the study participants’ “vaccine attitudes” with a scale that assessed their agreement or disagreement with items relating to the risks and benefits of vaccines (e.g., “I am concerned about serious adverse effects of vaccines”).  The majority of parents expressed positive attitudes.

But among those who held the most negative attitudes, the self-reported intention to vaccinate any “future child” for MMR was actually lower in the group exposed to the communication that refuted the MMR-autism link than it was among their counterparts in the control condition.

What should we make of this?

I don’t think it would be correct to infer from the experiment that vaccine-safety “education” will always “backfire” or that trying to “assure” anxious parents will make them “less likely to vaccinate” their children.

In fact, that interpretation would itself be empirically uninformed.

For one thing, NR et al. used “self-report” measures, which are well known not to be valid indicators of vaccination behavior.  Indeed, parents’ responses to survey questions grossly overstate the extent to which their children are not immunized.

Great work is being done to develop a behaviorally validated attitudinal screening instrument for identifying parents who are genuinely likely not to vaccinate their children. 

But that research itself confirms that many, many more parents say “yes” when asked if they are concerned that vaccines might have “serious side effects”—the sort of item featured in the NR et al. scale—than refrain from vaccinating their children.

What’s more, the NR et al. sample was not genuinely tailored to parents who have children in the age range for the MMR vaccine. 

The first MMR dose is administered at one year of age, and the second between ages 4 and 6.

The NR et al. parents had children “17 or younger.”

The mean age of the study respondents is not reported, but 80% were over 30, and 40% over 40.  So no doubt many were past the stage in life where they’d be making decisions about whether any “future” child should get the MMR vaccine.

What are survey respondents who aren’t genuinely reflecting on whether to vaccinate their children telling us when they say they “won’t”?

This is a question that CCP’s recent Vaccine Risk and Ad Hoc Risk Communication Study helps to answer.

When scales like the one featured in NR et al. are administered to members of the general public, they measure a more generic affective attitude toward vaccination.

The vast majority of the U.S. public has a very positive affective orientation toward vaccines.

An experiment like the one NR et al. conducted is instructive on how risk communication might influence that sort of general affective orientation. And what their experiment found is that there’s good reason to be concerned that the dominant, ad hoc, empirically uninformed style of risk communication (on display in coverage of their study) can in fact adversely affect that attitude.

That finding is consistent with the ones reported in the CCP study, which found that stories emphasizing the “public health crisis” trope cause people to grossly overestimate the extent to which parents in the U.S. are resisting vaccination of their children.

The CCP study also found that the equation of “vaccine hesitancy” with disbelief in evolution and skepticism about climate change—another popular trope—can create cultural polarization over vaccine safety among diverse people who otherwise all agree that vaccine benefits are high and their risks low.

That finding is closely related, I suspect, to the perverse effect that the NR et al. experiment produced in the self-reported “intent to vaccinate” responses of the small group of respondents in their sample who had a negative attitude toward vaccines.

The dynamic of motivated reasoning predicts that individuals will “push back” when presented with information that challenges an identity-defining belief.

There aren’t many individuals in U.S. society whose identity includes hostility to universal vaccination—they are outliers in every recognizable cultural group.

But it’s not surprising that they would express that belief with all the more vehemence when shown information asserting that vaccines are safe and effective and then immediately asked whether they’d vaccinate “future children.”

The NR et al. study is superbly well done and very important.

But the lesson it teaches is not that it is “futile” to try to communicate with concerned parents.

It’s that it is a bad idea to flood public discourse in a blunderbuss fashion with communications that state or imply that there is a “growing crisis of confidence” in vaccines that is “eroding” immunization rates.

It’s a good idea instead to use valid empirical means to formulate targeted and effective vaccine-safety communication strategies.

As indicated, there is in fact an effort underway to develop behaviorally validated measures for identifying parents who are most at risk of vaccine hesitancy (who make up a much smaller portion of the already relatively small portion of the population who express a “negative attitude” toward vaccines when responding to public opinion survey measures). With that sort of measure in hand, researchers can test counseling strategies (ones informed, of course, by existing research on what works in comparable areas) aimed precisely at the parents who would benefit from information.

The public health establishment needs to make clear that that sort of research merits continued, and expanded, support.

In addition, the public health establishment needs to play a leadership role in creating a shared cultural understanding—among journalists, advocates, and individual health professionals—that risk communication, like all other elements of public-health policy, must be empirically informed.

The NR et al. study furnishes an inspiring glimpse of how much value can be obtained from evidence-based methods of risk communication.

The reaction to the study underscores how much risk we face if we continue to rely on an ad hoc, evidence-free style of risk communication instead.

Wednesday
February 26, 2014

Geoengineering the science communication environment: the cultural plasticity of climate change risks part II

So … a couple of days ago I posted something on the topic of “geoengineering.”

I'm pretty fascinated by the advent of research and discussion of this new technology, which of course "refers to deliberate, large-scale manipulations of Earth’s environment designed to offset some of the harmful consequences of [greenhouse-gas induced] climate change."

For one thing, geoengineering presents a splendid, awe-inspiring pageant of human ingenuity. 

Consider David Keith’s idea, presented in an article published in the Proceedings of the National Academy of Sciences, of deploying a fleet of thermostatically self-regulating, mirror-coated, nanotechnology flying saucers, which would be programmed to assemble at the latitude and altitude appropriate to reflect back the precise amount of sunlight necessary to offset the global heating associated with human-caused CO2 emissions.

The only thing needed to make this the coolest (as it were) technological invention ever would be the addition of a force of synthetic-biology-engineered E. coli pilots, who would be trained to operate the nanotechnology flying saucers while also performing complex mathematical calculations in aid of computationally intensive tasks (such as climate modeling or intricate sports-betting algorithms) back on the surface of the earth!

But another reason I find geoengineering so fascinating is its potential to invert the cultural meanings of climate change risk.

This is what I focused on in my last post.

There I rehearsed the account that the “cultural theory of risk” gives for climate change conflict. “Hierarchical individualists” are (unconsciously) motivated to resist evidence of climate change because they perceive that societal acceptance of such evidence would justify restrictions on markets, commerce, and industry—activities they value, symbolically as well as materially.

“Egalitarian communitarians,” by the same logic, readily embrace the most dire climate-change forecasts because they perceive exactly the same thing but take delight in the prospect of radical limits on commerce, industry, and markets, which in their eyes are the source of myriad social inequities.

My point was that, if we accept this basic story (it’s too simple, even as an account of how cultural cognition works; but that’s in the nature of “models” & should give us pause only when the simplification detracts from rather than enhances our ability to predict and manage the dynamics of the phenomenon in question), then there’s no reason to view the valences of the cultural meanings attached to crediting climate change risk as fixed or immutable.  One could imagine a world in which crediting evidence of human-caused climate change and the risks it poses gratify hierarchical and individualistic sensibilities and threaten egalitarian communitarian ones.

Indeed, one could, in theory, make such a world with geoengineering.  Or make it simply by initiating a sufficiently serious and visible national discussion of it as one potential solution to the problems associated with global warming.

As I explained, geoengineering stands the cultural narrative associated with climate change on its head. Ordinarily, the message of climate change advocacy is “game over!” & “told you so!”: your acquisitive, market-driven forms of manipulation of the environment to suit your selfish desires are killing us and now must end!

The message of geoengineering, however, is “more of the same!” & “yes, we can!”: we’ve always managed to offset the environmental byproducts of commerce, industry, markets, etc. with more commerce and market-fueled ingenuity (see the advent of modern sewage treatment as a means of overcoming "natural limits" on population density in big cities)—well, the time is here to do it again!

By making a culturally affirming meaning available to hierarchical individualists, geoengineering reduces the psychic cost for them of engaging open-mindedly with evidence that human-caused climate change puts us in danger.

Of course, by attenuating the identity-affirming meaning that climate change now has for egalitarian communitarians—by suggesting that we needn’t go on a “diet” to counter the effects of our “planetary over-indulgence” because we have the option of “atmospheric liposuction” at our disposal!—geoengineering could well be expected to provoke a skeptical orientation in egalitarian communitarians, not only toward geoengineering but toward climate change science that implies the necessity and feasibility of conscious interventions to offset the impact of carbon emissions on the environment.

CCP did a study (to be published in Ann. Am. Acad. Pol. & Soc. Sci.) that tested these hypotheses.

In it, we instructed the subjects—nationally representative samples of 1500 US adults and 1500 English ones—to read a study on human-caused climate change. A composite of real studies appearing in Nature and the Proceedings of the National Academy of Sciences, the study presented evidence that CO2 dissipates from the atmosphere much more sluggishly than scientists had previously anticipated.

As a result, the composite study concludes, phasing in strict CO2 limits (450-600 ppm) will have less beneficial impact than had previously been predicted.  Indeed, even if carbon emissions ended today, there’d still be substantial detrimental impacts—in the form of massive submersion of highly populated coastal regions due to continuing sea-level rise, and famine-inducing droughts in interior regions due to shifting weather patterns.

We then tested our subjects’ evaluation of the validity of the study. For this purpose, we instructed them to indicate their level of agreement or disagreement with statements such as “the scientists who did the study were biased,” “computer models like those relied on in the study are not a reliable basis for predicting the impact of CO2 on the climate,” and “more studies must be done before policymakers rely on the findings” of the study.

Because these are the sorts of arguments typically advanced by climate skeptics, these items enabled us to form a “dismissiveness” scale that reflected how closed- or open-minded the subjects were in assessing this evidence of climate change.

We found, not surprisingly, that subjects disposed toward hierarchical and individualistic values—in both the U.S. and the English samples—were highly dismissive, while ones disposed toward egalitarian and communitarian values were highly receptive to the evidence presented in the composite study.

But that was in a control condition in which the subjects, before reading the composite study and indicating their views of its validity, read a story about a city meeting on a traffic-light proposal, a matter completely unrelated to climate change.

There were two other experimental conditions. In the “anti-pollution” condition, subjects read a news story that reported that expert scientists were demanding implementation of stricter carbon emission limits to offset the deleterious effects of climate change. In the “geoengineering” condition, in contrast, the subjects read a news story that reported that expert scientists were calling for more research on geoengineering in response to those same anticipated effects.

Logically, the information in these news stories is no more related to the validity of the climate-science study that the subjects were subsequently asked to read and evaluate than was the information in the control-condition news story on traffic lights: either the evidence on carbon dissipation is valid or it isn’t; its validity doesn’t depend on what we are going to do if it is—restrict carbon emissions all the more or consider geoengineering—and if the evidence is not valid, the issue is moot.

But psychologically, the cultural cognition thesis predicts that which condition the subjects were assigned to could matter. 

The subjects in the geoengineering condition were seeing climate change connected to cultural meanings—“more of the same” & “yes, we can!”—that are different from the usual “game over!” & “told you so!” ones, which the anti-pollution news story was geared to reinforcing.

Because the congeniality of the cultural meaning of information shapes how readily people engage with its content, we predicted that the hierarchical individualists in the geoengineering condition would respond much more open-mindedly to the information from the climate change study on carbon dissipation.

And that prediction turned out to be true. 

In addition, cultural polarization over the validity of the climate-change study was lower for both U.S. and English subjects in the geoengineering condition than in the anti-pollution condition, where polarization was actually larger for U.S. subjects than it was in the control.

But part of the reason that polarization was lower in the geoengineering condition was that egalitarian communitarians who read the geoengineering news story reacted less open-mindedly toward the climate-change study than their counterparts who first read the anti-pollution news story.

The egalitarian communitarians in the anti-pollution condition saw no tension between—indeed, likely perceived an affinity between—the dire conclusions of the study and the “game over!”/“told you so!” meanings that they attach to climate change.

But the conflict between those meanings and the narrative implicit in the "geoengineering" condition woke the egalitarian communitarians up to the CO2 dissipation study's potential policy implications: if CO2 reductions won't be enough to stave off disaster, then we are going to have to do something more.  

Primed to see that the “more” was geoengineering—“more of the same!”/“yes, we can!”—many egalitarian communitarian subjects pushed back on the premise, either adopting, or rejecting with less vehemence, the dismissive responses that climate skeptics typically express toward evidence of human-caused climate change.

In sum, by inverting the cultural meanings attached to such evidence, the geoengineering news story made the hierarchical individualists more inclined to believe and egalitarian communitarians more inclined to be skeptical of climate change. That's a pretty nice corroboration, I think, of the cultural cognition thesis!
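
For readers who want to see the shape of that analysis, here is a schematic sketch in Python. The data are simulated and the effect sizes invented; the point is only that “polarization” corresponds to the slope on cultural worldview, and the geoengineering effect shows up as a negative worldview-by-condition interaction.

```python
# Schematic sketch (simulated data, invented effect sizes) of testing whether
# experimental condition moderates the worldview -> dismissiveness slope.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

n = 3000
worldview = rng.normal(0, 1, n)  # hierarchy-individualism score (z-scored)
condition = rng.choice(["control", "antipollution", "geoengineering"], n)

# Assumed pattern: worldview predicts dismissiveness in every condition, but
# the slope (i.e., cultural polarization) is smallest under geoengineering.
slope = np.select(
    [condition == "control", condition == "antipollution", condition == "geoengineering"],
    [0.50, 0.60, 0.30],
)
dismissive = slope * worldview + rng.normal(0, 1, n)

df = pd.DataFrame({"worldview": worldview, "condition": condition, "dismissive": dismissive})
fit = smf.ols("dismissive ~ worldview * C(condition, Treatment('control'))", data=df).fit()

# A significantly negative worldview:geoengineering interaction coefficient
# means less polarization in that condition than in the control.
print(fit.summary().tables[1])
```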

I don’t think, however, that this result suggests the advent of geoengineering as subject of research and as an issue for public discussion will be a zero sum game for public engagement with climate science.

First, contrary to the warnings of some commentators, subjects exposed to the geoengineering information did not become less concerned about climate change. Overall, they became more concerned.

Second, the egalitarian communitarians in the geoengineering condition were less open-minded in their assessment of climate change evidence than those in the anti-pollution condition. But in absolute terms, they were still plenty open-minded—indeed, more open-minded, less dismissive—than hierarchical individualists in that very condition.

Third, the major impediment, I’m convinced, to constructive public engagement with climate science is not how much either side knows or understands scientific evidence of it.  It’s their shared apprehension that opposing positions on climate change are, in effect, badges of membership in and loyalty to competing cultural groups; that is the cue or signal that motivates members of the public to process information about climate change risks in a manner that is more reliably geared to affirming the position that predominates in their group than to converging on the best available evidence.

The key, then, is to clear the science communication environment of the toxin of antagonistic cultural meanings that now envelop the climate change issue.

The advent of public discussion of geoengineering, the CCP study implies, can help to achieve this desirable result by seeding public deliberations over climate change with meanings  congenial to a wider array of cultural styles.

 

Monday
February 24, 2014

Shockingly sad news . . . 

A model of models for the good life of being a scholar . . . .

Monday
February 24, 2014

Geoengineering & the cultural plasticity of climate change risk perceptions: Part I

Yesterday I posted a small section of a CCP paper, scheduled for publication in the Annals of the American Academy of Political & Social Science, that reports the results of a study on how emerging research on, and public discussion of, geoengineering might affect the science communication environment surrounding climate change.

I’ve been thinking of geoengineering again recently, mainly because on my trip to Cardiff University I got a chance to discuss public attitudes toward it—existing and anticipated—with Nick Pidgeon, who along with Adam Corner and other members of the Cardiff Understanding Risk Group, has been doing some great studies of this topic.

How the public will perceive geoengineering is fascinating for all kinds of reasons, but the one that I find the most intriguing is geoengineering’s inversion of the usual cultural meanings of climate change risk. 

According to the cultural cognition thesis, we should expect persons who are relatively hierarchical and individualistic to be climate change skeptics: crediting evidence of the dangers posed by human-caused climate change implies that we should be restricting commerce, industry, markets, and other forms of private orderings—activities of extreme value, symbolic as well as material, to people with these outlooks.

By the same token, we should expect persons who are egalitarian and communitarian to be highly receptive to evidence of the danger of climate change: because they already are morally suspicious of commerce, industry, and markets, to which they attribute unjust social disparities (actually, they might like to take a look at the disparities that existed in pre-market societies & figure out which ones were greater, but that’s another matter!), they find it congenial to see those activities as sources of danger that ought to be restricted.

This is the plain vanilla rendering of Douglas & Wildavsky’s “cultural theory of risk” (I don't actually buy it, to tell you the truth!)—and, indeed, Wildavsky, who died in 1993 (at the early age of 63), had already characterized global warming as “the mother of all environmental scares”:

Warming (and warming alone), through its primary antidote of withdrawing carbon from production and consumption, is capable of realizing the environmentalist’s dream of an egalitarian society based on rejection of economic growth in favor of a smaller population’s eating lower on the food chain, consuming a lot less, and sharing a much lower level of resources much more equally. 

But Wildavsky—a mainstream political liberal whose experience with the radical “free speech” movement at Berkeley left him obsessed with the “rise of radical egalitarianism”—puts a spin on climate change that contravenes the fundamental symmetry of the laws of cultural cognition. 

That is, he seems to imply that it’s only “egalitarian collectivists” who will be motivated to assign to evidence of climate change risks a significance biased in favor of their preferred way of life.

But if, as Douglas and Wildavsky so adamantly insisted in Risk and Culture, “[e]ach form of social life has its own typical risk portfolio”—if  all “people select their awareness of certain dangers to conform with a specific way of life,” and thus “each social arrangement elevates some risks to a high peak and depresses others below sight”—then there's no more reason to expect hierarchical individualists to form reliable perceptions of climate change risks than egalitarian communitarians.

Wildavsky would have come closer to conveying the logic of his and Douglas’s own position, then, if he had called global warming the “mother of all environmental risk-perception conflicts.”

If we follow the symmetry of cultural cognition out a bit further, moreover, we can see that there is in fact nothing inherently “egalitarian” in climate-change belief or inherently “individualistic” in climate-change skepticism.  

“Dangers are selected for public concern according to the strength and direction of social criticism,” we are told. But because what effect acknowledging a particular assertion of risk will have on the stock of competing ways of life is determined not by people’s “direct examination of physical evidence” but by their understanding of social meanings (those are what determine for them what the “physical evidence” signifies), all we can say is that in the context of some particular society’s “dialogue on how best to organize social relations,” acceptance of human-caused climate change just happens to be understood as indicting individualism and vindicating egalitarianism.

But that could change, surely!  

The case of geoengineering shows how. 

The argument for investigating its development—one forcefully advanced by both the U.S. National Academy of Sciences and the Royal Society—obviously presupposes both that human-caused climate change is happening and that it poses immense threats to human well-being.

But the cultural narrative of geoengineering is quite different from any of the other proposed responses. Whereas carbon-emission restrictions proclaim the inevitable limits of technological and commercial growth, geoengineering (much like nuclear power) asserts the potential limitlessness of the same.

“We are not like the stupid animals,” the geoengineering narrative says, “who reach the pinnacle/mode of the Malthusian curve and then come crashing down.” 

“We use our intelligence to shift the curve—deploying technology, fueled by commerce and markets, to successfully repel the very threats to our well-being that are the byproducts of commerce, markets, and technology! Brilliant!”

“It used to be said,” the geoengineering narrative continues, “that the natural population density of a city like, say, London, was shy of 4,000 persons per square mile—because at around that point people would inevitably die in droves from ingesting their own shit (literally!). But we invented modern systems of sewage and water treatment—we used our ingenuity to shift the curve—and now we can have cities (London: 12,000/sq. mi.; Sao Paulo: 20,000/sq. mi.) many, many times more dense than that!”

“Well,” the narrative concludes, “the time has come again to shift the curve, to use our ingenuity to handle the byproducts of our own ingenuity, to blast our shit into outer space so that we don’t choke on it! Let’s go!”

This is inspiring to the individualist.

It is demoralizing to the egalitarian.  The “lesson” of climate change, for him or her, is “game over," not “more of the same”; "we told you so!," not "Yes, we can!"

The answer to our “planetary over-indulgence” is a “diet,” not “atmospheric liposuction”!

And because the cultural narrative is demoralizing to the egalitarian, geoengineering is terrifying.

The risks from unforeseen and unforeseeable consequences are too high. After all, the climate is a classic “chaotic” system—one whose sheer complexity defies the sort of modeling that would have to be done to intelligently manage any geoengineering “fix.”

It will never ever work, and scientists like those in the NAS and Royal Society are being foolish for even proposing to investigate its risks and benefits. Indeed, it's dangerous even to discuss geoengineering, the mere mention of which threatens to dissipate the surging public demand in the U.S. and other industrialized countries to impose a carbon tax and enact other sorts of restrictions on fossil fuel use.

But what if the best available scientific evidence on climate change—including the inevitability of genuinely catastrophic climate impacts no matter what level of carbon mitigation world governments might agree to (including complete cessation of fossil fuel use tomorrow)—suggests that nothing short of geoengineering can stave off myriad disasters, including continuing rising sea levels, violent and erratic storm activity in various parts of the world, and famine-inducing droughts over much of the rest?

Who should we expect to be skeptical of that evidence? The egalitarian communitarian or the hierarchical individualist?

If in considering such evidence, the two could be observed to be trading places on whether the “scientists were biased,” “computer models could be trusted,” “the call for action is premature” etc., would that not be a nice little proof of the cultural theory of risk?

Tune in "tomorrow" & I’ll show you what the results of such an experiment look like!


Sunday
February 23, 2014

Three models of risk perception -- & their significance for self-government . . .

From Geoengineering and Climate Change Polarization: Testing a Two-channel Model of Science Communication, Ann. Am. Acad. Pol. & Soc. Sci. (in press).

Theoretical background

Three models of risk perception

The scholarly literature on risk perception and communication is dominated by two models. The first is the rational-weigher model, which posits that members of the public, in aggregate and over time, can be expected to process information about risk in a manner that promotes their expected utility (Starr 1969). The second is the irrational-weigher model, which asserts that ordinary members of the public lack the ability to reliably advance their expected utility because their assessment of risk information is constrained by cognitive biases and other manifestations of bounded rationality (Kahneman 2003; Sunstein 2005; Marx et al. 2007; Weber 2006).

Neither of these models cogently explains public conflict over climate change—or a host of other putative societal risks, such as nuclear power, the vaccination of teenage girls for HPV, and the removal of restrictions on carrying concealed handguns in public. Such disputes conspicuously feature partisan divisions over facts that admit of scientific investigation. Nothing in the rational-weigher model predicts that people with different values or opposing political commitments will draw radically different inferences from common information. Likewise, nothing in the irrational-weigher model suggests that people who subscribe to one set of values are any more or less bounded in their rationality than those who subscribe to any other, or that cognitive biases will produce systematic divisions of opinion among such groups.

One explanation for such conflict is the cultural cognition thesis (CCT). CCT says that cultural values are cognitively prior to facts in public risk conflicts: as a result of a complex of interrelated psychological mechanisms, groups of individuals will credit and dismiss evidence of risk in patterns that reflect and reinforce their distinctive understandings of how society should be organized (Kahan, Braman, Cohen, Gastil & Slovic 2010; Jenkins-Smith & Herron 2009). Thus, persons with individualistic values can be expected to be relatively dismissive of environmental and technological risks, which if widely accepted would justify restricting commerce and industry, activities that people with such values hold in high regard. The same goes for individuals with hierarchical values, who see assertions of environmental risk as indictments of social elites. Individuals with egalitarian and communitarian values, in contrast, see commerce and industry as sources of unjust disparity and symbols of noxious self-seeking, and thus readily credit assertions that these activities are hazardous and therefore worthy of regulation (Douglas & Wildavsky 1982). Observational and experimental studies have linked these and comparable sets of outlooks to myriad risk controversies, including the one over climate change (Kahan 2012).

Individuals, on the CCT account, behave not as expected-utility weighers—rational or irrational—but rather as cultural evaluators of risk information (Kahan, Slovic, Braman & Gastil 2006). The beliefs any individual forms on societal risks like climate change—whether right or wrong—do not meaningfully affect his or her personal exposure to those risks. However, precisely because positions on those issues are commonly understood to cohere with allegiance to one or another cultural style, taking a position at odds with the dominant view in his or her cultural group is likely to compromise that individual’s relationship with others on whom that individual depends for emotional and material support. As individuals, citizens are thus likely to do better in their daily lives when they adopt toward putative hazards the stances that express their commitment to values that they share with others, irrespective of the fit between those beliefs and the actuarial magnitudes and probabilities of those risks.

The cultural evaluator model takes issue with the irrational-weigher assumption that popular conflict over risk stems from overreliance on heuristic forms of information processing (Lodge & Taber 2013; Sunstein 2006). Empirical evidence suggests that culturally diverse citizens are indeed reliably guided toward opposing stances by unconscious processing of cues, such as the emotional resonances of arguments and the apparent values of risk communicators (Kahan, Jenkins-Smith & Braman 2011; Jenkins-Smith & Herron 2009; Jenkins-Smith 2001).

But contrary to the picture painted by the irrational-weigher model, ordinary citizens who are equipped and disposed to appraise information in a reflective, analytic manner are not more likely to form beliefs consistent with the best available evidence on risk. Instead they often become even more culturally polarized because of the special capacity they have to search out and interpret evidence in patterns that sustain the convergence between their risk perceptions and their group identities (Kahan, Peters, Wittlin, Slovic, Ouellette, Braman & Mandel 2012; Kahan 2013; Kahan, Peters, Dawson & Slovic 2013).

Two channels of science communication

The rational- and irrational-weigher models of risk perception generate competing prescriptions for science communication. The former posits that individuals can be expected, eventually, to form empirically sound positions so long as they are furnished with sufficient and sufficiently accurate information (e.g., Viscusi 1983; Philipson & Posner 1993). The latter asserts that the attempts to educate the public about risk are at best futile, since the public lacks the knowledge and capacity to comprehend; at worst such efforts are self-defeating, since ordinary individuals are prone to overreact on the basis of fear and other affective influences on judgment. The better strategy is to steer risk policymaking away from democratically accountable actors to politically insulated experts and to “change the subject” when risk issues arise in public debate (Sunstein 2005, p. 125; see also Breyer 1993).

The cultural-evaluator model associated with CCT offers a more nuanced account. It recognizes that when empirical claims about societal risk become suffused with antagonistic cultural meanings, intensified efforts to disseminate sound information are unlikely to generate consensus and can even stimulate conflict.

But those instances are exceptional—indeed, pathological. There are vastly more risk issues—from the hazards of power lines to the side-effects of antibiotics to the tumor-stimulating consequences of cell phones—that avoid becoming broadly entangled with antagonistic cultural meanings. Using the same ability that they reliably employ to seek and follow expert medical treatment when they are ill or expert auto-mechanic service when their car breaks down, the vast majority of ordinary citizens can be counted on in these “normal,” non-pathological cases to discern and conform their beliefs to the best available scientific evidence (Keil 2010).

The cultural-evaluator model therefore counsels a two-channel strategy of science communication. Channel 1 is focused on information content and is informed by the best available understandings of how to convey empirically sound evidence, the basis and significance of which are readily accessible to ordinary citizens (e.g., Gigerenzer 2000; Spiegelhalter, Pearson & Short 2011). Channel 2 focuses on cultural meanings: the myriad cues—from group affinities and antipathies to positive and negative affective resonances to congenial or hostile narrative structures—that individuals unconsciously rely on to determine whether a particular stance toward a putative risk is consistent with their defining commitments. To be effective, science communication must successfully negotiate both channels. That is, in addition to furnishing individuals with valid and pertinent information about how the world works, it must avail itself of the cues necessary to assure individuals that assenting to that information will not estrange them from their communities (Kahan, Slovic, Braman & Gastil 2006; Nisbet 2009).

References 

Breyer, S.G. Breaking the Vicious Circle: Toward Effective Risk Regulation, (Harvard University Press, Cambridge, Mass., 1993).

Douglas, M. & Wildavsky, A. Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers (University of California Press, Berkeley, 1982).

Gigerenzer, G. Adaptive thinking: rationality in the real world, (Oxford University Press, New York, 2000).

Jenkins-Smith, H. Modeling stigma: an empirical analysis of nuclear waste images of Nevada. in Risk, media, and stigma: Understanding public challenges to modern science and technology (eds. Flynn, J., Slovic, P. & Kunreuther, H.) 107-132 (Earthscan, London; Sterling, VA, 2001).

Jenkins-Smith, H.C. & Herron, K.G. Rock and a Hard Place: Public Willingness to Trade Civil Rights and Liberties for Greater Security. Politics & Policy 37, 1095-1129 (2009).

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) (Springer London, 2012).

Kahan, D.M. Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Slovic, P., Braman, D. & Gastil, J. Fear of Democracy: A Cultural Critique of Sunstein on Risk. Harvard Law Review 119, 1071-1109 (2006).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahneman, D. Maps of bounded rationality: Psychology for behavioral economics. Am Econ Rev 93, 1449-1475 (2003).

Keil, F.C. The feasibility of folk science. Cognitive science 34, 826-862 (2010).

Lodge, M. & Taber, C.S. The rationalizing voter (Cambridge University Press, Cambridge ; New York, 2013).

Marx, S.M., Weber, E.U., Orlove, B.S., Leiserowitz, A., Krantz, D.H., Roncoli, C. & Phillips, J. Communication and mental processes: Experiential and analytic processing of uncertain climate information. Global Environ Chang 17, 47-58 (2007).

Nisbet, M.C. Communicating Climate Change: Why Frames Matter for Public Engagement. Environment 51, 12-23 (2009).

Philipson, T.J. & Posner, R.A. Private choices and public health, (Harvard University Press, Cambridge, Mass., 1993).

Spiegelhalter, D., Pearson, M. & Short, I. Visualizing Uncertainty About the Future. Science 333, 1393-1400 (2011).

Starr, C. Social Benefit Versus Technological Risk. Science 165, 1232-1238 (1969).

Sunstein, C.R. Laws of fear: beyond the precautionary principle, (Cambridge University Press, Cambridge, UK ; New York, 2005).

Sunstein, C.R. Misfearing: A reply. Harvard Law Review 119, 1110-1125 (2006).

Viscusi, W.K. Risk by choice: regulating health and safety in the workplace, (Harvard University Press, Cambridge, Mass., 1983).

Weber, E.U. Experience-based and description-based perceptions of long-term risk: Why global warming does not scare us (yet). Climatic Change 77, 103-120 (2006).

 

Thursday
Feb202014

Democracy and the science communication environment (lecture synopsis and slides)

Gave a talk earlier in the week at Cardiff University, the last stop on my fun "cross-cultural cultural cognition road trip." Cardiff's Understanding Risk Research Group features a '27-Yankees-equivalent lineup of risk perception scholars--including Nick Pidgeon, Wouter Poortinga, Adam Corner & Lorraine Whitmarsh (I decided not to use that metaphor during my talk)--who are surrounded by top-notch sociologists studying technology and society. They also have a high-energy group of science communication scholars. I had an amazing few days there & felt very sad when the time came to leave!
 
Slides from my talk are here. I can't quite remember how I put things, but it was something like this . . . .

0. What is this “science of science communication”?  The science of science communication can be understood as a remedy for two fallacies.

The first is res ipsa loquitur (“the thing speaks for itself”): the validity of valid science is manifest, making scientific study of it neither interesting nor necessary.

The second is ab uno disce omnes (“from one, learn all”): the scientific knowledge necessary to enable a doctor to meaningfully advise a patient on a complicated treatment decision is the same as the knowledge necessary to enable a science journalist to edify a curious member of the public, an empirical researcher to advise a policymaker, an educator to teach a high school student the theory of evolution, etc.

My remarks are mainly directed at the ab uno fallacy. I want to describe the distinctive species of SSC that is most likely to evade comprehension if one makes the mistake of thinking it’s only one thing. It is also the one that is arguably most important for the well-being of democratic society. 

The aim of this species of SSC is to protect the science communication environment.

1. The puzzle of cultural polarization over risk

Members of the public in the U.S. are highly divided on all manner of fact relating to climate change. So are members of the public in many other nations, including the UK.

There are other risks—from GM foods to nuclear power to gun ownership to vaccination against infection by HPV or other contagious diseases—that fracture the members of some of these societies but not others.

Not to be struck by the puzzling nature of this phenomenon is to admit a deficit in curiosity. It’s not surprising at all that people with different values would disagree about what to do about a societal risk like climate change or gun possession. But there’s nothing in how much one weights equality relative to wealth, or security relative to liberty, that determines whether the earth is heating up as a result of human activity or whether permitting citizens to carry concealed handguns in public deters violent assaults.

It’s not surprising either that ordinary members of the public would disagree with one another on facts the nature of which turns on evidence as technically complex as that surrounding climate change, nuclear power, or gun control. 

But if complexity were the source of the problem, we’d expect disagreement to be randomly distributed with respect to cultural and political values, and to abate as individuals become progressively more comprehending of science. 

Not so: on the contrary, the most science comprehending members of the public are the most culturally polarized! (At least in the U.S.; I’m not aware of research of this sort with non-US samples & would be grateful to anyone who fills in this gap in my knowledge, if it is one).

What’s the explanation for such a peculiar distribution of beliefs—and on facts that not only admit of investigation by empirical means but that have in fact been investigated by expert empirical methods?

2. The cultural cognition thesis

The answer (or certainly a very large part of it) is cultural cognition.

Cultural cognition is a species of motivated reasoning, which refers to the tendency of people to conform their assessment of all manner of information (empirical data, logical arguments, brute sense impressions) to some goal or interest independent of forming a correct judgment. 

The cultural cognition thesis holds that people can be expected to conform their perceptions of risk and like facts to the stake they have in maintaining their connection to and status within important affinity groups.

The nature of these commitments can be measured by various means, including right-left political outlooks, but in our research we ordinarily do so with scales patterned on the “worldview” dimensions associated with Mary Douglas’s “group-grid” framework.
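
For a concrete sense of how such scales work, here is a minimal sketch of group-grid scoring in Python. The item names, responses, and cutoffs are all invented for illustration; the actual CCP instruments use validated multi-item batteries.

```python
# Hypothetical sketch of "group-grid" worldview scoring (invented items and
# responses; the real scales use validated multi-item batteries).

# Responses on a 1-6 disagree/agree scale; "_R" marks reverse-coded items.
HIER_ITEMS = {"hier1": False, "hier2_R": True, "hier3": False}      # hierarchy-egalitarianism
INDIV_ITEMS = {"indiv1": False, "indiv2_R": True, "indiv3": False}  # individualism-communitarianism

def scale_score(responses, items, scale_max=6):
    # Average the item responses, reverse-coding where flagged; higher scores
    # indicate more hierarchical (or more individualistic) outlooks.
    vals = [(scale_max + 1 - responses[k]) if rev else responses[k]
            for k, rev in items.items()]
    return sum(vals) / len(vals)

def worldview(responses, midpoint=3.5):
    h = scale_score(responses, HIER_ITEMS)
    i = scale_score(responses, INDIV_ITEMS)
    grid = "hierarchical" if h > midpoint else "egalitarian"
    group = "individualist" if i > midpoint else "communitarian"
    return f"{grid} {group}", round(h, 2), round(i, 2)

resp = {"hier1": 5, "hier2_R": 2, "hier3": 6, "indiv1": 5, "indiv2_R": 1, "indiv3": 6}
print(worldview(resp))  # ('hierarchical individualist', 5.33, 5.67)
```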

3. Some evidence

Studies conducted by myself and my collaborators have generated various forms of evidence in support of the cultural cognition thesis—and against rival theories that are often used to explain political conflict over societal risks.

a. Cultural cognition of scientific consensus. In one study, we performed an experiment that showed how cultural cognition influenced formation of public perceptions of what expert scientists believe. The results showed that how readily individuals of diverse cultural outlooks identified a scientist as an “expert” on climate change, nuclear power, or gun control depended on whether that scientist was depicted as espousing a position consistent with the one that prevails in the individuals’ cultural groups.

If individuals selectively credit and dismiss evidence of “expert” opinion in this fashion, they will become culturally polarized over what scientific consensus is on disputed issues.  And, indeed, the study found that in all cases the vast majority of subjects perceived that “scientific consensus” on the relevant issue—climate change, nuclear power, and gun possession—was consistent with the one that prevailed in their cultural group.

The study findings were not only consistent with the cultural cognition thesis, but also inconsistent with two alternatives.  One of these attributes political conflict over societal risks to one or another group’s hostility to science. In fact, no group subscribed to a position that it perceived to be contrary to prevailing scientific opinion.

The second alternative explanation sees one or another group as more attuned to scientific consensus than its rivals. But in fact, all groups were equally likely to view as the “consensus” among expert scientists the position contrary to the one endorsed as the “consensus” position by the U.S. National Academy of Sciences.

b. “Feeling” the heat—and the hurricanes, floods, tornados etc.  A common theme—indeed, the dominant one among commentators who derive their explanations from syntheses of the general literature rather than from original empirical research—attributes popular conflict over climate change to the public’s overreliance on heuristic, “system 1” as opposed to more reflective, dispassionate “system 2” information processing.

Those who advance this thesis typically predict that individuals will begin to revise upward their perception of the seriousness of climate change risks as they experience climate-change impacts first hand.  “Feeling” climate change, it is argued, will create the emotionally vivid impression that those who form their risk perceptions heuristically require in order to start taking climate change seriously.

This prediction is also contrary to the evidence. 

It’s true that individuals’ perceptions of climate-change risk correlate with their perception that temperatures in their area have been increasing in recent years. But their perceptions of recent local temperatures are not predicted by what those temperatures have actually been.

Rather, they are predicted by their cultural outlooks, suggesting that individuals selectively attend to or recall weather extremes in patterns that reflect their groups’ position on climate change.

Nor do individuals appear to uniformly revise their perception of climate-change risks as they experience significant extreme-weather hardships. A CCP study of residents of southeast Florida found that the number of times a person had been forced to evacuate his or her residence, had been deprived of access to drinking water, had suffered property damage, etc. as a result of extreme weather or flooding had a very modest positive impact on the perceived risk of climate change for egalitarian communitarians—the individuals most culturally predisposed to credit evidence of climate change—but none on hierarchical individualists—those most culturally predisposed to dismiss such evidence.

In other words, people don’t “believe” in climate change when they “see” it; they see it only when they already believe it.

Cultural cognition predicts this—although so does elementary logic, since individuals who experience such events can’t “see” or “feel” the cause of them. What they see extreme weather as evidence of (climate change, tolerance of gay marriage, nothing in particular, etc.) necessarily depends on their assent to some account of how the world works that they are not themselves in a position to verify. And that’s where cultural cognition comes in.

c. Motivated system 2 reasoning. The popular “thinking fast, thinking slow” account of climate-change controversy also implies that the members of the public most disposed to use reflective “system 2” reasoning can be expected to form perceptions of climate risk more in line with scientific consensus. 

Again, the evidence does not bear this claim out.  In fact, the members of the public most disposed to use reflective “system 2” reasoning are the ones who are the most polarized.

That’s what the cultural cognition thesis tells us to expect.  Those who possess the skills and habits of mind necessary to critically evaluate complex arguments and data have more tools at their disposal to fit their assessments of evidence to the beliefs that are predominant in their identity-defining groups.

4. A polluted science communication environment

The spectacle of intense, persistent political conflict can easily distract us from the state of public opinion on the vast run of facts addressed by decision-relevant science. The number of risk issues that divide members of the public along cultural lines is infinitesimal in relation to the number that don’t but could.  There’s no meaningful level of political contestation over the health risks of unpasteurized milk, medical x-rays, high-power transmission lines, fluoridated water, etc. On these issues, moreover, culturally diverse individuals do tend to converge on the best-available evidence as their capacity for science comprehension increases.

The reason that these issues do not provoke controversy, moreover, is not that individuals understand the scientific evidence on the relevant risks more completely than they understand the evidence on climate change or nuclear power or the HPV vaccine or gun control.

Individuals (including scientists) align themselves appropriately with a body of decision-relevant science much vaster than they could be expected to comprehend or verify for themselves. They achieve this feat by the exercise of a reliable faculty for recognizing insights that originate in the methods that science uses to discern the truth.

Their everyday interactions with others who share their cultural worldviews are the natural domain for the use of this faculty.  Individuals spend most of their time with others who share their values; they can exchange information with them readily, without the friction that might attend interactions with individuals whose outlooks on life differ fundamentally from their own; and they are more able to read those with whom they share defining commitments, and thus to distinguish those of their number who know what they are talking about from those who don’t.

All the various affinity groups within which individuals exercise their knowledge-recognition faculties are amply stocked with people high in science comprehension, and all are fully equipped with high-functioning processes for transmitting to their members what has become collectively known through science. So while admittedly (even regrettably) insular, the ordinary interaction of ordinary individuals with those who share their cultural worldviews generally succeeds in aligning individuals’ beliefs with the best available evidence relevant to the decisions they must make in their personal and collective lives.

This process breaks down only in the rare situation when positions on particular issues become entangled in antagonistic cultural meanings, effectively transforming them into badges of membership in and loyalty to one or another competing group. At that point, the stake that ordinary individuals have in forming and persisting in beliefs consistent with others in their group will dominate the stake they have in forming beliefs that reflect what’s known to science: what she personally believes—right or wrong—about climate change, nuclear power, and other societal risks won’t have any impact on the level of risk she or anyone else faces; the formation of a belief at odds with the one that predominates in her group, however, threatens to estrange her from those on whom her welfare—material and psychic—depends.

These antagonistic cultural meanings are a form of pollution in the science communication environment.  They literally disable the ordinarily reliable faculty ordinary individuals rely on to discern what’s known by science.

Engaging information in a manner that reflects their individual interest in forming and persisting in group-convergent beliefs, diverse citizens are less likely to converge on the best available evidence relevant to the health and well-being of them all.

The factual presuppositions of policy choices having become symbols of opposing visions of the best life, debates over risk regulation become the occasion for illiberal forms of status competition between competing cultural groups.

This polluted science communication environment is toxic for liberal democracy.

5. The science of #scicomm environment protection

The entanglement of positions on societal risk in culturally antagonistic meanings is not a consequence of immutable natural laws or historical processes.  Specific, identifiable events—ones originating in accident and misadventure as often as strategic behavior—steer putative risk sources down this toxic path. 

By empirically investigating why a putative risk source (e.g., mad cow disease or GM foods) took this route in one nation but not another, or why two comparable risk sources (the HPV vaccine and the HBV vaccine) travelled different paths in a single nation (the U.S.), the science of science communication enables us to understand the influences that transform policy-relevant facts into divisive markers of group identity.

The same methods, moreover, can be used to control such influences.  They can be used to forecast the likely development of such conflicts in time for actors in government and civil society alike to act to avoid their occurrence. They can also be used to formulate and test strategies for disentangling positions from antagonistic meanings where such preventive measures fail.

The vulnerability of risk regulation to cultural contestation is not a consequence of one or another group’s hostility to science, of citizens’ “bounded rationality,” or of some inherent drive or appetite on the part of competing groups to impose a sectarian orthodoxy on society.

It is the predictable but manageable outgrowth of the same conditions of political liberty and social pluralism that make liberal democracy distinctively congenial to the advance of scientific knowledge.

By using the hallmark methods of science to protect the science communication environment, we can assure our enjoyment of the unprecedented knowledge and freedom that are the hallmarks of liberal democracy.

 

Saturday
Feb152014

Don't be a science miscommunicator's dope (or dodo)

I've blogged about how the NRA uses the expressive "rope-a-dope" tactic to lure gun-control proponents into a style of advocacy that intensifies cultural antagonism and thus deepens public resistance to engaging sound empirical evidence.

But the same tactic is used--the same trap laid--by enemies of constructive public engagement with decision-relevant science in other areas. Randy Olson's Flock of Dodos is a brilliant, and brilliantly entertaining, demonstration of the dynamic at work in the evolution debate.

The CCP Vaccine Risk Perception and Ad Hoc Risk Communication report warns risk communicators to avoid the "rope-a-dope" trap when engaging propagators of vaccine misinformation:

4. Risk communicators and advocates should be wary of the expressive “rope-a-dope” trap.

Cultural contestation over risks or other facts that admit of scientific inquiry is inherently disruptive of the processes by which ordinary citizens come to know what is known to science (Bolsen & Druckman 2013; Kahan 2013a). When positions become conspicuously identified with membership in identity-defining affinity groups, diverse individuals will not only be exposed disproportionately to information that reflects the position that predominates in their groups. They will also experience psychic pressures that motivate them to use their critical reasoning dispositions to persist in those positions in the face of contrary evidence (Kahan, Peters et al. 2013). For this reason, polarization will be even more intense among members of these groups whose science comprehension capacities are greatest (Kahan 2013b; Kahan, Peters et al. 2012). Because these individuals understandably play a key role in certifying what is known to science within their groups, their divisions will even more deeply entrench other group members’ commitment to the position that predominates among their peers.

Groups intent on promoting cultural polarization can actually use this dynamic to their advantage. By engaging in provocative, culturally partisan and indeed often purely symbolic attacks on positions they disagree with, interest groups can provoke their opponents into denouncing them and their positions in terms that are similarly partisan, recriminatory, and contemptuous. The spectacle of dramatic conflict is what transmits to ordinary citizens—most of whom are largely uninterested in politics and lacking strong partisan sensibilities (Zaller 1992)—that the issue in question is one on which competing positions are badges of group membership and loyalty. That signal benefits the sponsors of group conflict. Indeed, the influence that open conflict exerts on members of the opposing groups will be much stronger than any influence the sponsors of such conflict could have generated by acting alone, not to mention much stronger than the content of the arguments that either side is making.

Vaccine-risk communicators should be wary of this trap, which has been used effectively against advocates of climate science (Pielke 2013) and gun control (Kahan 2013c). Responding to misinformation necessarily elevates the profile of the misinformers. It also creates a deliberative atmosphere in which culturally partisan advocates (some out of innocent exuberance, but others out of a motivation to assimilate vaccine-risk communication into a broader portfolio of publicly arousing issues) will predictably resort to divisive attacks, ones akin, say, to those that inform the “anti-science” trope.

Conspicuous instances of conflict among groups whose members are associated with competing styles and who resort to culturally assaultive idioms are what generates in the minds of ordinary members of the public the impression that disputed positions are aligned with membership in competing groups. It was likely because so many parents of diverse outlooks learned of the HPV vaccine from exchanges like these—as opposed to exchanges with pediatricians or other health experts—that that vaccine triggered a volume of controversy experienced by no other universal childhood or adolescent vaccine, including the HBV vaccine, which also protects people from a sexually transmitted disease and which is widely included in the schedule of vaccinations required for school enrollment in the vast majority of states (Kahan 2013a).

Steering childhood vaccines clear of the risk of this disorienting form of conflict certainly does not mean that misinformation should routinely be ignored. But it does mean that risk communicators should make a careful assessment of the need to respond and, where there is such a need, of how to present corrective information in a manner that is free of resonances that convey cultural partisanship.

References

Bolsen, T., Druckman, J. & Cook, F.L. The Effects of the Politicization of Science on Public Support for Emergent Technologies. Institute for Policy Research Northwestern University Working Paper Series (2013). 

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013a).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013b).

Kahan, D.M. The NRA’s "Expressive-Rope-a-Dope-Trick". in Cultural Cognition Project Blog (Sept. 3, 2013c).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Zaller, J.R. The Nature and Origins of Mass Opinion (Cambridge University Press, Cambridge, 1992).

Friday
Feb142014

Culture, rationality, and the tragedy of the science communications commons (lecture synopsis and slides)

Enjoyed the privilege and pleasure of delivering a lecture at the vibrant,  bustling University of Nottingham last night. The culture that I and the audience members—students and faculty from the university and curious, critical-thinking members of the larger community—share creates an affinity between us that makes us more like one another than either of us is like most of the members of our respective societies. But of course the U.S. and U.K. both enjoy public cultures that enable those who see pursuit of knowledge and exchange of ideas as the best life--a truly peculiar notion in the eyes of the vast majority--to live it. Are we not morally obliged to reciprocate this benefit? 

I wish I had spoken for less time so that I could have engaged my friends in discussion for longer.  But slides here, and a reconstruction of my fuzzy recollection of what I said below.  

0. The science communication problem.  The science communication problem refers to the failure of valid, compelling, and accessible scientific evidence to dispel public conflict over risks and other policy-relevant facts to which that evidence applies. The climate change controversy is the most conspicuous instance of this phenomenon but is not the only one: historically nuclear power and chemical pesticides generated conflicts between expert and public understandings of risk; today disputes over GM foods in Europe and the HPV vaccine in the U.S. feature forms and levels of political controversy over facts that admit of empirical investigation as well.

Of course, no one should find it surprising that risk regulation and like forms of science-informed policymaking are politically contentious. Facts do not determine what to do; that depends on judgments of value, which naturally, appropriately vary among reasoning people in a free society. 

But values don’t determine facts either.  The answer to the question whether the earth’s temperature has increased in recent decades as a result of human activity turns on empirical evidence the proper understanding of which is the same whether one is an “individualist” or an “egalitarian,” a “liberal” or a “conservative,” a  “Republican” or a “Democrat.”

Accordingly, whatever position one thinks the best evidence supports, one should be puzzled by the science communication problem.  Indeed, one should be puzzled even if one thinks the best available evidence doesn’t clearly support any particular position: there’s no reason why people of diverse values should be unable to recognize that, much less for them to form positions in such circumstances that so strongly correlate with their views about the best way to live. 

So what explains the science communication problem? And what, if anything, can be done about it?

I will describe evidence relating to two hypothesized explanations for the science communication problem, and then advance a set of normative and prescriptive claims based on what I think (for the time being, of course) is the account that the evidence most compellingly supports.

1. 2 hypotheses & some evidence.  The dominant account of the science communication problem among both the academic and the popular commentators (including the many popular commentators who pose as scholarly ones) is the “public irrationality thesis” (PIT).  PIT is related to the often-derided “knowledge deficit” theory—a position I’m not actually sure any serious scholar has ever advanced—but in fact puts more emphasis on deficits in the public’s capacity to give proper effect to scientific evidence of risk. Building on Kahneman’s popularization of the “system 1/system 2” conception of dual-process reasoning, PIT attributes public controversy over climate change and other societal risks to the public’s excessive reliance on unconscious, affect-driven heuristics (“system 1”) and its inability to engage in the conscious, effortful, analytic form of reasoning (“system 2”) that characterizes expert risk analysis.

If PIT proponents were trying to connect their understanding to the evolving empirical evidence on public risk perceptions, they’d surely be qualifying their incessant, repetitious, formulaic espousal of it. Those members of the public who display the greatest degree of “system 2” reasoning ability are no more likely to hold views consistent with scientific consensus. Indeed, they are even more likely to be culturally and ideologically polarized than members of the public who are most disposed to use “system 1” heuristic forms of reasoning.

A second explanation for the science communication problem is the “cultural cognition thesis” (CCT).  CCT posits that the stake individuals have in their status in affinity groups whose members share basic understandings of the best life can be expected to interact with the various psychological processes by which they make sense of evidence of risk.  Supporting evidence includes studies showing that individuals much more readily perceive scientists to be “experts” worthy of deference on disputed societal risks when those scientists support than when they oppose the position that is predominant in individuals’ cultural group.

This selectivity can be expected to generate diverging perceptions of what expert consensus is on disputed risks.  And, indeed, empirical evidence confirms this prediction.  No cultural group believes that the position that is dominant in its group is contrary to scientific consensus—and across the run of disputed societal risks, all of the groups can be shown to be poorly informed on the state of expert opinion.

The magnification of polarization associated with the disposition to engage in “system 2” forms of information processing also fits CCT.  Individuals who are adept at engaging empirical evidence have a resource that those who must rely more on “system 1” substitutes lack for ferreting out evidence that supports their group’s position and rationalizing away the evidence that doesn’t.

2. The tragedy of the science communications commons. PIT, then, has matters essentially upside down. The source of the science communication problem is not too little rationality on the part of the public but rather too much.  The behavior of an ordinary individual as a consumer, a voter, or an advocate, etc., can have no material impact on the level of risk that person or anyone else faces from climate change. But if he or she forms a position on that issue that is out of keeping with the one that predominates in that person's group, he or she faces a considerable risk of estrangement from communities vital to his or her psychic and material well-being.  Under these conditions, a rational actor can be expected to attend to information in a manner that is geared more reliably to forming group-congruent than science-congruent risk perceptions.  And those who are highest in critical reasoning dispositions will do an even better job than those whose “bounded rationality” leaves them unable to recognize the evidence that supports their groups’ position or to resist the evidence that undermines it.

But as individually rational as this form of information processing is, it is collectively irrational for everyone to engage in it simultaneously. For in that case, the members of a self-governing society are less likely to converge, or to converge as quickly as they otherwise would, on the best available evidence.

Yet even that won’t make it any more rational for an individual to attend to information in a manner reliably geared to forming science- as opposed to group-congruent beliefs—because, again, nothing he or she does based on a “correct” understanding will make any difference anyway.

This misalignment of individual and collective interests in the formation of risk perceptions consistent with the best available evidence is the tragedy of the science communications commons.
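
The logic can be made concrete with a bit of expected-utility arithmetic. Here is a minimal sketch; every number in it is invented purely for illustration, not an empirical estimate:

```python
# Stylized expected-utility arithmetic behind the "tragedy" (all numbers
# invented for illustration; nothing here is an empirical estimate).

p_pivotal = 1e-9           # chance one person's belief changes the collective outcome
collective_benefit = 1e6   # value to her of society getting the answer right
social_cost = 100.0        # cost of estrangement from her affinity group

# Payoff of forming a truth-congruent belief at odds with her group:
eu_truth = p_pivotal * collective_benefit - social_cost   # ~ -100

# Payoff of sticking with the group-congruent belief:
eu_group = 0.0

print(eu_truth < eu_group)  # True: group-congruent beliefs dominate for the
                            # individual, whatever the collective consequences
```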

3. A polluted science communication environment. The signature attributes of the science communication problem—the correlation between perceptions of risk and group-defining values, and the magnification of this effect by greater reasoning proficiency—are pathological.  The pattern is not only harmful, but unusual.  The number of societal risks that reflect this pattern relative to the number that do not is tiny.

In the cases in which diverse members of the public converge on the best available evidence, the reason is not that they genuinely comprehend that evidence. Individuals must, not only to live well but simply to live, accept as known by science much more than they could ever make sense of, much less verify, on their own. 

Ordinary individuals manage to align themselves appropriately with decision-relevant science essential to their individual and collective well-being not by becoming experts in substantive areas of knowledge but by becoming experts in identifying who knows what about what.  Nullius in verba—or “take no one’s word for it,” the motto of the Royal Society—is charming but silly if taken literally.  What’s essential is to take the word only of those whose knowledge has been attained by the methods of ascertaining knowledge distinctive of science.

The remarkable ability of ordinary members of the public—ones of diverse reasoning dispositions as well as diverse values—to reliably identify who knows what about what breaks down, however, when positions on issues become entangled in meanings that transform them into symbols of group identity and loyalty.  At that point, the stake individuals have in forming group-congruent beliefs will dominate the stake they have in forming science-congruent ones.

Such meanings, then, are a kind of pollution in the science communication environment. They disable the normally reliable faculties that individuals use to ascertain what is known to science.

4. “. . . a new political science . . .” (a) Risks are not born with antagonistic cultural meanings but rather acquire them through one or another set of events that might well have turned out otherwise.

It wasn’t inevitable, for example, that the HPV vaccine would acquire the divisive association with contested norms on gender, sexuality, and parental autonomy that polarized opposing groups’ perceptions of its risks and benefits in the U.S. The HBV vaccine also confers immunity from a sexually transmitted disease that causes cancer (hepatitis-b), and the CDC’s recommendation to add it to the schedule of vaccinations required as a condition of middle school enrollment generated no meaningful controversy among culturally diverse citizens—over 90% of whose children received the shot every year during which the states were embroiled in controversy over making the HPV shot mandatory.

The antagonistic cultural meanings that fuel political controversy over GM foods in Europe aren’t inevitable either.  They are completely absent in the U.S.

(b) The same methods that scholars of public risk perception use to make sense of these differences, moreover, can be used to forecast the conditions that make one or another emerging technology—such as synthetic biology or nanotechnology—vulnerable to becoming suffused with such meanings. Action can then be taken to steer these technologies down a safer path—not for the purpose of making members of the public believe they are or aren’t genuinely hazardous, but rather for the purpose of assuring that members of the public will reliably recognize the best available evidence on exactly that.

Indeed,  the danger of cultural polarization associated with the path the HPV vaccine traveled in being introduced to the public was forecast with such methods, which corroborated the warnings of numerous health professionals and others.

This evidence wasn’t rejected; it simply wasn’t considered. There was no mechanism in any part of the drug-regulatory approval process for anyone to present, or any institution to act on, evidence of the hazards associated with fast-track approval of a girls-only STD vaccine combined with a high-profile nationwide campaign in state legislatures to make the vaccine mandatory.

(c)  Without systematic procedures to acquire and intelligently use scientific knowledge to protect the science communication environment, its contamination is inevitable.

The inevitable danger of such conflicts is built into the constitution of the Liberal Republic of Science. The same institutions and culture of political freedom that fuel the engine of competitive conjecture and refutation that drives science assure—mandate—that there be no single institution endowed with the authority to certify what is known to science. But the immensity and complexity of what is known cannot certify or announce itself; the idea that it can is the sentimental, sociologically and epistemologically naïve variant of nullius in verba.

In the Open Society there will be a plurality of certifiers—in the form of communities of free individuals associating with others with whom they have converged in the exercise of their reason on a shared understanding of the best way to live. 

This dynamic, unregulated, pluralistic system of certification of what is known to science works in the vast run of cases!

Yet it is inevitable—statistically!—that it sometimes won’t: the sheer enormity of things that science can discern in a free society & the non-zero probability that any one of those can become entangled in antagonistic cultural meanings mean that risk regulation will remain a permanent site of illiberal forms of status competition among the plurality of cultural groups in which free, reasoning individuals form their understanding of what is known to science. This is Popper’s revenge . . . .

It is foolish (an embarrassing display of shallow thinking combined with indulgence of tribal chauvinism) to blame “profit-mongering corporations” or “political extremists” for disasters like the one that occurred with the introduction of the HPV vaccine in the U.S. Until we—the citizens of the Liberal Republic of Science—use our reason and exercise our will to create a common culture of evidence-based science communication dedicated to protecting the science communication environment, we are destined to suffer the reason-effacing, welfare-enervating, freedom-annihilating spectacle of cultural conflict over risk.

(d) Writing at the birth of liberal democracy, Tocqueville famously remarked on the need for “a new political science for a world itself quite new.”

Today we need a new political science—a science of science communication—dedicated to protecting the process by which plural communities of free and reasoning individuals certify to themselves what is known by science.

We must use our reason to protect the historic condition of freedom and the unprecedented immensity of collective knowledge that are the reciprocal defining features of the Liberal Republic of Science. 

Monday
Feb102014

"Motivated Numeracy": What's the Point? (lecture synopsis, slides)

Gave a lecture/workshop today at Cambridge. It was advertised as being a session on the CCP working paper, “Motivated Numeracy and Enlightened Self-Government.”  It was—but I added some context/motivation.  Outline of what I remember saying below & slides here.  Lots of great questions & comments after—on issues from the influence of cultural cognition on scientists to the relative potential impact of fear & curiosity in fortifying critical reasoning dispositions!

I. What’s the point? The “Motivated Numeracy” study is the latest (more or less) installment in a series intended to make sense of and maybe help solve the science communication problem. The “science communication problem” refers to the failure of valid, compelling, and widely accessible scientific evidence to dispel public controversy over risks and other policy-relevant facts. Climate change is a salient instance of the problem but is not the only one. The conflict between public and expert views on the safety of nuclear power once attracted nearly as much attention. There are other contemporary instances of the science communication problem, too, including the controversy over mandatory HPV vaccination in the US and GM foods in Europe (but actually not in the US).

II.  Two theories. What accounts for the science communication problem?  One explanation, the “public irrationality thesis,” attributes public controversy over climate change and other societal risks to the public’s limited capacity to comprehend science. The problem is only in part one of a “knowledge deficit”; more important is a deficit in critical reasoning. Members of the public rely excessively on largely unconscious, heuristic-driven forms of information processing and thus overestimate more emotionally compelling dangers—such as terrorism—relative to less evocative ones like climate change, which the conscious, analytic modes of risk analysis used by experts show are even more consequential.  Informed by Kahneman’s “system 1/system 2” conception of dual-process reasoning, PIT is more or less the dominant account in popular and academic commentary.

Another account of the science communication problem is the “cultural cognition thesis.” Cultural cognition involves the tendency of individuals to conform their perceptions of risk and other policy-relevant facts to the positions that are dominant in the affinity groups that play a central role in organizing their day-to-day lives.  As a species of motivated reasoning, CCT is distinguished by its use of Mary Douglas’s “cultural worldview” framework to specify the core commitments of the affinity groups that shape information processing.  CCT differs from other conceptions of the “cultural theory of risk” in its attempt to root the influence that group commitments of this sort play in shaping perceptions of risk in cognitive mechanisms that admit of empirical investigation by the methods featured in social psychology and related disciplines.

III.   Three studies. Motivated Numeracy describes the third in a series of studies dedicated to investigating the relationship between PIT and CCT.  The first study, an observational one that examined the climate-change risk perceptions of a large nationally representative sample, made two findings at odds with PIT. 

The first finding had to do with the impact of science comprehension on the perceived risk of climate change. If, as PIT asserts, the reason that the average member of the public is less concerned with climate change risks than he or she should be is that he or she lacks the capacity to make sense of scientific evidence, then one would expect people to become more concerned about climate change as their science literacy and quantitative reasoning abilities increase.  But this isn’t so: the study found that the impact of these attributes on perceived climate-change risk was close to zero for the sample as a whole.

The second finding contrary to PIT had to do with the relationship between science comprehension and cultural cognition.  PIT views cultural cognition as just another heuristic substitute for the capacity to understand and give proper effect to scientific evidence of risk: those who can are reliably guided by the best available evidence; those who can’t must go with their gut, which is filled with crap like “what do people like me believe?”  If this position is correct, one would expect the risk perceptions of culturally diverse individuals to be progressively less correlated with their groups’ positions and more correlated across groups as their science comprehension capacity increases.

But not so.  On the contrary, cultural polarization, the first study found, increases as science comprehension does.
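
To see how these two findings fit together (a near-zero effect for the sample as a whole alongside polarization that grows with comprehension), here is a simulated sketch. The numbers are invented and are not the study's data:

```python
import numpy as np

# Simulated illustration (invented numbers, not the study's data) of how a
# near-zero sample-wide effect of science comprehension can coexist with
# polarization that grows as comprehension increases.
rng = np.random.default_rng(0)
n = 10_000
sci = rng.uniform(0, 1, n)            # science-comprehension score
group = rng.choice([-1.0, 1.0], n)    # two opposing cultural groups

# Opposite-signed slopes: comprehension pushes the groups apart, not together.
risk = 0.5 + 0.4 * group * sci + rng.normal(0, 0.1, n)

overall_slope = np.polyfit(sci, risk, 1)[0]
gap = lambda lo, hi: (risk[(group == 1) & (sci > lo) & (sci < hi)].mean()
                      - risk[(group == -1) & (sci > lo) & (sci < hi)].mean())

print(round(overall_slope, 3))  # ~0.0 : no effect "for the sample as a whole"
print(round(gap(0.0, 0.2), 2))  # ~0.08: small gap at low comprehension
print(round(gap(0.8, 1.0), 2))  # ~0.72: much larger gap at high comprehension
```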

Why? The CCT explanation is that individuals are using their knowledge of and capacity to reason about scientific evidence to form and persist in beliefs that reflect their group identities.

The second study used experimental methods to test this hypothesis.  The study found, consistent with CCT, that individuals who display the strongest disposition for cognitive reflection—a habit of mind associated with conscious, effortful system 2 reasoning—are more likely to discern the ideological implications of conceptually complicated information and selectively credit or reject it depending on its congeniality to their cultural outlooks.

The third and final study—the one whose results are reported in “Motivated Numeracy”—likewise used an experimental design to assess whether individuals can be expected to use their critical reasoning dispositions in a manner that promotes identity-congruent rather than truth-congruent beliefs.  The study examined the interaction of right-left ideology (an alternative way to measure the group affinities that generate cultural cognition) with numeracy, a quantitative reasoning capacity associated with “system 2” information processing.

Subjects were instructed to examine a problem understood to be a predictor of their vulnerability to a defective heuristic alternative to the logical analysis required to assess covariance.  The problem involved assessing whether the results of an experiment supported or negated a hypothesis.  For subjects in the “control group,” this problem was styled as one involving the effectiveness of a new skin-rash treatment.  As expected, only the most highly numerate subjects were likely to correctly interpret the experimental data.

Another version of the problem was styled as an experiment involving the effectiveness of a ban on carrying concealed weapons.  In this condition, high-numerate subjects again did much better than low-numerate ones, but only when the data, properly construed, generated an ideologically congenial result. When the data, properly construed, supported an ideologically noncongenial result, high-numerate subjects latched onto the incorrect but ideologically satisfying heuristic alternative to the logical analysis required to solve the problem correctly.

Because high-numeracy subjects used their quantitative reasoning powers selectively to credit evidence that low-numeracy subjects could not reliably interpret, high-numeracy subjects ended up more likely on average to disagree than low-numeracy ones.  The impact of science comprehension in magnifying cultural polarization on climate change is consistent with exactly this pattern of ideologically opportunistic critical reasoning.
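
A worked version of the covariance problem shows what the defective heuristic looks like and why it fails. The numbers below are illustrative, chosen in the spirit of the design rather than copied from the study's stimulus:

```python
# Covariance problem with illustrative numbers (in the spirit of the design,
# not necessarily the study's exact stimulus).
#              (improved, worsened)
treated   = (223, 75)
untreated = (107, 21)

# Defective heuristic: more treated patients improved than worsened,
# so conclude the treatment works.
heuristic_verdict = treated[0] > treated[1]                     # True

# Correct analysis: compare improvement *rates* across conditions.
rate_treated = treated[0] / sum(treated)        # ~0.75
rate_untreated = untreated[0] / sum(untreated)  # ~0.84
correct_verdict = rate_treated > rate_untreated                 # False

print(heuristic_verdict, correct_verdict)
# True False: properly construed, the data cut against the treatment.
```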

IV. One synthesis.  The studies investigating the interaction of PIT and CCT support (provisionally, as always!) a cluster of interrelated descriptive, normative, and prescriptive conclusions. 

A. The tragedy of the science communication commons. The science communication problem is a result not of too little rationality but rather too much.  Because the beliefs and actions of any ordinary individual member of the public can’t affect climate change, neither she nor anyone she cares about will be put at risk if she makes a mistake in interpreting the best available evidence.  But if such a person forms a position that is out of keeping with the dominant one in her affinity group, the consequences—in estrangement from those she depends on for support—can be extremely detrimental.  It thus is individually rational for individuals to attend to information on societal risks in a manner that more reliably connects their beliefs to those shared by others with their defining outlooks than to the best available evidence.  The more proficient they are in reasoning about scientific evidence, moreover, the more successful they’ll be in forming and persisting in such beliefs.

Such behavior, however, is collectively irrational. If all individuals pursue it simultaneously, they will not converge or converge as quickly as they should on valid evidence essential to their welfare.  Yet this predictable consequence will not change the psychic incentive that any individual faces to form group- rather than truth-convergent beliefs.

The science communication problem thus involves a distinctive form of collective action problem—a tragedy of the science communications commons.

B. Pathological meanings. The signature attributes of the science communication problem—cultural polarization magnified by science comprehension—are not normal. The number of risk perceptions and like beliefs that display this pattern relative to the number that do not is tiny. On issues from fluoridation of water to the safety of medical x-rays, the most science comprehending individuals do converge, pulling along those who share their cultural outlooks.  This process of knowledge transmission breaks down only when positions on disputed issues become symbols of membership in and loyalty to competing groups—at which point the stake ordinary individuals will have in forming group-convergent beliefs will systematically dominate the stake they have in forming truth-congruent ones. 

This sort of entanglement of risk perceptions and culturally antagonistic meanings is a pathology—both in the sense of being harmful and in the sense of being unusual or opposed to the normal, healthy functioning of collective belief formation.

C. “Scicomm environment protection” as a public good.  The health of a democratic society depends on the quality of the science communication environment just as the health of its members depends on the quality of the natural one.  Antagonistic cultural meanings are a form of pollution in the science communication environment that disables the exercise of the rational faculties that ordinary citizens normally and reliably use to discern what’s known to science. Protecting the science communication environment from this toxin is a public good essential to enlightened self-government. 

By using reason, we can protect reason from the distinctive threats that the science communication problem comprises.

Saturday
Feb082014

Cross-cultural cultural cognition road trip

Here's my schedule for the next week and a half -- or at least parts of it.

 

Stop by if in the neighborhood -- otherwise I'll send postcard reports now & again!

(Actually, I'm surprised that I'm giving the same talk at Cardiff & Nottingham--but I doubt that I really will!)

Wednesday
Feb052014

Science journalists: Ask not what the science of science communication can do for you . . . 

A reflective correspondent & friend wrote to me to ask what I made of the relative inattention of science journalists to the empirical study of science communication--& what might be done to remedy this.  She had many great ideas for how to make such work more familiar and accessible to them.  I had a somewhat different, but I think complementary reaction:

I think it is unsurprising how infrequently empirical research is featured in social media and similar fora in which science journalists exchange ideas.

The explanation, moreover, isn't merely that how to communicate to curious members of the public is only 1 of the n things that science of science communication studies. It's that those who are engaged in scientifically studying science communication -- including the sorts science journalists do -- aren't trying to answer the questions that journalists most often are, and should be, asking.  

The journalists’ questions relate to their own craft norms—the professional understandings that they absorb and generate and transmit and that guide and animate them. They argue about various of these norms all the time, in many cases persistently (or at least intermittently; they have jobs—very interesting ones!) over long periods of time.

That means they have questions that, in the judgment of those endowed with the requisite experience-informed professional judgment, admit of more than one plausible (but not, the debate presupposes, more than one correct or best) answer.

Under those circumstances, arguments will be interminable and make no progress. Evidence is needed -- not as a substitute for the exercise of professional judgment but as raw material for it to operate on.  

Well, very, very few (maybe zero) scholars are using empirical methods to answer questions of consequence to the quality and evolution of science journalism’s craft norms.

Most “science of #scicomm” scholars, of course, aren't studying science journalism at all.  

Others actually are—but to answer questions that are part of the scholarly conversations those researchers belong to. They have converged on (or joined) collective inquiries into how one or another general mechanism—cognitive, political, or both—operates to shape the path of scientific information through the media and to the public. Their research (much of which is excellent!) is, nearly always, trying to answer questions that admit of more than one plausible (but not more than one correct or best) answer about those processes—not about how science journalists can be excellent science journalists.

Maybe sometimes these scholars mistakenly think that what they are studying when they examine these more general dynamics of communication supplies the "answers" to the questions science journalists pose about their own craft norms. Other times they present their work this way knowing full well that it is a mistake (it's a very disturbing spectacle when they do).

In either case, science journalists react negatively -- "that's ridiculous" or (in a refrain that becomes a chorus after an event like NAS “science of #scicomm” colloquia) "that's completely irrelevant to what we do; I've not learned a thing!"  ...

Well, the problem actually isn't in the researchers here; it's in the science journalists!

The mistake is in part for them to think that "everything is about them": the science of science communication isn't one thing—it’s 7 (± 2).  

But even more fundamentally, it is a mistake for the science journalists to think that anyone besides them can be expected to create the scientific insight that is relevant to their craft!

No one else knows (or likely genuinely cares: nonjournalists don’t even know enough to care) what the empirical questions of consequence to science journalism’s craft norms are. No one else can reliably recognize the form of evidence that helps professional conversation about those questions to advance; only those with the situation sense of professional science journalists can.

This isn't to say that individual journalists must start designing studies and collecting data.  Rather it is to say that they must exercise control over research using empirical methods so that it in fact is designed to address questions of consequence to them and uses designs that can support inferences relevant to the sorts of questions experienced science journalists recognize as admitting of more than one plausible (but not more than one correct or best) answer.  

Science journalists will often observe, correctly, that “science of #scicomm” scholars’ work on general mechanisms is generating insights of indisputable relevance to their craft. But the journalists—not the scholars—will know when that’s so.

In that situation, moreover, science journalists will be filled with hypotheses--ones that are concrete and relevant to those who share their situation sense—about how those mechanisms might interact with their professional craft norms.

Even if they did not themselves create the studies, they will recognize when one designed to test such a hypothesis is genuinely capable of supporting inferences on the basis of which they will know more than they otherwise would have.

They are the ones, then, who must direct the empirical enterprise that is the science of science communication for science journalists.

How?

There are an infinite number of ways -- but none of them consists in passively consuming journal articles.

Here as in the other practical domains in which a science of science communication is needed, the answer of the thoughtful and honest scholar who actually wants to help when asked (over & over) by communicators to "so what should we do" is, "You tell me -- and I will help by measuring what you confirm for me is the right thing!"

Monday
Feb032014

Want to know what empirically *informed* vaccine risk communication looks like?

Drawing on material from the CCP Vaccine Risk and Ad Hoc Risk Communication study, the last few posts reported experimental results on the potentially deleterious effects of empirically uninformed risk communication.

By “empirically uninformed risk communication,” I mean to refer to information that accurately conveys the safety and efficacy of vaccines but that embeds that information in mischaracterizations of the extent, nature and consequences of public hostility to universal childhood immunization.  Ad hoc risk communication of this sort—which abounds in the media and on the internet—itself can produce misunderstandings that undermine the motivation to cooperate with universal immunization programs and that drag childhood vaccinations into the reason-effacing maelstrom of cultural conflict (Kahan 2013).

What’s more, this style of risk communication distracts those who want to promote public understanding of vaccine safety from the need to perfect empirically informed strategies for achieving this critical goal.

Such research is well underway.

As discussed in the Report, it consists not in general public opinion surveying: opinion polls lack sufficient discernment to identify the sources and mechanisms of genuine vaccine hesitancy in the public, and are not a reliable or valid measure of vaccine behavior by parents.

The most valuable research now being conducted on vaccine hesitancy uses focused and fine-grained methods tied to actual behavior.

Dr. Douglas Opel and his collaborators (2011a, 2011b, 2013), e.g., have devised—and are refining—an attitudinal screening instrument that can be used to predict parents’ willingness to obtain timely vaccinations for their children.  Such a screening device would be comparable to ones used in diverse fields from credit assessment (e.g., Klinger, Khwaja & LaMonte 2013) to organizational staffing (e.g., Ones et al. 2007), not to mention ones used to predict or diagnose disease vulnerability (e.g., Wilkins et al. 2013).

If perfected, such an instrument could be used by physicians to identify genuinely vaccine-hesitant parents, by public health administrators to detect local pockets of under-vaccination that pose a genuine public health threat, and by researchers to develop genuinely effective risk communication materials (Sadaf et al. 2013; Opel et al. 2012)—ones that can convey factually accurate information to the right people and avoid all the hazards associated with blunderbuss, empirically uninformed ad hoc risk communication.

Of course, the public health risks posed by local enclaves of under-vaccination, as well as by misinformers who sow unfounded anxiety in these and other communities, are ones that merit response right now.

The public health establishment doesn’t have as much evidence as it needs to address these dangers as effectively as it could.

But as such evidence is being developed by valid empirical research, those who favor universal childhood immunization should make intelligent use of the currently best available evidence to promote constructive and open-minded public engagement with valid information on vaccine risks and benefits.

If you want an example, I urge you to read this excellent essay from Moms Who Vax:

It may surprise you to know that the anti-vaccine movement has long claimed to speak for parents in this country when it comes to vaccines. And it is because they are so vocal and we are so, well, busy living our lives, that legislators, government officials, and even some public health organizations think that anti-vaccine activists who believe the MMR causes autism and that the decline of vaccine-preventable disease is due to "better hygiene" represent parents as a whole, when it comes to immunization in this country.

The vast--vast--majority of us choose to vaccinate our children for two reasons: one, we don't want our children to suffer from a preventable disease, possibly become seriously ill, or even die; and two, we don't want any of those things to happen to our neighbors either. Here's the problem: we don't talk about it. I suspect this is because we consider it commonsense. One mother on this blog wrote a post titled: "There's an Anti-Vaccine Movement?" because it had never occurred to her before she had children that people would willingly forgo something that has nearly eliminated one of the most dreaded diseases in human history (polio) and saved the lives of countless children and adults from other diseases that, if not kept in check by widespread immunization, cause unimaginable amounts of suffering.

We never thought we'd have to advocate for something that saves lives, especially the lives of children.

But here we are, and our complacency and our silence has allowed a fringe minority to sit at the table of public health in our place. And there are now consequences for our silence.

If I sound a little more passionate than usual, it's because I'm angry. We must rise up as a group and take back the conversation. … Right now, there are legislators in Oregon who believe that millions of parents do not believe in vaccination.... Let's prove them wrong. … Let's do this--let's go letter for letter, and beyond. Let's make sure the people who make our immunization law know that we are here, that we care, that we are the 95%.

In addition to being much more eloquent and inspiring than the boilerplate “growing crisis of confidence” and “creeping anti-science” tropes that dominate ad hoc risk communication, this essay brilliantly exploits dynamics that a reflective communicator would surmise are important based on existing, evidence-based understandings of science communication:

  • Because individuals (quite sensibly!) form their assessments of risk by observing how others who are similarly situated are responding (Kasperson et al. 1988), the clear, unassailable fact that the “vast majority” of U.S. parents arrange for their children to receive all recommended immunizations is itself an important and effective piece of evidence to communicate to parents—many of whom are likely to become fearful if bombarded by thoughtless repetition of the false message that an “epidemic of fear” has led to an erosion in immunization rates. 
  • Similarly, people tend to contribute voluntarily to collective goods when they perceive that others are doing so but to refrain when they think that others are shirking or free-riding (Bowles & Gintis 2013).  So again, the message here—“we are the 95%” who contribute—is spot on.  It reinforces reciprocal motivations to contribute to the collective good of herd immunity (Hershey et al. 1994) rather than undermines them, as empirically uninformed risk communicators do by proclaiming—falsely—that a “large and growing number” of “otherwise mainstream parents” are refusing to vaccinate their children.   
  • The communication manifests the willingness of the vast majority who are contributing to the public good of herd immunity to contribute to another: condemnation of the few who are free-riding. Experimental behavioral economics shows that individuals are most likely to converge on and stick to a high-cooperation equilibrium in a collective action setting when they can observe that other individuals are moved voluntarily to accept the burden of informally punishing (e.g., by shaming) the relatively few selfish actors who free-ride.  In contrast, demands for increased, centrally administered formal punishments can vitiate reciprocal motivations by conveying an expectation that the disposition to voluntarily comply is lower than it actually is (Kahan 2004)—another of the many sources of scientific insight that empirically uninformed risk communicators ignore.
  • Finally, this essay is inspiringly inclusive.  It doesn’t use the cheap trick of ramping up one cultural group’s indignation by attributing socially undesirable behavior to a competing one. That trick—characteristic of communications that, again falsely, attribute vaccine hesitancy to one or another recognizable cultural or political group—is what threatens to envelop childhood vaccines in exactly the same forms of persistent cultural conflict that inhibit public recognition of the best available evidence on myriad issues—from climate change to nuclear power to the HPV vaccine.

We need to acquire more valid empirical evidence on how to communicate vaccine risks and benefits.

But we also need to act in an informed way in the meantime.

Another of the many defects of empirically uninformed vaccine risk communication is that it diverts attention away from the most instructive and inspiring examples of how public-spirited citizens and scientists are pursuing these objectives.

References

Bowles, S. & Gintis, H. A Cooperative Species: Human Reciprocity and Its Evolution (Princeton University Press, Princeton, 2013).

Kahan, D.M. The Logic of Reciprocity. in Moral Sentiments and Material Interests: The Foundations of Cooperation in Economic Life (eds. H. Gintis, S. Bowles, R. Boyd & E. Fehr) 339-378 (MIT Press, Cambridge, MA, 2004).

Kahan, D.M. A risky science communication environment for vaccines. Science 342, 53-54 (2013).

Kasperson, R.E., et al. The Social Amplification of Risk: A Conceptual Framework. Risk Analysis 8, 177-187 (1988).

Klinger, B., Khwaja, A. & LaMonte, J. Improving credit risk analysis with psychometrics in Peru. (Inter-American Development Bank, 2013).

Ones, D.S., Dilchert, S., Viswesvaran, C. & Judge, T.A. In support of personality assessment in organizational settings. Personnel Psychology 60, 995-1027 (2007).

Opel, D.J., Mangione-Smith, R., Taylor, J.A., Korfiatis, C., Wiese, C., Catz, S. & Martin, D.P. Development of a survey to identify vaccine-hesitant parents: The parent attitudes about childhood vaccines survey. Human Vaccines 7, 419-425 (2011a).

Opel, D.J., Robinson, J.D., Heritage, J., Korfiatis, C., Taylor, J.A. & Mangione-Smith, R. Characterizing providers’ immunization communication practices during health supervision visits with vaccine-hesitant parents: A pilot study. Vaccine 30, 1269-1275 (2012).

Opel, D.J., Taylor, J.A., Mangione-Smith, R., Solomon, C., Zhao, C., Catz, S. & Martin, D. Validity and reliability of a survey to identify vaccine-hesitant parents. Vaccine 29, 6598-6605 (2011b).

Opel, D.J., Taylor, J.A., Zhou, C., Catz, S., Myaing, M. & Mangione-Smith, R. The relationship between parent attitudes about childhood vaccines survey scores and future child immunization status: A validation study. JAMA Pediatrics 167, 1065-1071 (2013).

Sadaf, A., Richards, J.L., Glanz, J., Salmon, D.A. & Omer, S.B. A Systematic Review of Interventions for Reducing Parental Vaccine Refusal and Vaccine Hesitancy. Vaccine 31, 4293-4304 (2013).

Wilkins, C.H., Roe, C.M., Morris, J.C. & Galvin, J.E. Mild physical impairment predicts future diagnosis of dementia of the Alzheimer’s type. Journal of the American Geriatrics Society 61, 1055-1059 (2013).

Friday
Jan312014

The culturally polarizing effect of the "anti-science trope" on vaccine risk perceptions 

The “ ‘anti-science’ trope” refers to a common theme in ad hoc risk communication that links concern about vaccine risks to disbelief in evolution and climate skepticism, all of which are cited as instances of a creeping hostility to science in the U.S. general public or at least some component of it.

In the last post, I presented evidence, collected as part of the CCP Vaccine Risk Perception study, that showed that the trope has no meaningful connection to fact. 

Those who accept and reject human evolution, those who believe in and those who are skeptical about climate change, all overwhelmingly agree that vaccine risks are low and vaccine benefits high.

The idea that either climate change skepticism or disbelief in evolution denotes hostility to science or lack of comprehension of science is false, too. That’s something that a large number of social science studies show.  The CCP Vaccine Risk study doesn’t add anything to that body of evidence.

But the CCP Vaccine Risk study did show that differences in science comprehension and religiosity, which interact in an important way in disputes over climate change and evolution, have no meaningful impact on vaccine risk perceptions.

In addition to examining whether there was any factual substance to the anti-science trope, the CCP Vaccine Risk Perception study also investigated what the impact of the trope is—or at least could be if it were propagated widely enough—on public opinion.

For that purpose, the study used experimental methods. The experiment had three key elements.

First was a measurement of subjects’ cultural predispositions toward societal risks.

I’ve actually described the strategy used to do so in several earlier posts.  But basically, the experiment used an “interpretive community” strategy, in which unobserved or latent group predispositions are extracted from subjects’ perceptions of a host of societal risks that are known to divide people of diverse cultural and political outlooks.  This approach, as I’ve explained, furnishes the “highest resolution” for measuring the influence group predispositions might be having on perceptions of a risk where there is reason to believe the impact might be small.

 

That analysis identified two cross-cutting or orthogonal dimensions along which risk predispositions could be measured.  I labeled them the “public safety” and “social deviancy” dimensions, based on their respective indicators (various environmental risks, guns, second-hand smoke in the former case; legalization of marijuana and prostitution and teaching of high school sex ed in the latter). 

Subjects in the diverse 2,300-person sample of U.S. adults could thus be assigned to one of four “interpretive communities” (ICs) based on their scores relative to the means of these two “risk perception dimensions”: IC-α (“high public-safety,” “low social-deviancy”);  IC-β (“high public-safety,” “high social-deviancy”); IC-γ (“low public-safety,” “low social-deviancy”); and IC-δ (“low public-safety,” “high social-deviancy”).  The intensity of the study subjects' commitment to one or the other of these groups can be measured by their scores on the public-safety and social-deviancy risk-perception scales.
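For the curious, here is a minimal sketch of that quadrant-assignment step. The data, column names, and scale construction below are all stand-ins of my own devising; the Report's actual measurement strategy is more involved:

```python
import numpy as np
import pandas as pd

# Simulated stand-in scale scores; hypothetical column names.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "public_safety": rng.normal(size=2300),
    "social_deviancy": rng.normal(size=2300),
})

# Standardize each dimension so the sample mean is 0...
df = (df - df.mean()) / df.std(ddof=1)

# ...then classify each respondent by which side of each mean she falls on.
def assign_ic(row):
    hi_ps = row["public_safety"] >= 0
    hi_sd = row["social_deviancy"] >= 0
    if hi_ps and not hi_sd:
        return "IC-alpha"   # high public-safety, low social-deviancy
    if hi_ps and hi_sd:
        return "IC-beta"    # high public-safety, high social-deviancy
    if not hi_ps and not hi_sd:
        return "IC-gamma"   # low public-safety, low social-deviancy
    return "IC-delta"       # low public-safety, high social-deviancy

df["IC"] = df.apply(assign_ic, axis=1)
print(df["IC"].value_counts())
```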

The second element was exposure of the subjects to examples of “ad hoc risk communication.”

The subjects were assigned to experimental conditions or groups, each of which read a different communication patterned on information in the media or internet.

One of these communications used the “anti-science trope.” Patterned on real-world communications (including ones reproduced in the Appendix to the Report), it took the form of an op-ed that described disbelief in evolution, climate skepticism, and the belief that vaccines cause autism as progressive manifestations of a mutating “anti-science virus.” As is so for most real-world communications embodying the anti-science trope, the experimental communication displayed an unmistakably partisan orientation and conveyed contempt for members of the public who are skeptical of climate change and disbelieve evolution.

The third element was measurement of the subjects’ perceptions of vaccine risks and benefits.

The study used a large battery of risk and benefit items, which were combined into a highly reliable scale, “PUBLIC_HEALTH” (Cronbach’s α = 0.94), scores of which were transformed into z-scores (i.e., normalized so that increments reflected standard deviations from the mean) and coded so that lower ones denoted relatively negative assessments of vaccines and higher scores relatively positive ones.
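For readers who want the mechanics, here is a minimal sketch of this sort of scale construction—Cronbach's α computed over an item battery, followed by z-scoring of the composite. Everything below (item count, simulated responses, variable names) is illustrative, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x items) array of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
latent = rng.normal(size=(2300, 1))                       # shared affective orientation
items = latent + rng.normal(scale=0.5, size=(2300, 12))   # 12 correlated items

composite = items.mean(axis=1)
public_health = (composite - composite.mean()) / composite.std(ddof=1)  # z-scores

print(round(cronbach_alpha(items), 2))  # high alpha: items cohere as one scale
```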

In the experiment, then, the risk perceptions of subjects exposed to different forms of “ad hoc risk communication” were compared to those of the survey participants, who were assigned to read a news story unrelated to vaccines and who served as the “control” group.

The results . . . .

As previewed in an earlier blog post, the study found that among members of the control group there was no practically meaningful relationship between vaccine risk perceptions and the cultural risk predispositions measured by the “public safety” and “social deviance” IC dimensions.  IC-αs (“high public-safety,” “low social-deviancy”) scored highest on PUBLIC_HEALTH and IC-δs the lowest.  But the difference in their mean scores was trivially small—less than one-third of a standard deviation.

As a measure of the practical difference in these scores, the predicted probability of agreeing that the “benefits of obtaining generally recommended childhood vaccinations outweigh the health risks” was estimated to be 84% (± 3%, LC = 0.95) for a typical IC‑α and 74% ( ± 4%) for a typical IC‑δ.
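Here is a sketch of how predicted probabilities of this kind are commonly derived: fit a logistic regression of the binary agreement item on the predisposition scales, then evaluate it at profile values standing in for "typical" group members. The data, coefficients, and profile choices below are all invented for illustration—this is not the study's model:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2300
df = pd.DataFrame({
    "public_safety": rng.normal(size=n),
    "social_deviancy": rng.normal(size=n),
})
# Simulate agreement with an invented "true" model.
true_logit = 1.5 + 0.3 * df["public_safety"] - 0.3 * df["social_deviancy"]
df["agree"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

X = sm.add_constant(df[["public_safety", "social_deviancy"]])
model = sm.Logit(df["agree"], X).fit(disp=0)

# "Typical" IC-alpha: +1 SD public-safety, -1 SD social-deviancy;
# a "typical" IC-delta is the mirror image.
profiles = pd.DataFrame(
    {"const": [1.0, 1.0],
     "public_safety": [1.0, -1.0],
     "social_deviancy": [-1.0, 1.0]},
    index=["IC-alpha", "IC-delta"],
)
print(model.predict(profiles))  # predicted probabilities of agreement
```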

This was consistent with the findings of the Vaccine Risk Perception study's survey component generally, which found that there is broad-based consensus, even among groups that are bitterly divided on issues like climate change and evolution, that vaccine benefits are high and their risks low.  As of today, at least, vaccine risks are not culturally polarizing.

But that could change, the experiment results suggested.

This very modest difference in the perceptions of subjects displaying the IC-α and IC-δ risk dispositions widened significantly among their counterparts in the “anti-science trope” condition. Exposure to the “anti-science” op-ed also drove a wedge between subjects displaying the IC-β (“high public-safety,” “high social-deviancy”) and IC-γ (“low public-safety,” “low social-deviancy”) dispositions, groups whose scores on the PUBLIC_HEALTH scale were indistinguishable in the control. 

The practical significance of the difference can be illustrated by examining the impact of the experiment on the predicted probability of agreement with the item measuring “confidence in the judgment of the American Academy of Pediatrics that vaccines are a ‘safe and effective way to prevent serious disease.’” Subjects responded to this item immediately after reading a statement issued by the AAP on vaccine testing and safety. The predicted probability that a subject with a typical IC-δ disposition would indicate a positive level of confidence dropped from 73% (± 4%, LC = 0.95) in the control to 64% (± 7%, LC = 0.95) in “anti-science”; the gap between the predicted probability of a positive assessment by a typical IC-δ and a typical IC-α grew by 14% (± 9%) across the two conditions. The gap between the typical IC-β and both the typical IC-α (7%; ± 7%, LC = 0.95) and typical IC-δ (6%; ± 6%, LC = 0.95) grew, too, but by a more modest amount. As one would expect, similar divisions characterized responses to other items in the PUBLIC_HEALTH scale.

There was no similar decrease in the predicted probability that a typical IC-δ would express a positive level of confidence in the other experiment conditions, one of which featured a composite news story proclaiming an impending public health crisis from “declining vaccine rates,” and another of which featured a communication patterned on a typical CDC press release that conveyed accurate information on the high and steady level of vaccination rates in the U.S. over the last decade.  But as discussed in a previous post, subjects in the “crisis” condition, not surprisingly, grossly overestimated the degree of parental resistance to universal immunization—an effect that could negatively affect reciprocal motivations to contribute to the public good of herd immunity.

It is important to realize that the polarizing impact of the “Anti-science” op-ed resulted both from the positive effect it had on the vaccination attitudes of IC-α subjects and from the negative effect it had on IC-δ ones. The overall effect of the “Anti-science” treatment was negligible.

The practical importance of the result, then, turns on the significance attached to the intensified levels of disagreement among subjects of diverse outlooks.

Previous CCP studies, including one involving controversy over the HPV vaccine, suggest that the status of a putative risk source as a symbol or focus of cultural contestation is what disrupts the social processes that ordinarily result in public convergence on the best available evidence relating to societal and health risks.

If this is correct, then any influence that intensifies differences among such groups should be viewed with great concern.

The "anti-science trope," in sum, is not just contrary to fact.  It is contrary to the tremendous stake that the public has in keeping its vaccine science communication environment free of reason-effacing forms of pollution.

Thursday
Jan302014

How are climate skepticism, disbelief in evolution & vaccine hesitancy related?

The dominant theme of ad hoc vaccine risk communication warns of a “growing wave of public resentment and fear” that has induced a “large and growing number” of “otherwise mainstream parents” to refuse to vaccinate their children.

As discussed in the last post, this trope is not based on fact: there hasn't been an “erosion in immunization rates”; on the contrary, coverage for all recommended childhood vaccines has held steady at 90% (the HHS “Healthy People” target) or above for over a decade.

And while there's zero evidence of a “growing crisis of public confidence” in vaccines at present, emphatic assertions that there is one can be shown to induce misunderstandings and confusion inimical to the willingness of people to make voluntary contributions to public goods--like the herd immunity associated with universal immunization.

A secondary theme of ad hoc risk communication is the "anti-science" trope.  This claim links "growing" concern over vaccine safety to disbelief in evolution and skepticism toward climate change, all of which are depicted as evidence of a creeping hostility to science in the general public.

The CCP Vaccine Risk Perception study found this assertion, too, to be both contrary to fact and antithetical to maintaining the existing, broad-based public consensus in favor of universal immunization.

Below is a section of the Report that presents survey evidence on the relationship between vaccine risk perceptions, on the one hand, and climate change skepticism, disbelief in evolution, and science comprehension generally, on the other.  Tomorrow I'll post material relating to the Study's experimental component, which illustrates the potential of the "anti-science trope" to generate cultural conflict over vaccines.

A. Some benchmarks: evolution and climate change, science comprehension and religiosity.  

As emphasized, the aim of the survey component of the study was to evaluate the nature of the general public’s perception of childhood vaccine risks. Is there a shared or dominant affective orientation toward vaccine safety in the U.S. public? Or do childhood vaccines provoke mixed and opposing reactions—and if so, among whom?

Meaningful answers to these questions require an intelligible reference point with which to compare the survey responses. Disputes over universal vaccination laws—provisions that make immunization a condition of school enrollment, subject to medical or religious and, in some states, moral-objection “exemptions”—are frequently likened to conflicts over acceptance of mainstream science, including the teaching of evolution in public schools and the adoption of policies to mitigate the environmental impact of climate change. Associated with religious, cultural, and political divisions, the intensity and character of these conflicts can be used to help assess the intensity and character of any divisions of opinion observed on childhood vaccine risks.

The study measured participants’ beliefs about both evolution and global warming. On evolution, subjects responded to an item from the National Science Foundation (2012) “Science Indicators” battery, which is conventionally used to measure science literacy. That item asks respondents to classify as true or false the statement “Human beings, as we know them today, developed from earlier species of animals.” In line with many public opinion polls (e.g., Newport 2012), 56% of the survey respondents classified this statement as “true,” and 44% as “false.”

On climate change, 52% of the survey respondents indicated that they believe scientific evidence supports the proposition that the earth’s temperature has been increasing in “the last few decades” as a result “of human activity such as burning fossil fuels.” Thirty percent indicated that they did not believe there was “solid evidence” of increasing global temperatures over the “past few decades,” while another 18% indicated that they believed there was “solid evidence” of warming but that the cause was “mostly. . . natural patterns in the earth’s environment,” as opposed to “human activity.” These figures, too, are in line with recent national opinion surveys (Silver 2013).

Irrespective of their responses to these items, however, the overwhelming majority of survey respondents agreed with the proposition that the “health benefits of obtaining generally recommended childhood vaccinations outweigh the health risks” (BALANCE). Eighty percent of the respondents who believe in human-caused climate change agreed with this proposition. So did 81% of those who believe the earth’s temperature has increased as a result of “natural patterns,” and 73% of those who believe the earth’s temperature has not increased in recent decades. Eighty percent of the respondents who believe in evolution and 77% of those who do not (a difference smaller than the survey margin of error) likewise indicated that they agree the benefits of childhood vaccinations outweigh their risks (Figure 5).

Study participants also responded to items measuring both their religiosity and their knowledge of and facility with scientific evidence. The former was assessed with a scale that aggregated self-reported church attendance, frequency of prayer, and “importance of God” in the respondents’ lives (α = 0.86). Subjects’ “science literacy” was assessed with 11 items from the NSF’s Science Indicator battery, which is conventionally used to study public understanding of science in the U.S. and abroad (NSF 2012). In addition, subjects completed a ten-item version of the Cognitive Reflection Test (Frederick 2005; Toplak, West & Stanovich 2013), which assesses the motivation and capacity to consciously interrogate one’s views on the basis of available information, a critical-reasoning disposition integral to forming evidence-based beliefs (Toplak, West & Stanovich 2011).

The NSF and CRT items formed a reliable scale (Cronbach’s α = 0.82), which can be interpreted as measuring a “science comprehension” aptitude (Kahan, Peters et al. 2012). Consistent with previous studies (Pennycook 2012, 2013; Shenhav, Green & Rand 2011; Gervais & Norenzayan 2012), there was a modest negative correlation between religiosity and science comprehension (r = -0.26, p < 0.01).

Religiosity and science comprehension also were meaningfully—but not straightforwardly—associated with the study subjects’ positions on evolution and climate change. Science comprehension was modestly associated (r = 0.28, p < 0.01) with belief in evolution and weakly associated with belief in human-caused climate change (r = 0.10, p < 0.01) for the sample as a whole (including both survey and experiment subjects). But the impact was moderated by subjects’ religiosity: among those low in religiosity, higher science comprehension substantially increased belief in evolution and in human-caused global warming; among those high in religiosity, however, higher science comprehension had next to no impact on belief in evolution and substantially reduced belief in human-caused global warming (Figure 7).
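For the statistically curious, here is a minimal sketch of what such a moderation analysis looks like. The variable names and effect sizes are invented; the point is only to show how an interaction term lets science comprehension take opposite slopes at low versus high religiosity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2300
d = pd.DataFrame({
    "sci_comp": rng.normal(size=n),
    "religiosity": rng.normal(size=n),
})
# Build in a crossover (coefficients are made up): science comprehension
# raises belief in warming at low religiosity, lowers it at high religiosity.
d["belief_warming"] = (
    0.10 * d["sci_comp"]
    - 0.20 * d["religiosity"]
    - 0.30 * d["sci_comp"] * d["religiosity"]
    + rng.normal(size=n)
)

m = smf.ols("belief_warming ~ sci_comp * religiosity", data=d).fit()

# Marginal effect of science comprehension at religiosity level r:
#   b_sci_comp + b_interaction * r
b = m.params
for r in (-1, 0, 1):
    print(r, round(b["sci_comp"] + b["sci_comp:religiosity"] * r, 2))
```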

The interaction between religiosity and science comprehension is not surprising. Science literacy and critical reasoning dispositions have been found to magnify cultural and ideological predispositions toward global warming (Kahan, Peters et al. 2012; Kahan 2013b). So it stands to reason that they would have the same impact on predispositions associated with religiosity, which plays a comparable role to shared cultural and political outlooks in the web of social relationships that orient individuals toward what is known by science. “Belief in evolution” is not a reliable indicator of either a scientifically literate understanding of evolutionary mechanisms (Shtulman 2006; Bishop & Anderson 1990) or the species of science literacy measured generally by the NSF Science Indicators. Rather, how one responds to the question “do you believe in human evolution” indicates a form of identity that features religiosity (Roos 2012). It is perfectly plausible that the significance of “disbelief” in evolution as an expression of personal identity would be unaffected by science knowledge—or possibly even reinforced by habits of mind associated with critical reasoning. Indeed, experimental evidence supports this inference (Lawson & Worsnop 2006).

These relationships—which are integral to making sense of the salience and ferocity of societal conflict over climate change and over evolution—were absent from the views of the survey respondents toward childhood vaccines (Figure 7). Both science comprehension (r = 0.12, p < 0.01) and religiosity (r = -0.14, p < 0.01) displayed only weak relationships with the battery of items that formed the PUBLIC_HEALTH scale. There was an interaction between religiosity and science comprehension in the survey respondents’ scores on the scale, but it was small in size and, more importantly, moderated only the intensity of the positive orientation that subjects of varying levels of religiosity expressed toward childhood vaccines (App. 1, Table 1).

A more detailed examination of the participants’ responses to the various survey items follows. Unsurprisingly, there is unanimity on none. Nevertheless, understood in relation to contested societal issues that feature conflict among large and readily identifiable societal groups, the uniform and uniformly supportive margins of agreement reflected in the survey responses are of fundamental interpretive significance. As will become even more apparent, in probing the nature of opposition to universal childhood immunization, one is necessarily assessing the attitudes of a segment of the population that is small in size and that defies identification by the sorts of characteristics associated with recognizable cultural styles in American society.

 

To download Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Empirical Assessment, click here.

Tuesday
Jan282014

The logic of reciprocity--and the illogic of empirically uninformed vaccine risk communication

One of the aims of the CCP Vaccine Risk Perception and Ad Hoc Risk Communication study was to examine the impact of empirically uninformed vaccine risk communication.

By “empirically uninformed risk communication,” I don’t mean vaccine-risk misinformation, such as the claim that childhood vaccines cause autism. I take as a given that false assertions like that generate unwarranted public concern.

Rather, I’m using “empirically uninformed risk communication” to denote information that accurately conveys the risks of childhood vaccines—likely for the very appropriate purpose of counteracting misinformation—but that nevertheless embeds that information in empirically insupportable representations about the extent, sources, and consequences of public unease toward universal immunization.

Disseminated by journalists, advocates, and even some public health professionals, this kind of vaccine-risk communication is the type that ordinary members of the public are in fact most likely to be exposed to.

Its core message is that public health in the U.S. is being threatened by a “growing crisis of public confidence” in vaccines.  No longer confined to “[t]he fringe who don’t believe in medicine for religious reasons,” a “growing distrust of vaccinations” is now sweeping across “our nation’s parents,” we are told (& told & told & told).

The resulting “erosion in immunization rates” is “predictably ... leading to the resurgence of diseases considered vanquished long ago,” including whooping cough and measles. “From Taliban fighters to California soccer moms,” one source concludes, “those who choose not to vaccinate their children against preventable diseases are causing a public health crisis.”

The CCP study didn’t purport to test these claims. Instead, it deferred to sources that employ valid empirical methods specifically suited to measuring immunization rates and the incidence of childhood diseases.

These sources belie the claim that vaccination rates are declining at all, much less “eroding.”

According to data compiled by the U.S. Centers for Disease Control, coverage for recommended childhood immunizations—including MMR, pertussis, hepatitis B, and polio—has been holding steady at or above 90% (the target threshold) for well over a decade (CDC 2013a, 2008, 2006). The proportion of children receiving no vaccinations has persisted at or below 1%, despite the ready availability of nonmedical “exemptions” from state-administered universal immunization policies for objecting parents.

Rather than a “large and growing number” of “otherwise mainstream parents” refusing to vaccinate their children, the CDC reports (in annual press releases, the language of which varies little from year to year) that “nearly all parents are choosing to have their children protected against dangerous childhood diseases” (CDC 2010).

There are local enclaves, the CDC cautions, in which vaccination rates are significantly lower than the national average. These enclaves are often the site of recurring localized outbreaks of diseases, like measles, which public health officials have deemed eliminated in the United States but which can be introduced into such communities by individuals infected during travel abroad (CDC 2013b).

Fortunately, “[h]igh MMR vaccine coverage in the United States (91% among children aged 19–35 months),” the CDC states, “limits the size of [such] outbreaks,” which averaged 60 cases per year over the last decade (ibid.).

The incidence of whooping cough, which has not been eliminated in the United States, is also likely higher in low-vaccination enclaves (Atwell et al. 2013; Glanz et al. 2013; Omer et al. 2008). But “[p]arents refusing to get their children vaccinated,” according to the CDC, are “not the driving force behind the large scale outbreaks” of this disease in recent years (CDC 2013c). In addition to “increased awareness, improved diagnostic tests, better reporting, [and] more circulation of the bacteria,” the CDC (2013c) has identified “waning immunity” from an ineffective booster shot as one of the principal causes, a view shared by public health experts (Cherry 2012).

Pockets of resistance to vaccination pose a serious and unmistakable public health concern. They merit considered attention informed by empirical methods suited to assessing the influences that generate them, the contributions they make to the incidence of childhood diseases, and the measures that might be employed to counteract and contain them (Opel 2011, 2012; Mnookin 2011; Omer et al. 2008).

But the existence of anti-vaccine enclaves and the dangers they pose do not furnish empirical support for asserting that there is a “growing crisis of public confidence” in childhood vaccines, that “immunization rates with MMR have dropped in . . . the US,” or that a “rising tide of … vaccine reluctance” has generated “a resurgence of diseases gone so long that some doctors don’t even recognize them.”

Such claims reflect not an “epidemic of fear” among ordinary parents, but an epidemic of hyperbole among a diverse collection of actors resorting to ad hoc, empirically uninformed alternatives to genuinely evidence-based forms of risk communication.

I'm sure those engaging in empirically uninformed vaccine risk communication are not doing so in bad faith.

Most probably just don’t know what they are talking about—in part because of the prevalence of empirically uninformed vaccine risk communication.

But some probably do realize that they are in fact grossly mischaracterizing the extent of vaccine avoidance in the U.S., and misattributing to it disease outbreaks that in fact stem from other causes such as the ineffective pertussis booster shot.

They probably figure that this fact-disconnected style of risk communication is okay because it will grab people’s attention and stir them to anger at parents who are not vaccinating their children and definitely should be.

But that way of thinking is empirically uninformed, too.  Indeed, the scientific study of science communication suggests that understating the high level of vaccination in the U.S. could actually weaken public support for and cooperation with universal immunization programs.

The “herd immunity” associated with universal childhood vaccination is a collective good (Olson 1965). That is, by complying with universal vaccination policies, parents confer a benefit—reduced risk of contracting a disease—not only on their own children but also on those who as a result of age, medical restrictions, or material disadvantage have not been able to secure the protection that such vaccinations confer.

In collective action settings—from tax compliance to recycling, from voting to observation of informal norms on picking up one’s children on time from daycare centers—individuals tend to behave like moral and emotional reciprocators (Gintis et al. 2004). That is, rather than engage in purely self-interested calculation, they are motivated to contribute voluntarily to collective goods if they perceive that others are doing so, but to refrain from contributing if they think free-riding is widespread.

This dynamic makes it imperative that people not be induced to underestimate the extent to which others are voluntarily contributing to a collective good. If they do, a higher number of individuals will themselves refrain from contributing—behavior that can be expected to induce others to do the same, generating a self-reinforcing spiral of non-cooperation (Kahan 2004).

The logic of reciprocal cooperation implies that people who believe that others are refraining from getting vaccinated—free-riding on others’ contributions to herd immunity—will themselves be less willing to get vaccinated. One experimental study using self-report data found that students exposed to information indicating that other students were forgoing vaccination, effectively free-riding on others’ decisions to get flu shots, responded in exactly this way (Hershey et al. 1994).
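A toy simulation makes the dynamic vivid. In this sketch—my own parameterization, drawn neither from the Hershey study nor from the CCP data—each agent contributes only if the share of others she perceives to be contributing exceeds a personal threshold, and perceptions update to the last observed rate:

```python
import numpy as np

rng = np.random.default_rng(4)
# Heterogeneous conditional cooperators: each agent vaccinates only if the
# perceived share of other contributors exceeds her personal threshold.
thresholds = np.clip(rng.normal(loc=0.6, scale=0.15, size=10_000), 0.0, 1.0)

def trajectory(initial_perceived_rate, periods=8):
    perceived = initial_perceived_rate
    rates = []
    for _ in range(periods):
        actual = float((thresholds < perceived).mean())  # who contributes this period
        rates.append(round(actual, 3))
        perceived = actual  # beliefs update to the observed rate
    return rates

print(trajectory(0.90))  # accurate "90%+ coverage" message: cooperation holds
print(trajectory(0.55))  # "eroding rates" misperception: cooperation unravels
```

With this (invented) threshold distribution, a population that starts out perceiving coverage accurately stays near full cooperation, while one led to believe coverage is much lower tips past an unstable point and spirals downward—the self-reinforcing dynamic described above.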

The results of the CCP Vaccine Risk Perception study suggest that members of the general public substantially underestimate childhood-vaccine coverage. Asked to estimate “about what percentage of U.S. children (age 19-35 months) received the recommended vaccinations for childhood diseases” in recent years, only 9% of the survey subjects indicated “90% or above”; the median estimate was “70-79%.” In addition, approximately 40% of the sample indicated that the “trend in the rate of vaccination for U.S. children (age 19-35 months)” had gone down either “a little” or “a lot.”

The survey participants likewise grossly overestimated the proportion of children who receive no vaccinations. Only 9% correctly put the figure at “1% or less.” The median response was “2%-10%.” Over one-third of the sample selected either 11% to 20% or 21% to 30%.

Because of the contribution that reciprocity makes to individuals’ motivations to contribute to public goods like herd immunity, this kind of misunderstanding is not good.

Even worse, the experimental component of the CCP study found that individuals’ levels of misunderstanding grew when they were exposed to empirically uninformed vaccine risk communication.

In the experiment, subjects were assigned to one or another condition, members of which read different materials patterned on real-world media and internet sources.  Those assigned to the “Crisis” condition read a news story asserting “growing” parental resistance to vaccination and a resulting decline in vaccination rates. Those assigned to the “Anti-science” condition read an op-ed commentary that similarly implied that vaccination rates were declining based on fear of vaccine side-effects among individuals disposed to distrust scientific information on issues like evolution and climate change.

Subjects in both the “Crisis” and “Anti-science” conditions underestimated national vaccination coverage.  They also were more likely to report that vaccine rates had gone down in recent years.

Subjects in the “CDC” condition, in contrast, received a news story that quoted CDC officials accurately indicating that “vaccination coverage rates . . . have remained stable at or above 90 percent for over a decade,” and that “less than 1% of toddlers had received no vaccines at all,” but also warning of the “the existence of local communities in which vaccination coverage is lower than target levels for certain diseases.”

This communication was patterned on the annual CDC press releases that announce the latest NIS results.  Relative to the Crisis, Anti-science, and control group subjects, those in the CDC condition formed much more accurate estimates of the high existing and historical rates of vaccination in the U.S.

Obviously, the high existing rates of vaccination in the U.S. suggest that the degree to which the public currently underestimates vaccination rates is not now inducing widespread noncompliance. But because reciprocity dynamics have been shown to be robust across collective action settings, there is ample reason to discourage journalists, advocates, and health professionals from propagating the misimpression that there is a “growing wave of public resentment and fear” toward vaccines that has resulted in “ ‘erosion in immunization rates’ ”—a refrain, ironically, that strident vaccine opponents gleefully embrace.

Indeed, public awareness that the U.S. has historically enjoyed and continues to enjoy exceptionally high rates of compliance with universal vaccination programs should be regarded as an important public-health resource.

The best available evidence on science and risk communication implies that “public health campaigns that describe the already wide acceptance of pertussis vaccination” and other immunizations against childhood disease is the most reliable way to sustain that widespread acceptance (Hershey et al. 1994, p. 186).

References

Cultural Cognition Project, Vaccine risk perceptions and ad hoc risk communication: An experimental investigation. Cultural Cognition Risk Perception Studies Rep. No. 17 (Jan. 27, 2014).

CDC. CDC National Survey Finds Early Childhood Immunization Rates Increasing. (Sept. 1, 2011), available at http://www.cdc.gov/media/releases/2011/p0901_cdc_nationalsurvey.html.

CDC. CDC Survey Finds Childhood Immunization Rates Remain High. (Sept. 16, 2010), available at http://www.cdc.gov/media/pressrel/2007/r070830.htm.

CDC. National, State, and Local Area Vaccination Coverage Among Children Aged 19–35 Months — United States, 2012. Morbidity and Mortality Weekly Reports 62, 733-740 (2013a).

CDC. National, State, and Local Area Vaccination Coverage Among Children Aged 19–35 Months — United States, 2007. Morbidity and Mortality Weekly Reports 57, 961-966 (2008).

CDC. National, State, and Local Area Vaccination Coverage Among Children Aged 19–35 Months — United States, 2006. Morbidity and Mortality Weekly Reports 56, 880-885 (2006).  

CDC. Pertussis Frequently Asked Questions (Dec. 9, 2013b). Available at http://www.cdc.gov/pertussis/about/faqs.html.

Cherry, J.D. Epidemic Pertussis in 2012 — the Resurgence of a Vaccine-Preventable Disease. New England Journal of Medicine 367, 785-787 (2012).

Gintis, H., Bowles, S., Boyd, R.T. & Fehr, E. eds. Moral Sentiments and Material Interests: The Foundations of Cooperation in Economic Life (MIT Press, Cambridge, Mass., 2004).

Glanz, J.M., et al. A Population-Based Cohort Study of Undervaccination in 8 Managed Care Organizations across the United States. JAMA Pediatrics 167, 274-281 (2013).

Hershey, J.C., Asch, D.A., Thumasathit, T., Meszaros, J. & Waters, V.V. The Roles of Altruism, Free Riding, and Bandwagoning in Vaccination Decisions. Organ Behav Hum Dec 59, 177-187 (1994).

Kahan, D.M. The Logic of Reciprocity. in Moral Sentiments and Material Interests: The Foundations of Cooperation in Economic Life (eds. H. Gintis, S. Bowles, R. Boyd & E. Fehr) 339-378 (MIT Press, Cambridge, MA, 2004).

Mnookin, S. The Panic Virus : A True Story of Medicine, Science, and Fear (Simon & Schuster, New York, 2011).

Olson, M. The Logic of Collective Action: Public Goods and the Theory of Groups (Harvard University Press, Cambridge, MA, 1965).

Omer, S.B., et al. Geographic Clustering of Nonmedical Exemptions to School Immunization Requirements and Associations with Geographic Clustering of Pertussis. American Journal of Epidemiology 168, 1389-1396 (2008).

Opel, D.J., et al. The Relationship between Parent Attitudes About Childhood Vaccines Survey Scores and Future Child Immunization Status: A Validation Study. JAMA Pediatrics 167, 1065-1071 (2013).

Opel, D.J., et al. Validity and reliability of a survey to identify vaccine-hesitant parents. Vaccine 29, 6598-6605 (2011b). 

Monday
Jan272014

Who fears childhood vaccines and why? Research report & project

Just posted a new report, Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Empirical Assessment. It presents the results of a large (N = 2,300) national study of the public's perception of the risks and benefits of childhood vaccines. The study also includes an experimental component that examines how those perceptions are influenced by "ad hoc" risk communication -- information from popular sources that features empirically uninformed claims about the extent, nature, and consequences of public concern about vaccine risks (there's very little concern to speak of, and views do not vary meaningfully across political or cultural groups).

The Report is part of a new CCP project on "Protecting the Vaccine Science Communication Environment." The project has its own page, which explains the project mission and links to various content.

I'll likely be featuring bits & pieces of the Report in the blog over the next couple of weeks.  I'm eager not merely to alert potentially interested readers that it is available but also to solicit comments, questions, and proposals for additional analyses.  Indeed, I anticipate issuing "updates" to the Report based on such feedback.

Here is the Report "Executive Summary":

Executive Summary

This Report presents empirical evidence relevant to assessing the claim—reported widely in the media and other sources—that the public is growing increasingly anxious about the safety of childhood vaccinations. The Report presents two principal findings: first, that vaccine risks are neither a matter of concern for the vast majority of the public nor an issue of contention among recognizable subgroups; and second, that ad hoc forms of risk communication that assert there is mounting resistance to childhood immunizations themselves pose a risk of creating misimpressions and arousing sensibilities that could culturally polarize the public and diminish motivation to cooperate with universal vaccination programs.

The basis for these findings was a study of a demographically diverse sample of 2,300 U.S. adults. In a survey component administered to a nationally representative 800-person subsample, the study found a high degree of consensus that vaccine risks are low and their benefits high. These perceptions, the data suggest, reflect the influence of a pervasively positive and widely shared affective orientation toward vaccines. This same affective orientation is reflected in widespread support for universal immunization and expressions of trust in the judgment of public health officials and professionals.

There was a modest minority of respondents who held a negative orientation toward vaccines. These respondents, however, could not be characterized as belonging to any recognizable subgroup identified by demographic characteristics, religiosity, science comprehension, or political or cultural outlooks. Indeed, groups bitterly divided over other science issues, including climate change and human evolution, all saw vaccine risks as low and vaccine benefits as high. Even within those groups, in other words, individuals hostile to childhood vaccinations are outliers.

In an experimental component administered to the entire sample, the study examined the impact of media and other reports that warn of escalating public concern over vaccine safety. Such information induced study participants to substantially underestimate vaccination rates and to substantially overestimate the proportion of parents invoking “exemptions” to universal immunization policies. This result is troubling because existing research shows that the motivation to contribute to collective goods, such as the herd immunity conferred by mass vaccination, declines when members of the public perceive that others are refusing to contribute. In contrast, exposure to a communication patterned on a typical CDC press statement induced subjects to form estimates much closer to the actual U.S. vaccination rate (90% or above for over a decade) and the actual proportion of children receiving no vaccinations (1%).

The experiment also examined the effect of information patterned on popular sources that link the belief that vaccines cause autism to disbelief in evolution and climate change. Among study subjects exposed to this information, perceptions of vaccine risks showed signs of dividing along the same cultural lines that inform disputes over highly contested societal issues, including the dangers of climate change, the consequences of drug legalization, and the impact of educating high school students about birth control. This result is also troubling: group-based conflicts are known to create strong psychological pressures that interfere with the normally reliable capacity that members of the public use to recognize valid decision-relevant science. This very dynamic is thought to have affected acceptance of the HPV vaccine.

Based on these findings the Report offers a series of recommendations. The most important is that the public health establishment play a more active leadership role in risk communication. Governmental agencies and professional groups should (1) promote the use of valid and appropriately focused empirical methods for investigating vaccine-risk perceptions and formulating responsive risk communication strategies; (2) discourage ad hoc risk communication based on impressionistic or psychometrically invalid alternatives to these methods; (3) publicize the persistently high rates of childhood vaccination and high levels of public support for universal immunization in the U.S.; and (4) correct ad hoc communicators who misrepresent vaccination coverage and its relationship to the incidence of childhood diseases.


Tuesday
Jan212014

MAPKIA! Episode 31 "Answer": culturally programmed risk predispositions alert to "fracking" but say "enh" (pretty much) to GM foods

Okay!  "Tomorrow" has arrived, which means it's time to reveal the "answer" to "yesterday's" "MAPKIA!" episode.

As you no doubt recall, the question was ...

(i) What is the relationship between environmental-risk predispositions, as measured by ENVRISK_SCALE, and perceptions of GM food risks and fracking, respectively? And (ii), how, if at all, does science comprehension, as measured by SCICOMP, affect the relationship between people's environmental-risk predispositions and their perceptions of the dangers posed by GM food and fracking, respectively?

What made this an interesting question, I thought, was that both "fracking" and GM foods are novel risk sources.

If you read this blog ... Hmmm...

I was going to say if you read this blog this might surprise you, because in that case you have a weirdly off-the-scale degree of interest in political debates over environmental risks and thus are grossly over-exposed to people discussing and arguing about fracking and GM food risks and what "the public" thinks about the same.  

But if you do regularly read this blog, then you, unlike most of the other weird people who fit that description, actually know that most Americans haven't heard of fracking and aren't too sure what GM foods are either.

Indeed, if you regularly read this blog (why do you? weird!), then you know that the claim "GM foods are to liberals what climate change is to conservatives!!" is an internet meme with no genuine empirical substance.  I've reported data multiple times showing that GM foods do not meaningfully divide ordinary members of the public along partisan or cultural lines.  The idea that they do is not a fact but a "rule" that one must accept to play a parlor game (one much less interesting than "MAPKIA!") that consists in coming up with just-so explanations for non-existent trends in public opinion.

But I thought, hey, let's give the claim that GM foods are politically polarized etc. as sympathetic a trial as possible. Let's take a look after turning up the resolution of our "cultural risk predisposition" microscope and see if there's anything going on. 

To make what I mean by that a bit clearer, let's step back and talk about different ways to measure latent risk predispositions.

"Cultural cognition" is one framework a person genuinely interested in facts about risk perceptions can use to operationalize the hypothesis that motivated reasoning shapes individuals' perceptions of culturally or politically contested risks.

What's distinctive about cultural cognition -- or at least most distinctive about it -- is how it specifies the latent motivating disposition.  Building on Douglas and Wildavsky's "cultural theory of risk," the cultural cognition framework posits that individuals will assess evidence (all kinds, from the inferences they draw from empirical data to the impressions they form with their own senses) in a manner that reinforces their connection to affinity groups, whose members share values or cultural worldviews that can be characterized along two dimensions--"hierarchy-egalitarianism" and "individualism-communitarianism."  Attitudinal scales, consisting of individual survey items, are used to measure the unobservable or latent risk predispositions that "motivate" this style of assessing information.

But there are other ways to operationalize the "motivated reasoning" explanation for conflict over risk.  E.g., one could treat conventional left-right political outlooks as the "motivator," and measure the predispositions that they generate with valid indicators, such as party identification and self-reported liberal-conservative ideology.

Do that, and in my view you aren't offering a different explanation for public controversy over risk and like facts. Rather you are just applying a different measurement scheme.

And for the most part, that scheme is inferior to the one associated with cultural cognition. By that, I mean (others might have other criteria for assessment, but to me these are the only ones that are worth any thoughtful person's time) that the cultural worldview measures of latent risk predispositions have more utility in explaining, predicting, and fashioning prescriptions than does any scheme founded on left-right ideology.

I've illustrated this before by showing how much left-right measures understate the degree of cultural polarization that exists among ordinary, relatively nonpartisan members of the public (the vast number who are watching America's Funniest Pet Videos when tiny audiences tune in to either Maddow or O'Reilly) on certain issues, including climate change.

Cultural worldviews are more discerning if one is trying to measure the unobserved or latent group affinities at work in this setting. 

But certainly it should be possible to come up with even more discerning measures still. In fact, in between blog posts, that's all I spend my time on (that and listening to Freddie Mercury albums).

In a previous blog post, I referred to an alternative measurement strategy that I identified with Leiserowitz's notion of "interpretive communities."  In this approach, one measures the latent, shared risk predisposition of the different affinity groups' members by assessing their risk perceptions directly.  The risk perceptions are the indicators for the scale one forms to explore variance and test hypotheses about its sources and impact.

I formed a set of "interpretive community" measures by running factor analysis on a battery of risk perceptions assessed with the "industrial strength" measure.  The analysis identified two orthogonal latent "factors," which, based on their respective indicators, I labeled the "public safety" and "social-deviancy" risk predispositions.
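For the data-curious among the 14 billion readers, here's a minimal sketch (in Python, on made-up data) of that kind of two-factor extraction. A varimax rotation keeps the factors orthogonal, as described above; the item names and data are hypothetical stand-ins, not the actual CCP battery.

```python
# A sketch only: orthogonal two-factor extraction over synthetic risk ratings.
# Item names and data are hypothetical stand-ins, not the CCP battery.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

rng = np.random.default_rng(0)
items = ["nuclear", "air_pollution", "toxic_waste",      # "public safety"-ish
         "marijuana", "pornography", "gun_ownership"]    # "social deviancy"-ish
risks = pd.DataFrame(rng.integers(0, 8, size=(500, len(items))), columns=items)

fa = FactorAnalyzer(n_factors=2, rotation="varimax")  # varimax = orthogonal factors
fa.fit(risks)

# Loadings show which risk items indicate which latent predisposition
loadings = pd.DataFrame(fa.loadings_, index=items,
                        columns=["public_safety", "social_deviancy"])
print(loadings.round(2))

# Per-respondent factor scores serve as the "interpretive community" measures
ic_scores = fa.transform(risks)
```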

How useful is this strategy for explaining, predicting, and forming prescriptions relating to contested risk?

The answer is "not at all" if one is interested in explaining etc. any of the risk perceptions that are the indicators of the "interpretive community" scale.  If one goes about things that way, then the explanans -- the interpretive community (IC) scale -- has been analytically derived from the explanandum -- i.e., the risk one is trying to explain. This approach is obviously circular, and can yield no meaningful insight.

But if one is trying to make sense of a novel or in any case not yet well understood risk perception, then a latent-measurement strategy like the IC one could well be quite helpful.

In that case, because the risk perception that one is interested in examining is not an indicator of the IC scale, there won't be the circularity that I just described.

In addition, the IC risk measure is likely to be more discerning with respect to that risk than the cultural cognition worldview scales.  

That's because individual risk perceptions are necessarily even more proximate, measurementwise, to the latent risk-perception predisposition that generates them than are latent-variable indicators relating to values and other individual characteristics.

Accordingly, if we think the relationship between a motivating predisposition and a risk perception might be weak -- or if we just aren't sure what the relationship might be -- then it might be quite sensible to use an IC method to measure the predisposition.

The inferences we'll be able to draw about why any relationship exists will be less suggestive of the operative social and psychological influences than ones we could have drawn if we measured the predisposition with indicators more remote ("distal") from individual risk perceptions. But if a valid IC scale picks up a relationship that is too weak to have registered otherwise, then we'll know at least a bit more than we would have.  And if nothing shows up, we can be even more confident that the risk perception in question just isn't one that originates in the sort of dynamics that generate cultural cognition & like forms of motivated reasoning. . . . 

So I thought I'd try an IC approach for genetically modified foods rather than just repeat for the billionth time that there isn't any reason for characterizing them as a source of meaningful public conflict, much less one that pits "anti-science scared liberals" against conservatives. 

I formed a simple aggregate Likert scale by normalizing the sum of the (normalized) scores on responses to the industrial-strength risk perception measure as applied to global warming, nuclear power, toxic waste disposal, and air pollution.  I confirmed not only that the resulting scale was highly reliable (Cronbach's α = 0.82) but also that it generated a sharp division among individuals whose cultural outlooks-- "egalitarian communitarian" and "hierarch individualist," respectively--tend to divide over environmental and technological risks.
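In case you want to play along at home, here's a minimal sketch of that construction on synthetic data: normalize the items, sum them, normalize the sum, and check reliability with Cronbach's α computed from the standard formula. The column names are hypothetical.

```python
# A sketch of the ENVRISK_SCALE construction on synthetic data; the four
# column names are hypothetical stand-ins for the industrial-strength items.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
items = ["global_warming", "nuclear", "toxic_waste", "air_pollution"]
df = pd.DataFrame(rng.integers(0, 8, size=(500, len(items))), columns=items)

def cronbach_alpha(frame: pd.DataFrame) -> float:
    """Standard formula: alpha = k/(k-1) * (1 - sum(item vars)/var(total))."""
    k = frame.shape[1]
    return (k / (k - 1)) * (1 - frame.var(ddof=1).sum()
                            / frame.sum(axis=1).var(ddof=1))

z = (df - df.mean()) / df.std(ddof=1)          # normalize each item
total = z.sum(axis=1)                          # sum the normalized scores
df["ENVRISK_SCALE"] = (total - total.mean()) / total.std(ddof=1)  # normalize the sum

print(f"Cronbach's alpha = {cronbach_alpha(df[items]):.2f}")
```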

I confirmed too that the degree of cultural division associated with these risks increases as people with these outlooks score higher on a science-comprehension measure -- as one would expect if cultural cognition is motivating individuals to use their critical reasoning abilities to form identity-congruent rather than truth-congruent beliefs.

That gives me confidence that ENVRISK_SCALE, the aggregate Likert measure, supplies the high-resolution instrument I was after to examine GM food risk perceptions, and fracking ones, too, just for fun.

To appreciate how cool the view through ENVRISK_SCALE is, consider first the blurry, boring one you get with a right-left political-outlook scale, which, as I indicated, supplies only a low-resolution measurement of the relevant motivating dispositions.

These scatterplots array members of the 1800-or-so-member, nationally representative sample with respect to their right-left political outlooks, measured with a composite scale formed by aggregating their responses to a party-identification measure and to a liberal-conservative ideology measure, and their perceptions of global warming, fracking, and GM food risks, all of which are assessed with the industrial-strength measure.

The visible diagonal pattern formed by the observations, which are colored "warm" red & orange for "high" risk concern and "cold" green/blue for "low," shows that there is a strong right-left political influence on climate-change risk perceptions.

By the same token, the absence of much of a diagonal pattern for GM food risk perceptions illustrates how trivially political outlooks influence them.

To quantify this, I plotted regression lines, and also reported the R^2's, which reflect the "percentage of variance" in the respective risk perceptions (models here) "explained" by the right-left political outlook measure.  In the case of global warming, left-right outlooks explain an "impressively large!" 42% of the variance.  For GM food risks, political outlooks explain a humiliatingly small 2%.... But hey, don't let facts get in the way if you want to keep "explaining" why liberals are so worried about GM food risks!
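For concreteness, here's a sketch of that R^2 computation: simple OLS of each risk perception on the composite left-right scale. The data below are simulated with made-up effect sizes chosen only so the code runs and the output resembles the pattern described; they are not the CCP data.

```python
# A sketch of the R^2 comparison on simulated data. "conservrepub" stands in
# for the composite left-right scale; the effect sizes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1800
df = pd.DataFrame({"conservrepub": rng.normal(size=n)})
df["global_warming"] = -0.85 * df.conservrepub + rng.normal(size=n)  # strong signal
df["fracking"]       = -0.65 * df.conservrepub + rng.normal(size=n)  # moderate
df["gm_food"]        = -0.15 * df.conservrepub + rng.normal(size=n)  # trivial

for risk in ["global_warming", "fracking", "gm_food"]:
    r2 = smf.ols(f"{risk} ~ conservrepub", data=df).fit().rsquared
    print(f"{risk:>14s}: R^2 = {r2:.2f}")
```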

Now, interestingly, right-left political outlooks explain 30% of the variance in fracking risk perceptions.  That's also "impressively large!"  Seriously, it is, because as I said, most members of the public don't know much if anything about fracking; I suspect at least 50% had never heard of it before the study!

I could turn up the resolution with cultural outlook measures but I've done that a bunch of times in the past and not seen anything interesting on GM foods.

So now let's zoom in with the even higher-resolution ENVRISK_SCALE.

Here I've just plotted fitted regression lines for the sample as a whole, and lowess ones for those subjects in the bottom 50% & top 10% on the "science comprehension" scale. I've left out global warming, for as I indicated, it makes zero sense to use an attitudinal scale to explain variance in one of its indicators.
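If you're curious how such a graph gets assembled, here's a sketch: lowess fits computed separately for the bottom-50% and top-10% science-comprehension subsamples. Everything in it, variable names and the built-in interaction included, is a synthetic stand-in, not the CCP data.

```python
# A sketch of the lowess-by-subgroup plot on simulated data; all variable
# names and the simulated interaction are stand-ins.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
n = 1800
df = pd.DataFrame({"ENVRISK_SCALE": rng.normal(size=n),
                   "scicomp": rng.normal(size=n)})
# Simulate the hypothesized interaction: the predisposition matters more
# for respondents higher in science comprehension
df["fracking"] = (0.4 + 0.3 * df.scicomp) * df.ENVRISK_SCALE + rng.normal(size=n)

subgroups = [(df.scicomp <= df.scicomp.quantile(0.50), "bottom 50% SCICOMP"),
             (df.scicomp >= df.scicomp.quantile(0.90), "top 10% SCICOMP")]
for mask, label in subgroups:
    grp = df[mask]
    fit = lowess(grp["fracking"], grp["ENVRISK_SCALE"], frac=0.8)  # sorted (x, yhat)
    plt.plot(fit[:, 0], fit[:, 1], label=label)

plt.xlabel("ENVRISK_SCALE")
plt.ylabel("fracking risk perception")
plt.legend()
plt.show()
```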

Clearly, ENVRISK_SCALE is more discerning than are right-left political outlooks.  The R^2s have gone up a lot!

Indeed, at this point, I'm willing to accept that something at least slightly interesting seems to be going on with GM foods.  There are no "hard and fast" rules for assessing when an R^2 is "impressively large!" (I think the main value of R^2 is in comparing the relative fit or explanatory power of models, in fact).  But my practical sense is that the "action" ENVRISK_SCALE is picking up is indeed meaningful, and suggestive of at least a weak predisposition to worry among individuals, mainly egalitarian communitarians, who are on the "risk concerned" side of issues like climate change and nuclear power.

The impact of science comprehension is also quite revealing, however, and cuts the other way!

As one would (or ought to) expect for risk perceptions that genuinely trigger motivated reasoning, science comprehension magnifies the polarizing effect of the disposition measured by ENVRISK_SCALE for fracking.

But it doesn't for GM foods.  Science comprehension predicts less risk concern, but it does so pretty uniformly across the range of the disposition measured by ENVRISK_SCALE.  

That suggests positions on GM foods aren't particularly important to anyone's identity.  If they were, then we'd expect the most science-comprehending members of competing groups to be picking up the scent of incipient conflict & assuming their usual vanguard role.

So on balance, I'm a little more open to the idea that GM foods could be a source of meaningful societal conflict--but only a tiny bit more.  More importantly, I'm less sure of what I believed than before & anticipate that someone or something might well surprise me here -- that would be great.

I'm really excited, though, about fracking!

Fracking already seems to warrant being viewed as a matter of cultural dispute despite its relative novelty.  There's something about it that jolts individuals into assimilating their impressions of it to the ones they have on a cluster of very familiar contested risks (climate, nuclear, air pollution, chemical wastes) that are the focus of ENVRISK_SCALE.  That the most science-comprehending individuals are even more polarized on fracking suggests that the future for fracking might well look a lot like that for climate change.

As I adverted to last time, it's possible -- likely even -- that the wording of the fracking item, by referring to "natural gas" being "extracted" from the earth, helped to cue relatively unfamiliar or even completely unfamiliar respondents as to what position to form.  But I think the settings in which people are likely to encounter information about fracking are likely to be comparably rich in such cues.

So watch out fracking industry!  And everyone else, for that matter.

Well, who won this particular "MAPKIA!" contest?

I'm going to have to say no one.

There were literally thousands of entries, most sent in via postcards from around the globe.

But for the most part, people just assumed that GM food risk perceptions would behave like the other risk perceptions measured by ENVRISK_SCALE, both in the nature and extent of their variance and in their interaction with science comprehension.

Given the hundreds of thousands of Macanese children who never miss a "MAPKIA!" episode and who understandably view its players as role models, I can't in good conscience declare anyone the winner under these circumstances! 

As I've emphasized -- zillions of times -- cultural polarization on risks is the exception and not the rule. Ignoring the denominator -- as commentators sadly do all too often -- makes cogent explanations of this dynamic impossible.

No problem whatsoever, of course, to predict a polarized future for GM food risks. But we're not there yet, and any interesting prediction of why that's where we'll end up would have to reflect a decent theoretical account of why GM foods will emerge as one of the lucky few risk sources that get to travel down the polarization path when so many don't.

Feel free to file your appeals, however, in the comments section!

Friday
Jan172014

MAPKIA! Episode 31: what is the relationship between "environmental risk perception" predispositions, science comprehension & perceptions of the risks of (a) fracking & (b) GM foods?!

Example MAPKIA winner's prize (actual prize may differ)

Okay everybody!

Time for another episode of Macau's favorite game show...: "Make a prediction, know it all!," or "MAPKIA!"!

By now all 14 billion regular readers of this blog can recite the rules of "MAPKIA!" by heart, but here they are for the 16,022 new 2014 subscribers:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data.  Then, you, the players, will make predictions and explain the basis for them.  The answer will be posted "tomorrow."  The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation.  (Cogency will be judged, of course, by a panel of experts.)  

The motivation for this week's show came from a twitter exchange between super-insightful psychologist Daniel Gilbert & others on whether "liberals" are "anti-science" on GM Foods.

Kind of ruins the "motivated-reasoning mirror on the wall, who is the most anti-science of all?!" game, but I can't help resorting to data whenever I catch an episode of that particular show.

In this case, however, the data surprised me! (Shit--weird things tend to happen when I say I am surprised by my data.... Oh well, too late.)

So I figured I'd give others a chance to play "MAPKIA!" & see if they, unlike me, could accurately foresee what the data would say.

There's some background/windup here, so bear with me!

(1) Let's start by constructing a simple scale for measuring "environmental risk perception" predispositions generally.  Members of an N = 2000 nationally representative sample of individuals recruited last summer to take part in CCP studies responded to a battery of "industrial strength" risk perception items, including ones on global warming, air pollution, nuclear power, and disposal of toxic chemical wastes.  The responses to those particular items formed a highly reliable (Cronbach's α = 0.82) aggregate Likert scale, which I labeled ... "ENVRISK_SCALE."

(2) ENVRISK_SCALE can be viewed as measuring a latent or unobserved predisposition toward culturally polarizing environmental risks.  That was my goal in forming it.

Just to confirm that I was measuring what I thought I was measuring, I regressed ENVRISK_SCALE on the "hierarchy-egalitarian" and "individualist-communitarian" worldview scales.  As expected, both scales were negatively associated with ENVRISK_SCALE -- i.e., Egalitarian Communitarians were risk sensitive, and Hierarch Individualists risk dismissive. The model R^2 was an "impressively large!" 0.43.

Moreover, as every school-boy or -girl in Macau would have predicted, these effects interact with science comprehension, an aptitude measured with SCICOMP, a composite formed from the NSF's "science literacy" indicators & a long version of Frederick's "cognitive reflection test." That is, consistent with the signature of "expressive rationality," the polarizing effect of the cultural worldviews grows even more intense as subjects' science comprehension scores increase.

Take a look!
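In regression terms, that's just an OLS model with worldview × science-comprehension product terms. Here's a sketch on simulated data; the variable names and effect sizes are stand-ins for the CCP variables, chosen so the output roughly mimics the reported pattern.

```python
# A sketch of the worldview x science-comprehension interaction model, with
# synthetic stand-ins for the CCP variables (hierarch, individ, scicomp).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({"hierarch": rng.normal(size=n),
                   "individ":  rng.normal(size=n),
                   "scicomp":  rng.normal(size=n)})
# Negative main effects (egalitarian communitarians more risk sensitive) and
# negative interactions (polarization grows with science comprehension)
df["envrisk"] = (-(0.6 + 0.25 * df.scicomp) * df.hierarch
                 - (0.5 + 0.25 * df.scicomp) * df.individ
                 + rng.normal(size=n))

m = smf.ols("envrisk ~ hierarch*scicomp + individ*scicomp", data=df).fit()
print(m.summary().tables[1])
print(f"R^2 = {m.rsquared:.2f}")
```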

Okay! We are almost ready for the "MAPKIA!" question.  

In addition to the global warming, nuclear power, air pollution, and toxic waste disposal items, the survey instrument also had "industrial strength" measures for both fracking & GM foods. That is, the respondents were asked to indicate "how much risk do you believe" each of those two "pose[] to human health, safety, or prosperity" on an eight-point scale (0 “no risk at all”; 1 “Very low risk”;  2 “Low risk”; 3 “Between low and moderate risk”; 4 “Moderate risk”; 5 “Between moderate and high risk”; 6 “High risk”; 7 “Very high risk”).
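For anyone coding along, those options map onto the 0–7 integers like so (a convenience sketch; wording per the instrument as quoted above):

```python
# The eight response options of the industrial-strength measure, as quoted
# above, expressed as a simple coding map.
RISK_SCALE = {
    0: "No risk at all",
    1: "Very low risk",
    2: "Low risk",
    3: "Between low and moderate risk",
    4: "Moderate risk",
    5: "Between moderate and high risk",
    6: "High risk",
    7: "Very high risk",
}
```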

I suspected that at least half of the subjects would have no idea what "fracking" was -- after all, like 50% of the rest of the country, 50% of the respondents didn't know the length of the term of a U.S. Senator.

So when respondents got to this particular entry on the randomly ordered (separate page each) list of two dozen or so putative risk sources, they were asked to indicate the seriousness of the risk posed by " 'fracking'  (extraction of natural gas by hydraulic fracturing)."

I didn't use any analogous hints for GM foods.  Respondents were simply instructed to indicate how serious they thought the risks posed by "genetically modified food" were.

But in fact, GM foods are also a fairly novel risk source. Whether they threaten human health is another issue that most ordinary members of the public have given little if any thought to.


Because both "fracking" & GM food risks aren't nearly so salient -- aren't nearly so entangled in relentless, high-profile forms of cultural conflict-- as global warming, nuclear power, air pollution, or even toxic waste disposal, it would be surprising if cultural worldviews explained a lot of variance in individuals' perceptions of how dangerous they are.

If we really want to give these risk perceptions a "fair chance" to show that they are responsive to the gravitational force of cultural contestation, then we need to turn up the resolution of our measuring instrument to compensate for the remoteness of fracking and GM foods from the center of everyday tribal rivalry.

ENVRISK_SCALE fits the bill. The risk perception items that are its indicators are necessarily even more proximate to whatever the unobserved or latent group affinity is generating the cultural cognition of risk than are the cultural worldview measures.  Why not be really generous, I thought in my own know-it-all way as I reflected on the DG twitter colloquy, & use a culturally infused environmental risk perception measure to show what the evidence really has to say about who fears GM foods & why? 

So now the question, which has two subparts:

(i) What is the relationship between environmental-risk predispositions, as measured by ENVRISK_SCALE, and perceptions of GM food risks and fracking, respectively? And (ii), how, if at all, does respondents' level of science comprehension, as measured by SCICOMP, affect the relationship between their environmental-risk predispositions and their perceptions of the dangers posed by GM food and fracking, respectively?

Ready ... get set ..."MAPKIA!" 

Thursday
Jan162014

Secular cultural trends punctuated by noisy, emotional peaks & valleys: surveying the psychology landscape of mass opinion, mass shootings, & gun control

Really cool new working paper by Josh Blackman & Shelby Baird on the psychology of mass public opinion on guns.  

Based on a disciplined synthesis of decades of survey data in relation to mass shooting events, plus a textured case study of popular reactions to the Newtown shooting, B&B construct an interesting & plausible model of the psychological dynamics that shape popular support for gun control.

The key pieces consist of [1] an aggregate societal demand for gun restrictions, which comprises a vectoring (essentially) of culturally grounded predispositions; [2] a collection of risk-perception heuristics that, interacting with cultural predispositions, regulate popular attention and reaction to information on gun risks and the efficacy of gun regulation; and [3] sporadic mass shooting events that, feeding on [2], ignite a conflagration of political activity that cools and abates in a recurring, predictable pattern ("the shooting cycle"), leaving no net effect on [1].

The political-economy take home is that gun control supporters can't expect to buy much with the currency of popular opinion. As a result of [2], we can expect the drama of gun control to remain stubbornly anchored to the center of the popular-political stage.  But once [1] and [3] are disentangled, B&B conclude, it becomes clear that the popular demand for gun control is relatively weak and growing progressively weaker over time, notwithstanding the predictably intense but temporary spikes generated by mass shootings.

Because of the psychology of gun risks, the prospect of scoring a decisive victory will thus continue to tantalize gun control supporters, who will respond with convulsive enthusiasm to the "opportunities" episodically furnished by mass shooting tragedies.  But according to B&B, they won't get anywhere unless there is "a significant cultural shift" on guns--one the dimensions of which are significant enough to alter [1].  

Indeed, B&B view the prospects of that sort of development as constrained by [2] as well. Advocacy groups will predictably employ culturally partisan and divisive idioms to milk support from the members of groups that are culturally predisposed to see gun risks as high, thereby reinforcing the political motivation of opposing groups to resist gun regulation as an assault on their identities.

There are lots of things to like about this paper.

One is the interesting and compelling explanatory framework B&B construct.  Even if one isn't sure it is right-- or even strongly suspects it is wrong!--engaging with it is a great way to structure one's collection and assessment of evidence that can be used to advance understanding of gun control politics.  In addition, even if one isn't interested in gun control, one can profitably adapt the framework to other "risk" issues, like, say, climate change, where advocacy seems similarly disoriented by the allure of popular-opinion fool's gold.

Another is the solid style of analysis.  B&B didn't conduct an original observational study or an experiment. But they did use valid empirical methods.  That is, they formulated a set of conjectures, identified sources of evidence that could be expected to support an inference as to whether the conjectures were likely true or not, and then collected the evidence and assessed it in a disciplined and transparent manner that admits of engagement by critically reasoning readers.

Contrast this with the "just-add-water-&-stir, instant decision science" that abounds in both popular and academic commentary.  That style of analysis, which aims to mesmerize credulous readers into thinking that their preconceptions are "scientifically supported," is a counterfeit species of empiricism.

To be sure, the sort of "synthetic empirical" analysis that B&B have performed is open to criticism, particularly given the flexibility those who engage in it have to identify confirming and disconfirming forms of secondary evidence.

But no form of valid empirical analysis is free of doubt.  

A smart person will be willing to accept guidance from any valid form of empirical inquiry--that is, from any that is capable of generating more or less reason to believe a proposition than one would otherwise have. Rather than wasting time arguing about "which valid empirical method is best," that person will welcome all of them, combining their results in forming his or her views.

The "gold standard" is the "no gold standard" philosophy of convergent validity.

The final thing to like about this paper: cool graphs!

 

 
