Three models of risk perception
The scholarly literature on risk perception and communication is dominated by two models. The first is the rational-weigher model, which posits that members of the public, in aggregate and over time, can be expected to process information about risk in a manner that promotes their expected utility (Starr 1969). The second is the irrational-weigher model, which asserts that ordinary members of the public lack the ability to reliably advance their expected utility because their assessment of risk information is constrained by cognitive biases and other manifestations of bounded rationality (Kahneman 2003; Sunstein 2005; Marx et al. 2007; Weber 2006).
Neither of these models cogently explains public conflict over climate change—or a host of other putative societal risks, such as nuclear power, the vaccination of teenage girls for HPV, and the removal of restrictions on carrying concealed handguns in public. Such disputes conspicuously feature partisan divisions over facts that admit of scientific investigation. Nothing in the rational-weigher model predicts that people with different values or opposing political commitments will draw radically different inferences from common information. Likewise, nothing in the irrational-weigher model suggests that people who subscribe to one set of values are any more or less bounded in their rationality than those who subscribe to any other, or that cognitive biases will produce systematic divisions of opinion among such groups.
One explanation for such conflict is the cultural cognition thesis (CCT). CCT says that cultural values are cognitively prior to facts in public risk conflicts: as a result of a complex of interrelated psychological mechanisms, groups of individuals will credit and dismiss evidence of risk in patterns that reflect and reinforce their distinctive understandings of how society should be organized (Kahan, Braman, Cohen, Gastil & Slovic 2010; Jenkins-Smith & Herron 2009). Thus, persons with individualistic values can be expected to be relatively dismissive of environmental and technological risks, which if widely accepted would justify restricting commerce and industry, activities that people with such values hold in high regard. The same goes for individuals with hierarchical values, who see assertions of environmental risk as indictments of social elites. Individuals with egalitarian and communitarian values, in contrast, see commerce and industry as sources of unjust disparity and symbols of noxious self-seeking, and thus readily credit assertions that these activities are hazardous and therefore worthy of regulation (Douglas & Wildavsky 1982). Observational and experimental studies have linked these and comparable sets of outlooks to myriad risk controversies, including the one over climate change (Kahan 2012).
Individuals, on the CCT account, behave not as expected-utility weighers—rational or irrational—but rather as cultural evaluators of risk information (Kahan, Slovic, Braman & Gastil 2006). The beliefs any individual forms on societal risks like climate change—whether right or wrong—do not meaningfully affect his or her personal exposure to those risks. However, precisely because positions on those issues are commonly understood to cohere with allegiance to one or another cultural style, taking a position at odds with the dominant view in his or her cultural group is likely to compromise that individual’s relationship with others on whom that individual depends for emotional and material support. As individuals, citizens are thus likely to do better in their daily lives when they adopt toward putative hazards the stances that express their commitment to values that they share with others, irrespective of the fit between those beliefs and the actuarial magnitudes and probabilities of those risks.
The cultural evaluator model takes issue with the irrational-weigher assumption that popular conflict over risk stems from overreliance on heuristic forms of information processing (Lodge & Taber 2013; Sunstein 2006). Empirical evidence suggests that culturally diverse citizens are indeed reliably guided toward opposing stances by unconscious processing of cues, such as the emotional resonances of arguments and the apparent values of risk communicators (Kahan, Jenkins-Smith & Braman 2011; Jenkins-Smith & Herron 2009; Jenkins-Smith 2001).
But contrary to the picture painted by the irrational-weigher model, ordinary citizens who are equipped and disposed to appraise information in a reflective, analytic manner are not more likely to form beliefs consistent with the best available evidence on risk. Instead, they often become even more culturally polarized, because of the special capacity they have to search out and interpret evidence in patterns that sustain the convergence between their risk perceptions and their group identities (Kahan, Peters, Wittlin, Slovic, Ouellette, Braman & Mandel 2012; Kahan 2013; Kahan, Peters, Dawson & Slovic 2013).
Two channels of science communication
The rational- and irrational-weigher models of risk perception generate competing prescriptions for science communication. The former posits that individuals can be expected, eventually, to form empirically sound positions so long as they are furnished with sufficient and sufficiently accurate information (e.g., Viscusi 1983; Philipson & Posner 1993). The latter asserts that attempts to educate the public about risk are at best futile, since the public lacks the knowledge and capacity to comprehend such information; at worst, such efforts are self-defeating, since ordinary individuals are prone to overreact on the basis of fear and other affective influences on judgment. The better strategy, on this view, is to steer risk policymaking away from democratically accountable actors to politically insulated experts and to “change the subject” when risk issues arise in public debate (Sunstein 2005, p. 125; see also Breyer 1993).
The cultural-evaluator model associated with CCT offers a more nuanced account. It recognizes that when empirical claims about societal risk become suffused with antagonistic cultural meanings, intensified efforts to disseminate sound information are unlikely to generate consensus and can even stimulate conflict.
But those instances are exceptional—indeed, pathological. There are vastly more risk issues—from the hazards of power lines to the side-effects of antibiotics to the tumor-stimulating consequences of cell phones—that avoid becoming broadly entangled with antagonistic cultural meanings. Using the same ability that they reliably employ to seek and follow expert medical treatment when they are ill or expert auto-mechanic service when their car breaks down, the vast majority of ordinary citizens can be counted on in these “normal,” non-pathological cases to discern and conform their beliefs to the best available scientific evidence (Keil 2010).
The cultural-evaluator model therefore counsels a two-channel strategy of science communication. Channel 1 is focused on information content and is informed by the best available understandings of how to convey empirically sound evidence, the basis and significance of which are readily accessible to ordinary citizens (e.g., Gigerenzer 2000; Spiegelhalter, Pearson & Short 2011). Channel 2 focuses on cultural meanings: the myriad cues—from group affinities and antipathies to positive and negative affective resonances to congenial or hostile narrative structures—that individuals unconsciously rely on to determine whether a particular stance toward a putative risk is consistent with their defining commitments. To be effective, science communication must successfully negotiate both channels. That is, in addition to furnishing individuals with valid and pertinent information about how the world works, it must avail itself of the cues necessary to assure individuals that assenting to that information will not estrange them from their communities (Kahan, Slovic, Braman & Gastil 2006; Nisbet 2009).
Jenkins-Smith, H. Modeling stigma: An empirical analysis of nuclear waste images of Nevada. in Risk, Media, and Stigma: Understanding Public Challenges to Modern Science and Technology (eds. Flynn, J., Slovic, P. & Kunreuther, H.) 107-132 (Earthscan, London; Sterling, VA, 2001).
Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) (Springer London, 2012).
Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).
Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).
Marx, S.M., Weber, E.U., Orlove, B.S., Leiserowitz, A., Krantz, D.H., Roncoli, C. & Phillips, J. Communication and mental processes: Experiential and analytic processing of uncertain climate information. Global Environ Chang 17, 47-58 (2007).
Hey, check out the cool comments by Julie Leask & Gord Pennycook on the prospects for "debiasing." I agree the little excerpt here neglects this issue, in particular as a component of the "irrational weigher" model. I said something more in response to Julie & Gord, but for sure not enough to do the issue justice. I said at least a bit more about it in Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013), too. I didn't say enough there either, but the discussion was in anticipation of exactly the points Julie & Gord are raising. Prodded and enlightened by their observations, I'll work out some more thoughts & post them in a separate entry in the "near future" -- maybe I can even entice/provoke one or both of them into responding.
But for now, here is the relevant discussion from IMRCR:
5.3 Implications for counteracting ideologically motivated reasoning
The goal of empirically investigating the sources of political conflict over risk and other policy-consequential facts is not merely to explain this phenomenon but also to aid in discovery of devices for mitigating it. The study described in this paper makes a contribution to that end as well.
It does this primarily by helping to inform hypotheses about how such dynamics might be combated. Many scholars have suggested “debiasing” strategies aimed at correcting the distorting effect of System 1 reasoning on public perceptions of risk (Lilienfeld et al., 2009; Jolls & Sunstein, 2006). Because such distortions are real, and substantially interfere with human wellbeing in myriad domains, pursuit of System 1 debiasing techniques is unquestionably important. Nevertheless, if, as the present study implies, ideologically motivated cognition is not a consequence of an over-reliance on heuristic reasoning, then System 1 debiasing strategies should not be expected to abate polarization over climate change, nuclear power, gun control, the HPV vaccine, or like issues (Kahan, Slovic, Braman & Gastil, 2006).
What is needed instead are interventions that remove the expressive incentives individuals face to form perceptions of risk and related facts on grounds unconnected to the truth of such beliefs (Lessig, 1995). Extending the analysis of previous papers, this one has suggested that ideologically motivated reasoning is in fact expressively rational at the individual level, because it conveys individuals’ membership in and loyalty to groups on whom they depend for emotional, material, and other forms of support (Hillman, 2010; Akerlof & Kranton, 2000).
This account, however, presupposes that beliefs on risks and related facts bear social meanings—that they are, in fact, generally understood (tacitly, at least) to convey that the individuals who espouse them are committed to one group rather than another (Cohen, 2003). Not all risks and policy-relevant facts have this quality; indeed, relatively few do, and on the vast run of ones that do not (e.g., that pasteurization removes infectious agents from milk; that fluoridation of water fights tooth decay; that privatization of the air-traffic control system is inimical to air safety), we do not observe significant degrees of ideological or cultural polarization.
There is little reason to believe, moreover, that the meanings of highly contested facts cannot be revised in a manner that would disconnect particular positions on them from membership in identity-defining groups. One can understand the historical shift in public opinion toward the risks posed by cigarettes (including third-party ones from exposure to secondhand smoke) as having been mediated by informational campaigns aimed at altering the positive meanings that dismissing evidence of the health hazards of smoking expressed in certain subcommunities (United States Public Health Service, 2000; Gusfield 1993).
The expressive account of ideological polarization, then, underscores the value of forming and testing hypotheses about how to regulate the social meaning of risks and related policy-relevant facts. Indeed, research focusing on forecasting techniques for identifying technologies vulnerable to polarizing meanings, on governmental processes for protecting the “science communication environment” from influences that cause such meanings to take hold, and on framing and other strategies for cleaning up that environment once it has been contaminated with polarizing meanings (Kahan, 2012b), is already well underway (Corner, Whitmarsh & Xenias, 2012; Druckman & Bolsen, 2011, 2012; Nisbet & Scheufele, 2009).