Key Insight

Had a great time yesterday at UCLA, where I was afforded the honor of being asked to give a lecture in the Jacob Marschak Interdisciplinary Colloquium on Mathematics and Behavioral Science.  The audience asked lots of thoughtful questions. Plus I got the opportunity to learn lots of cool things (like how many atoms are in the Sun) from Susanne Lohmann, Mark Kleiman, and others.

I believe they were filming and will upload a video of the event. If that happens, I’ll post the link. For now, here’s a summary (to the best of my recollection) & slides.

1. The science communication problem & the cultural cognition thesis

I am going to offer a synthesis of a body of research findings generated over the course of a decade of collaborative research on public risk perceptions.

The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy-relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others, including (historically) the conflict over nuclear power safety, the continuing debate over the risks of the HPV vaccine, and the never-ending dispute over the efficacy of gun control.

In addition to being annoying (in particular, to scientists—who feel frustratingly ignored—but also to anyone who believes self-government and enlightened policymaking are compatible), the science communication problem is also quite peculiar. The factual questions involved are complex and technical, so maybe it should not surprise us that people disagree about them. But the beliefs about them are not randomly distributed. Rather they seem to come in familiar bundles (“earth not heating up . . . ‘concealed carry’ laws reduce crime”; “nuclear power dangerous . . . death penalty doesn’t deter murder”) that in turn are associated with the co-occurrence of various individual characteristics, including gender, race, region of residence, and ideology (but not really so much income or education), that we identify with discrete cultural styles.

The research I will describe reflects the premise that making sense of these peculiar packages of types of people and sets of factual beliefs is the key to understanding—and solving—the science communication problem. The cultural cognition thesis posits that people’s group commitments are integral to the mental processes through which they apprehend risk.

A Bayesian model of information processing can be used heuristically to make sense of the distinctive features of any proposed cognitive mechanism. In the Bayesian model, an individual exposed to new information revises her prior estimate of the probability of some proposition (expressed in odds) in proportion to the likelihood ratio associated with the new evidence (i.e., how much more consistent the new evidence is with that proposition than with some alternative).
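The odds form of Bayes’ rule described here can be sketched in a few lines of code (all the numbers below are hypothetical, chosen only to show the mechanics):

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# Prior: she regards the proposition as twice as likely true as false (odds 2:1).
prior_odds = 2.0
# New evidence is three times more consistent with the proposition than with the alternative.
likelihood_ratio = 3.0

posterior_odds = update_odds(prior_odds, likelihood_ratio)   # 6.0, i.e., 6:1
posterior_prob = posterior_odds / (1 + posterior_odds)       # ~0.86
```

The key point is that, for an unbiased reasoner, the likelihood ratio is a property of the evidence alone; it does not depend on what she already believes.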

A person experiences confirmation bias when she selectively searches out and credits new information conditional on its agreement with her existing beliefs. In effect, she is not updating her prior beliefs based on the weight of the new evidence; she is using her prior beliefs to determine what weight the new evidence should be assigned. Because of this endogeneity between priors and likelihood ratio, she will fail to correct a mistaken belief, or fail to correct it as quickly as she should, despite the availability of evidence that conflicts with that belief.
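One way to make that endogeneity concrete is a toy sketch in which the likelihood ratio assigned to evidence is shrunk whenever the evidence cuts against the reasoner’s current beliefs (the bias function and all the numbers here are hypothetical illustrations, not anything from the research):

```python
def biased_likelihood_ratio(true_lr: float, prior_odds: float, bias: float) -> float:
    """Shrink the force of uncongenial evidence toward 1 (i.e., toward 'uninformative')."""
    # Evidence "cuts against" the prior when it points the opposite way from current odds.
    cuts_against = (true_lr < 1) == (prior_odds > 1)
    return true_lr ** (1 - bias) if cuts_against else true_lr

odds = 4.0          # mistaken prior: 4:1 in favor of a false proposition
evidence_lr = 0.5   # each new piece of evidence favors the alternative 2:1
for _ in range(5):
    odds *= biased_likelihood_ratio(evidence_lr, odds, bias=0.8)
# odds is still well above 1; an unbiased updater would be at 4 * 0.5**5 = 0.125
```

The biased reasoner’s odds end up roughly where an unbiased reasoner would be after a single piece of evidence, which is the “fails to correct as quickly as she should” pattern described above.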

The cultural cognition model posits that individuals have “cultural predispositions”—that is, some tendency, shared with others who hold like group commitments, to find some risk claims more congenial than others. In relation to the Bayesian model, we can see cultural predispositions as the source of individuals’ priors. But cultural dispositions also shape information processing: people more readily search out (or are more likely to be exposed to) evidence congenial to their cultural predispositions than evidence noncongenial to them; they also selectively credit or discredit evidence conditional on its congeniality to their cultural predispositions.

Under this model, we will often see what looks like confirmation bias because the same thing that is causing individuals’ priors—cultural predispositions—is shaping their search for and evaluation of new evidence. But in fact, the correlation between priors and likelihood ratio in this model is spurious.

The more consequential distinction between cultural cognition and confirmation bias is that under the former people will be not only stubborn but disagreeable. People’s cultural predispositions are heterogeneous. As a result, people with different values will start with different priors, and thereafter engage in opposing forms of biased search for confirming evidence, and selectively credit and discredit evidence in opposing patterns reflective of their respective cultural commitments.
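A minimal simulation, with entirely hypothetical parameters, of how heterogeneous predispositions plus selective crediting produce this pattern: two groups see the identical stream of mixed evidence, but each discounts the evidence uncongenial to its predisposition, and they drift apart.

```python
import random

def polarize(steps: int = 50, seed: int = 0) -> tuple[float, float]:
    random.seed(seed)
    odds_a, odds_b = 1.0, 1.0           # both groups start undecided (1:1 odds)
    for _ in range(steps):
        lr = random.choice([2.0, 0.5])  # mixed evidence, balanced on average
        # Group A fully credits pro evidence and discounts con evidence;
        # group B is the mirror image.
        odds_a *= lr if lr > 1 else lr ** 0.3
        odds_b *= lr if lr < 1 else lr ** 0.3
    return odds_a, odds_b

a, b = polarize()
# a ends up well above 1:1 and b well below, despite identical evidence
```

Nothing about either group’s processing requires the evidence itself to be one-sided; the divergence comes entirely from the opposing patterns of selective crediting.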

If this is how people behave, we will see the peculiar pattern of group conflict associated with the “science communication problem.”

3. Nanotechnology: culturally biased search & assimilation

CCP tested this model by studying the formation of nanotechnology risk perceptions. In the study, we found that individuals exposed to information on nanotechnology polarized relative to uninformed subjects along lines that reflected the environmental and technological risks associated with their cultural groups. We also found that the observed association between “familiarity” with nanotechnology and the perception that its benefits outweigh its risks was spurious: both the disposition to learn about nanotechnology before the study and the disposition to react favorably to information were caused by the (pro-technology) individualistic worldview.

This result fits the cultural cognition model. Cultural predispositions toward environmental and technological risks predicted how likely subjects of different outlooks were to search out information on a novel technology and the differential weight (the “likelihood ratio,” in Bayesian terms) they’d give to information conditional on being exposed to it.

a. In one study, CCP found that cultural cognition shapes perceptions of scientific consensus. Experiment subjects were more likely to recognize a university-trained scientist as an “expert” whose views were entitled to weight—on climate change, nuclear power, and gun control—if the scientist was depicted as holding the position that was predominant in the subjects’ cultural group. In effect, subjects were selectively crediting or discrediting (or modifying the likelihood ratio assigned to) evidence of what “expert scientists” believe on these topics in a manner congenial to their cultural outlooks. If this is how they react in the real world to evidence of what scientists believe, we should expect them to be culturally polarized on what scientific consensus is.  And they are, we found in an observational component of the study.  These results also cast doubt on the claim that the science communication problem reflects the unwillingness of one group to abide by scientific consensus, as well as any suggestion that one group is better than another at perceiving what scientific consensus is on polarized issues.

b. In another study, CCP found that science comprehension magnifies cultural polarization. This is contrary to the common view that conflict over climate change is a consequence of bounded rationality. The dynamics of cultural cognition operate in both heuristic-driven “System 1” processing and reflective “System 2” processing. (The result has also been corroborated experimentally.)

5.  The “tragedy of the science communications commons”

The science communication problem can be understood to involve a conflict between two levels of rationality. Because their personal behavior as consumers or voters is of no material consequence, individuals don’t increase their own exposure to harm or that of anyone else when they make a “mistake” about climate science or like forms of evidence on societal risks. But they do face significant reputational and like costs if they form a view at odds with the one that predominates in their group. Accordingly, it is rational at the individual level for individuals to attend to information in a manner that reinforces their connection to their group.  This is collectively irrational, however, for if everyone forms his or her perception of risk in this way, democratic policymaking is less likely to converge on policies that reflect the best available evidence.