Key Insight

In one CCP study, we found that cultural polarization over climate change is magnified by science literacy (numeracy, too). That is, as culturally diverse (but perfectly ordinary, and not particularly partisan) members of the public become more science literate, they don’t converge on the dangers that global warming poses but rather grow even more divided.

Not what you’d expect if you thought that the source of the climate change controversy was a deficit in the public’s ability to comprehend science.

But the culturally polarizing effect of science literacy isn’t actually that unusual. It’s definitely not the case that all risk issues generate cultural polarization. But among those that do, division is often most intense among members of the public who are the most knowledgeable about science in general.

Actually, in the paper in which we reported the culturally polarizing effect of science literacy with respect to perceptions of climate change risks, we also reported data that showed the same phenomenon occurring with respect to perceptions of nuclear power risks.

Well, here are some more data that help to illustrate the relationship between science literacy and cultural polarization. They come from a survey of a nationally representative sample of 2,000 persons conducted in May and June of this year (that’s right–even more fresh data! Mmmmmm mmmm!).

These figures illustrate how public perceptions of different risks vary in relation to science literacy. Risk perceptions were measured with the “industrial strength measure.” Science literacy was assessed with the National Science Foundation’s “Science Indicators,” a battery of questions commonly used to measure general factual and conceptual knowledge about science.

For each risk, I plotted (using a locally weighted regression smoother, a great device for conveying the profile of the raw data) the relationship between risk perception and science literacy for the sample as a whole (the dashed grey line), along with the same relationship for the two cultural groups that are most polarized on the indicated risk (group members are identified by their scores in relation to the means on the hierarchy-egalitarian and individualist-communitarian worldview scales).
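For readers curious about the mechanics, here is a minimal sketch of that kind of analysis, using the `lowess` smoother from statsmodels. The variable names and the simulated data are purely illustrative (they are not the CCP survey data); the point is just how one fits a locally weighted curve for the whole sample and separately for each cultural group.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
n = 1000
sci_lit = rng.uniform(0, 1, n)                       # science-literacy score (illustrative 0-1 scale)
group = rng.choice(["egal_comm", "hier_indiv"], n)   # hypothetical cultural-worldview groups
# Simulated pattern of the sort described in the post: the groups
# roughly agree at low literacy and diverge as literacy increases
slope = np.where(group == "egal_comm", 0.5, -0.5)
risk = 0.5 + slope * sci_lit + rng.normal(0, 0.3, n) # perceived risk

grid = np.linspace(0.05, 0.95, 50)                   # common x-grid for all curves
curves = {g: lowess(risk[group == g], sci_lit[group == g], frac=0.6, xvals=grid)
          for g in ("egal_comm", "hier_indiv")}
whole_sample = lowess(risk, sci_lit, frac=0.6, xvals=grid)  # the "dashed grey line"

gap = curves["egal_comm"] - curves["hier_indiv"]     # between-group divergence
print(round(gap[0], 2), round(gap[-1], 2))           # gap grows with literacy
```

The `frac` parameter controls how much of the data each local fit uses; larger values give smoother curves at the cost of local detail. The resulting arrays would then be plotted, one line per group, over a scatter of the raw observations.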

The upper-left panel essentially reproduces the pattern we observed and reported in our Nature Climate Change study. Overall, science literacy has essentially no impact on climate-change risk perceptions. But among egalitarian communitarians and hierarchical individualists–the cultural groups who tend to disagree most intensely on environmental and technological risks–science literacy has offsetting effects with respect to climate change and fracking: it makes egalitarian communitarians credit assertions of risk more, and hierarchical individualists less.

The same basic story applies to the bottom two panels, which look at legalization of marijuana and legalization of prostitution–“social deviancy risks” of the sort that tend to divide hierarchical communitarians and egalitarian individualists.

Neither the level of concern nor the degree of cultural polarization is as intense as for global warming and fracking. But the cultural disagreement does intensify with increasing science literacy (it seems to abate for legalization of prostitution among those highest in science literacy, although the appearance of convergence would have to be statistically interrogated before one could conclude that it is genuine).
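One conventional way to “statistically interrogate” an apparent convergence like that is to fit a regression in which the group gap is allowed to bend with science literacy, and then test the interaction coefficients. The sketch below, on simulated data (not the CCP sample), uses a group × literacy interaction plus a squared term; a reliably nonzero quadratic interaction would indicate that the gap genuinely narrows rather than reflecting noise in the smoother.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "sci_lit": rng.uniform(0, 1, n),   # illustrative 0-1 literacy scale
    "hier": rng.choice([0, 1], n),     # 1 = hypothetical hierarchical-communitarian group
})
# Simulated pattern: the group gap widens mid-scale and closes at the top
bend = 4 * df["sci_lit"] * (1 - df["sci_lit"])       # 0 at both extremes, 1 at 0.5
df["risk"] = 0.5 + np.where(df["hier"] == 1, -0.5, 0.5) * bend + rng.normal(0, 0.3, n)

# Interaction model: does the between-group gap vary, and bend, with literacy?
m = smf.ols("risk ~ hier * (sci_lit + I(sci_lit**2))", data=df).fit()
print(m.params[["hier:sci_lit", "hier:I(sci_lit ** 2)"]])
print(m.pvalues["hier:I(sci_lit ** 2)"])  # small p-value: curvature in the gap is real
```

In practice one would use the continuous worldview scores rather than a binary group indicator, but the logic is the same: the question of whether the lines truly reconverge becomes a hypothesis test on the interaction terms rather than an eyeball judgment.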

What to make of this? Well, again, one interpretation–one supported by the study of cultural cognition generally–is that the source of cultural polarization over risk can’t plausibly be attributed to a deficit in the public’s knowledge or ability to comprehend science.

Instead, it’s caused by antagonistic cultural meanings that become attached to particular risks (and related facts), converting them into badges of membership in and loyalty to important affinity groups.

When that happens, the stake individuals have in maintaining their standing in their group will tend to dominate the stake they have in forming “accurate” understandings of the scientific evidence: mistakes on the latter won’t increase their or anyone else’s level of risk (ordinary individuals’ opinions are not of sufficient consequence to increase or diminish the effects of climate change, etc.), whereas being out of line with one’s group can have huge, and hugely negative, social consequences.

Ordinary individuals will thus attend to information about the risks in question (including, e.g., the positions of “expert” scientists) in patterns that enable them to persist in holding beliefs congruent with their cultural identities. Individuals who enjoy a higher-than-average capacity to understand such information won’t be immune to this effect; on the contrary, they will use their greater knowledge and analytic skills to ferret out identity-supportive bits of information and defend them from attack, and thus form perceptions of risk that are even more reliably aligned with those characteristic of their groups.

That was the argument we made about climate change and science comprehension in our Nature Climate Change study. And I think it generalizes to other culturally contested risks.

But not all societal risks are contested. The number characterized by culturally antagonistic meanings is, as I’ve stressed before, quite small in relation to the number that don’t generate the intense cleavages that characterize climate change, nuclear power, gun control, the HPV vaccine, and (apparently now) fracking.

With respect to those uncontested issues, we shouldn’t expect to see polarization generally. Nor should we expect to see it among the culturally diverse individuals who are highest in science literacy or in other qualities that reflect a greater capacity to comprehend quantitative information.

On the contrary, we should expect such individuals to be even more likely to converge on the best scientific evidence. They might be better able to understand such evidence themselves than people whose comprehension of science is more modest.

But more realistically, I’d say, the reason to expect more convergence among the most science literate, most numerate, and most cognitively reflective citizens is that they are more reliably able to discern who knows what about what.

The amount of decision-relevant science that it is valuable for citizens to make use of in their lives far exceeds the amount they could hope to form a meaningful understanding of. Their ability to make use of such information, then, depends on their capacity to recognize who knows what about what (even scientists must employ this form of perception and recognition in order to engage in the collaborative production of knowledge within their fields).