Gave talk Wednesday at AGU meeting in San Francisco. Slides here. I was on a panel w/ a bunch of talented scholars doing really great studies on teaching climate science. The substance of what to teach (primarily in context of undergraduate science courses) was quite interesting, but what was really cool was their (data-filled) account of the "test theory" issues they are attacking in developing valid, reliable, and highly discriminant measures of "climate science literacy" ("earth is heating up," "humans causing," "we're screwed," they recognized, don't reliably measure anything other than the attitude "I care/believe in global warming"). My talk wasn't on how to impart climate science literacy but rather on what needs to be done to assure that a democratic society gets the full value out of having civically science literate citizens: protect the science communication environment-- a matter that making citizens science literate does not itself achieve. (Gave another talk later at The Nature Conservancy's "All-Science" event but will have to report on that "tomorrow.") Here's what I more-or-less remember saying at AGU:
If this were the conversation I'm usually a part of, then I'd likely now be playing the role of heretic.
That discussion isn't about how to teach climate science to college students but rather about how to communicate climate risks to the public.
The climate-risk communication orthodoxy attributes public controversy over global warming to a deficit in the public's comprehension of science. The prescription, on this view, is to improve comprehension—either through better science education or through better public science communication.
I’ll call this the “civic science literacy” thesis (or CSL).
I’m basically going to stand CSL on its head.
Public controversy, I want to suggest, is not a consequence of a deficit in public science comprehension; it is a cause of it. Such controversy is a kind of toxin that disables the normally reliable faculties that ordinary citizens use to recognize valid decision-relevant science.
For that reason I'll call this position the “science communication environment” thesis (or SCE). The remedy SCE prescribes is to protect the science communication environment from this form of contamination and to repair it when such protective efforts fail.
This account is based, of course, on data—specifically a set of studies designed to examine the relationship between science comprehension and cultural cognition.
“Cultural cognition” refers to the tendency of people to conform their perceptions of risk to ones that predominate in important affinity groups—ones united by shared values, cultural or political. Cultural cognition has been shown to be an important source of cultural polarization over climate change and various other risks.
In a presentation I made here a couple of years ago, I discussed a study that examined the connection between cultural cognition and science literacy, as measured with the standard NSF Science Indicators battery. In it, we found that polarization measured with reference to cultural values, rather than abating as science literacy increases, grows more intense.
This isn’t what one would expect if one believed—as is perfectly plausible—that cultural cognition is a consequence of a deficit in science comprehension (the CSL position).
The result suggests instead an alternative hypothesis: that people are using their science comprehension capacity to reinforce their commitment to the positions on risk that predominate in their affinity groups, consistent with cultural cognition.
That hypothesis is one we have since explored in experiments. The experiments are designed to “catch” one or another dimension of science comprehension “in the act” of promoting group-convergent rather than truth- or science-convergent beliefs.
In one, we found evidence that “cognitive reflection”—the disposition to engage in “slow” conscious, analytical reasoning as opposed to “fast” intuitive, heuristic reasoning—has that effect.
But the study I want quickly to summarize for you now involves “numeracy” and cultural cognition. “Numeracy” refers not so much to the ability to do math but to the capacity and disposition to use quantitative information to draw valid causal inferences.
In the study, we instructed experiment subjects to analyze results from an experiment. Researchers tested the effectiveness of a skin-rash cream by assigning patients to a “treatment” condition and a “control” condition. They recorded the results in both conditions. Our study subjects were then supposed to figure out whether treatment with the skin cream was more likely to make the patients’ rash “better” or “worse.”
This is a standard “covariance detection” problem. Most people get the wrong answer because they use a “confirmatory hypothesis testing” strategy: they note that more patients’ rashes got better than worse in the treatment condition, and that more got better in the treatment condition than in the control, and conclude that the cream makes the rash get better.
But this heuristic strategy ignores disconfirming evidence in the form of the ratio of positive to negative outcomes in the two conditions. Patients using the skin cream were three times more likely to get better than worse; but those not using the skin cream were in fact five times more likely to get better. Using the skin cream thus makes it more likely that the rash will get worse than not using it.
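The ratio comparison can be made concrete with a small sketch. The counts below are hypothetical, chosen only to be consistent with the 3:1 and 5:1 ratios just described; the actual cell values in the study may differ.

```python
# Hypothetical 2x2 contingency table for the skin-cream experiment.
# Counts are illustrative only, picked to match the ratios in the text.
treatment = {"better": 225, "worse": 75}   # patients who used the cream
control   = {"better": 105, "worse": 21}   # patients who did not

# Confirmatory (heuristic) strategy: compare raw "better" counts.
# 225 > 75 within the treatment group, and 225 > 105 across groups,
# so the cream appears to "work" -- the common wrong answer.

# Correct strategy: compare the better:worse ratio in each condition.
ratio_treatment = treatment["better"] / treatment["worse"]  # 3.0
ratio_control = control["better"] / control["worse"]        # 5.0

# The control ratio is higher, so using the cream is associated with
# a *worse* outcome relative to not using it.
print(ratio_treatment, ratio_control)  # 3.0 5.0
```

The point of the problem is that only the ratio comparison uses the disconfirming cells of the table; attending to the raw “better” counts alone reproduces the confirmatory error.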
By manipulating the column headings in the contingency table, we varied whether the data, properly interpreted, supported one result or the other. As one might expect, subjects in both conditions scoring low in numeracy were highly likely to get the wrong answer on this problem, which has been validated as a predictor of this same kind of error in myriad real-world settings. Indeed, subjects weren’t likely to get the “right” answer unless they scored in about the 90th percentile on numeracy.
We assigned two other groups of subjects to conditions in which they were instructed to analyze the same experiment styled as one involving a gun control ban. We again manipulated the column headings.
You can see that the results in the “gun ban” conditions are comparable to the ones in the skin-rash treatments. But obviously, the pattern is noisier.
The reason is cultural cognition. You can see that in the skin-rash conditions, the relationship between numeracy and getting the right answer was unaffected by right-left political outlooks.
But in the gun-ban conditions, high-numeracy subjects were likely to get the right answer only when the data, properly interpreted, supported the conclusion congenial to their political values.
These are the raw data. Here are simulations of the predicted probabilities that low- and high-numeracy subjects would get the right answer in the various conditions. You can see that low-numeracy partisans were very unlikely to get the right answer and high-numeracy ones very likely to get it in the skin-rash conditions—and partisan differences were trivial and nonsignificant.
In the gun-ban conditions, both low- and high-numeracy partisans were likely to polarize. But the discrepancy in the probability of getting the right answer between low-numeracy subjects in each condition was much smaller than the discrepancy for high-numeracy ones.
The reason is that the high-numeracy subjects, but not the low-numeracy ones, were able correctly to see when the data supported the view that predominates in their ideological group. If the data properly interpreted did not support that position, however, the high-numeracy subjects used their reasoning capacity perversely—to spring open a confabulatory escape hatch from the trap of logic.
This sort of effect, if it characterizes how people deal with evidence on a politically controversial empirical issue, will result in the sort of magnification of polarization conditional on science literacy that we saw in the climate-change risk perception study.
It should now be apparent why the CSL position is false, and why its prescription of improving science comprehension won’t dispel public conflict over decision-relevant science.
The problem reflected in this sort of pattern is not too little rationality, but too much. People are using their science-comprehension capacities opportunistically to fit their risk perceptions to the one that dominates in their group. As they become more science comprehending, then, the problem only gets aggravated.
But here is the critical point: this pattern is not normal.
The number of science issues on which there is cultural polarization, magnified by science comprehension, is tiny in relation to the number on which there isn’t.
People of diverse values converge on the safety of medical x-rays, the danger of drinking raw milk, the harmlessness of cell-phone radiation, etc., not because they comprehend the science but because they make reliable use of all the cues they have access to about what’s known to science.
Those cues include the views of those who share their outlooks & who are highly proficient in science comprehension. That's why partisans of even low- to medium-numeracy don't have really bad skin rashes!
This reliable method of discerning what’s known to science breaks down only in the unusual conditions in which positions on some risk issue—like whether the earth is heating up, or whether concealed carry laws increase or decrease violent crime—become recognizable symbols of identity in competing cultural groups.
When that happens, the stake that people have in forming group-congruent views will dominate the stake they have in forming science-congruent ones. One’s risk from climate change isn’t affected by what one believes about climate change because one’s personal views and behavior won’t make a difference. But make a mistake about the position that marks one out as a loyal member of an important affinity group, and one can end up shunned and ostracized.
One doesn’t have to be a rocket scientist to form and persist in group-congruent views, but if one understands science and is good at scientific reasoning, one can do an even better job at it.
The meanings that make positions on a science-related issue a marker of identity are pollution in the science communication environment. They disable individuals from making effective use of the social cues that reliably guide diverse citizens to positions consistent with the best available evidence when their science communication environment is not polluted with such meanings.
Accordingly, to dispel controversy over decision-relevant science, we need to protect and repair the science communication environment. There are different strategies—evidence-based ones—for doing that. I’d divide them into “mitigation” strategies and “adaptation” ones.
Last point. In saying that SCE is right and CSL wrong, I don’t mean to be saying that it is a mistake to improve science comprehension!
On the contrary. A high degree of civic science literacy is critical to the well-being of democracy.
But in order for a democratic society to realize the benefit of its citizens’ civic science literacy, it is essential to protect its science communication environment from the toxic cultural meanings that effectively disable citizens’ powers of critical reflection.