Key Insight


Gave talk Wednesday at AGU meeting in San Francisco. Slides here. I was on panel w/ a bunch of talented scholars doing really great studies on teaching climate science. The substance of what to teach (primarily in context of undergraduate science courses) was quite interesting, but what was really cool was their (data-filled) account of the “test theory” issues they are attacking in developing valid, reliable, and highly discriminant measures of “climate science literacy” (“earth is heating up,” “humans causing,” “we’re screwed,” they recognized, don’t reliably measure anything other than the attitude “I care/believe in global warming”). My talk wasn’t on how to impart climate science literacy but rather on what needs to be done to assure that a democratic society gets the full value out of having civically science literate citizens: protect the science communication environment—a matter that making citizens science literate does not itself achieve. (Gave another talk later at The Nature Conservancy’s “All-Science” event but will have to report on that “tomorrow.”) Here’s what I more-or-less remember saying at AGU:

If this were the conversation I’m usually a part of, then I’d likely now be playing the role of heretic.

That discussion isn’t about how to teach climate science to college students but rather about how to communicate climate risks to the public.

The climate-risk communication orthodoxy attributes public controversy over global warming to a deficit in the public’s comprehension of science. The prescription, on this view, is to improve comprehension—either through better science education or through better public science communication.

I’ll call this the “civic science literacy” thesis (or CSL).

I’m basically going to stand CSL on its head.

Public controversy, I want to suggest, is not a consequence of a deficit in public science comprehension; it is a cause of it. Such controversy is a kind of toxin that disables the normally reliable faculties that ordinary citizens use to recognize valid decision-relevant science.

For that reason I’ll call this position the “science communication environment” thesis (or SCE).  The remedy SCE prescribes is to protect the science communication environment from this form of contamination and to repair it when such protective efforts fail.

This account is based, of course, on data—specifically a set of studies designed to examine the relationship between science comprehension and cultural cognition.

“Cultural cognition” refers to the tendency of people to conform their perceptions of risk to ones that predominate in important affinity groups—ones united by shared values, cultural or political. Cultural cognition has been shown to be an important source of cultural polarization over climate change and various other risks.

In a presentation I made here a couple of years ago, I discussed a study that examined the connection between cultural cognition and science literacy, as measured with the standard NSF Science Indicators battery. In it, we found that polarization measured with reference to cultural values, rather than abating as science literacy increases, grows more intense.

This isn’t what one would expect if one believed—as is perfectly plausible—that cultural cognition is a consequence of a deficit in science comprehension (the CSL position).

The result suggests instead an alternative hypothesis: that people are using their science comprehension capacity to reinforce their commitment to the positions on risk that predominate in their affinity groups, consistent with cultural cognition.

That hypothesis is one we have since explored in experiments. The experiments are designed to “catch” one or another dimension of science comprehension “in the act” of promoting group-convergent rather than truth- or science-convergent beliefs.

In one, we found evidence that “cognitive reflection”—the disposition to engage in “slow” conscious, analytical reasoning as opposed to “fast” intuitive, heuristic reasoning—has that effect.

But the study I want quickly to summarize for you now involves “numeracy” and cultural cognition. “Numeracy” refers not so much to the ability to do math but to the capacity and disposition to use quantitative information to draw valid causal inferences.

In the study, we instructed experiment subjects to analyze results from an experiment in which researchers tested the effectiveness of a skin-rash cream by assigning patients either to a “treatment” condition, in which they used the cream, or to a “control” condition, in which they did not. The researchers recorded how many patients’ rashes got better and how many got worse in each condition. Our study subjects were then supposed to figure out whether treatment with the skin cream was more likely to make the patients’ rash “better” or “worse.”

This is a standard “covariance detection” problem. Most people get the wrong answer because they use a “confirmatory” hypothesis-testing strategy: they note that more patients’ rashes got better than worse in the treatment condition, and that more got better in the treatment condition than in the control, and conclude that the cream makes the rash get better.

But this heuristic strategy ignores disconfirming evidence in the form of the ratio of positive to negative outcomes in the two conditions. Patients using the skin cream were three times more likely to get better than worse; but those not using the skin cream were in fact five times more likely to get better. Using the skin cream thus makes it more likely that the rash will get worse than not using it.
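The logic of the problem can be sketched in a few lines of code. The counts below are hypothetical, chosen only to reproduce the 3:1 and 5:1 ratios described above; they are not the actual cell counts from the study.

```python
# Illustrative 2x2 contingency table for the covariance-detection problem.
# Counts are hypothetical, chosen to match the ratios in the text.
treatment = {"better": 75, "worse": 25}   # cream used: 3:1 better-to-worse
control = {"better": 100, "worse": 20}    # cream not used: 5:1 better-to-worse

# Heuristic ("confirmatory") reading: more treated patients improved than
# worsened, so conclude the cream works.
heuristic_says_cream_works = treatment["better"] > treatment["worse"]

# Correct reading: compare the proportion improving across conditions.
p_treatment = treatment["better"] / (treatment["better"] + treatment["worse"])
p_control = control["better"] / (control["better"] + control["worse"])
cream_actually_helps = p_treatment > p_control

print(heuristic_says_cream_works)  # True  -- the intuitive answer
print(cream_actually_helps)        # False -- 0.75 vs. ~0.83 in the control
```

The point is that the correct answer requires comparing ratios across both conditions, not just scanning the treatment column—which is exactly the step the confirmatory strategy skips.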

By manipulating the column headings in the contingency table, we varied whether the data, properly interpreted, supported one result or the other. As one might expect, subjects in both conditions scoring low in numeracy were highly likely to get the wrong answer on this problem, which has been validated as a predictor of this same kind of error in myriad real-world settings. Indeed, subjects were likely to get the “right” answer only if they scored at about the 90th percentile on numeracy.