What would a *valid* measure of climate-science literacy reveal? Guy et al., Effects of knowledge & ideology part 2
This is part 2 of my “journal club report” on the very interesting paper Guy, S., Kashima, Y., Walker, I. & O'Neill, S. Investigating the effects of knowledge and ideology on climate change beliefs. European Journal of Social Psychology 44, 421-429 (2014).
GKW&O correlate a sample of 300 Australians’ “climate literacy” scores with their cultural worldviews & their “belief in” human-caused climate change and related perceptions.
Last time I explained why I didn’t understand how GKW&O could construe their data as suggesting that “knowledge can play a useful role in reducing the impact of ideologies on climate change opinion.”
In some sense, this statement is a tautology: insofar as “knowledge” is defined as accepting evidence that “human beings are causing climate change,” then of course increasing the “knowledge” of individuals who are ideologically predisposed to be skeptical will “reduce” their skepticism (that’s what GKW&O are getting at) and thus mute ideological polarization.
That claim is empty: it's like saying "getting skeptics to credit the evidence of climate change would help to counteract skepticism."
The question is how to “increase knowledge” of those who are culturally predisposed to dismiss valid evidence of climate change.
GKW&O imply that all one has to do is communicate the “facts” about climate change to them.
But nothing in their data suggests that would be a particularly useful strategy.
That’s what climate advocates have been focusing on for over a decade. And notwithstanding that, people remain culturally polarized on what the facts are.
The best explanation for that—one supported by ample observational and experimental data—is that individuals selectively credit or discredit information on climate change based on its consistency with their cultural predispositions.
If this is what's going on, then one would expect to see a correlation between ideology (or cultural worldviews) & "knowledge" of the evidence of human-caused climate change.
That’s exactly what GKW&O’s own data show.
Maybe I’m missing something and either they or others will point out what it is!
Okay-- that was last time!
But now I’d like to address GKW&O's “climate literacy” scale.
I’m really interested in this aspect of their cool paper b/c how to measure people’s comprehension of climate change science is a problem I myself have been trying to solve recently.
Validly measuring what people actually understand about climate change is in fact a very difficult thing to do!
There are two related reasons for this. One is that, in general, people’s perceptions of societal risks reflect general affective orientations, pro or con, toward the putative risk source. Any more specific perception one assesses—how large the risk is, whether there are offsetting benefits, etc.—will be an expression of that orientation (Loewenstein et al. 2001).
Accordingly, if one tries to measure what people “know” about the putative risk source in question, what one is really likely to be measuring is just their pro or con affect toward it. There's little reason to think their emotional response to the risk source reflects genuine comprehension of the evidence. On the contrary, people’s understanding of what the “evidence” is on an environmental or health risk (nuclear power generation, smoking, contaminated ground water, etc.) is more likely to be a consequence of than a cause of their affective orientation toward it (Slovic et al. 2004).
The second problem—one that clearly comes into play with climate change—is that individuals’ affective orientation toward the putative risk source is itself likely to be a measure or expression of their cultural worldview, which invests the asserted risk with cultural meanings.
Affect—a spontaneous perception or feeling—is the cognitive mechanism through which worldviews shape risk perceptions (Peters, Burraston, & Mertz 2004; Kahan 2009).
Accordingly, when one asks people whether they “agree” or “disagree” with propositions relating to a putative risk source, the responses will tend to reflect their worldviews. Such items won’t be measuring what people know; they will be measuring, in effect, who they are, culturally speaking.
This is exactly what scholarly researchers who’ve investigated public “climate literacy” have repeatedly found (Tobler, Visschers, & Siegrist 2012; Reynolds et al. 2010; Bostrom et al. 1994). Their studies have found that the individuals who tend to get the right answer to questions about the contribution of human activities to climate change (e.g., that burning fossil fuels increases global temperatures) are also highly likely to give the wrong answers to questions about the contribution of other environmentally damaging behaviors that are in fact unrelated to climate change (e.g., industrial sulfur emissions).
The people who tend to get the latter questions right, moreover, are less likely to correctly identify which human activities do in fact contribute to global warming.
The conclusion of these studies is that what people “believe” about climate change doesn’t reflect what they “know” but rather a more general affective orientation, pro or con, toward environmental risk, the sort of stance that is itself known to be associated with competing worldviews.
In my Measurement Problem paper, I present the results of a “climate science comprehension” test that includes various features designed to unconfound or disentangle affective indicators of people’s identities from their actual knowledge of climate science. The items were more fine-grained than “are humans causing climate change,” and thus less proximate to the antagonistic meanings that evoke identity-expressive responses to questions about this topic.
In addition, the “true-false” propositions comprising the battery were introduced with the phrase “Climate scientists believe . . . .” This device, which has been used to mitigate the cultural bias of test items on evolution when administered to highly religious test takers, distances the respondent from the response, so that someone who is culturally predisposed to skepticism can reveal his or her awareness of the prevailing expert opinion without being put in the position of making an “affirmation” of personal belief that denigrates his or her identity.
This strategy seemed to work pretty well. I found that there wasn’t the sort of bimodal distribution that one gets when responses to test items reflect the opposing affective orientations of test-takers.
Even more important, scores on the instrument increased in step with respondents’ scores on a general science comprehension test regardless of their political ideology.
This is important, first, because it helps to validate the instrument—one would expect those who are better able to acquire scientific information generally would acquire more of it about climate change in particular.
Second and even more important, this result confirmed that the test was genuinely measuring what people know and not who they are. Because items on “belief in” climate change do measure cultural identity rather than knowledge, responses to them tend to become more polarized as people become more proficient in one or another of the reasoning dispositions associated with science comprehension. In the Measurement Problem “climate science literacy” battery, high science-comprehending test-takers scored highest precisely because they consistently gave correct answers to items that they would have gotten wrong if they were responding to them in a manner that expressed their cultural identities.
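To make that validation logic concrete, here is a minimal sketch in Python. Everything in it is simulated, and every parameter is an assumption of mine (this is not my instrument or my sample); the point is just to display the two signatures described above: on a genuine knowledge item, the probability of a correct answer rises with science comprehension in both ideological groups, while on an identity-expressive item, greater proficiency pushes the groups apart.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # hypothetical respondents, not real survey data

sci = rng.normal(size=n)     # general science comprehension score
right = rng.random(n) < 0.5  # a stand-in binary political identity


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


# A knowledge-measuring item: P(correct) rises with comprehension for everyone
know = (rng.random(n) < sigmoid(sci)).astype(float)

# An identity-measuring item: proficiency amplifies the identity-congruent
# answer, so the two groups polarize as comprehension increases
ident = (rng.random(n) < sigmoid(np.where(right, -sci, sci))).astype(float)

for label, y in [("knowledge item", know), ("identity item", ident)]:
    for grp, mask in [("left ", ~right), ("right", right)]:
        slope = np.polyfit(sci[mask], y[mask], 1)[0]
        print(f"{label:14s} {grp}: slope vs. science comprehension {slope:+.2f}")
```

The knowledge item shows a positive slope for both groups; the identity item shows slopes of opposite sign, which is the polarization pattern that “belief in” climate change items display.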
But my scale is, admittedly, a proto-assessment instrument, a work in progress.
I was excited, then, to see the GKW&O results and to compare them with my own.
GKW&O treat their “climate literacy” battery as if it were a valid measure of knowledge (they call it a “specific [climate change] knowledge” measure, in fact).
Did they succeed, though, in overcoming the problem researchers have had with the entanglement of affect and identity, on the one hand, with knowledge, on the other?
Frankly, I can’t tell. They don’t report enough summary data about the responses to the items in their battery, including their individual correlations with “belief in” climate change and with cultural worldviews.
But there is good reason to think they didn’t succeed.
GKW&O asked respondents to indicate which of nine human activities are & which are not “causes” of climate change:
- nuclear power generation (non-cause)
- depletion of ozone in the upper atmosphere (non-cause)
- pollution/emissions from business and industry (true cause)
- destruction of forests (true cause)
- people driving their cars (true cause)
- people heating and cooling their homes (true cause)
- use of chemicals to destroy insect pests (non-cause)
- use of aerosol spray cans (non-cause)
- use of coal and oil by utilities or electric companies (true cause)
They reported that the five “true cause” items and the four “non-cause” items (as labeled above) did not form a reliable, unitary scale:
> Internal reliability was somewhat less than satisfactory (α = .60). To investigate this issue, items were divided to form two subscales according to whether they represented ‘causes’ or ‘non causes’ and then reanalyzed. This considerably improved the reliability of the scales (α = .89 for ‘knowledge of causes’ scale and α = .75 for the ‘knowledge of non causes’ scale). However, the distributions of the separated scales were highly skewed. Thus, it was decided to proceed with the 9-item knowledge scale, which had a more acceptable distribution.
In other words, the item covariances were more consistent with the inference that they were measuring two separate dispositions: one to correctly identify “true causes” and the other to correctly identify “false causes.”
The items didn’t form a reliable measure of a single latent trait—one reflecting a disposition to give consistently correct responses on the “causes” of climate change—because respondents who did well on the “true cause” items were not the ones who did well on the “false cause” items, & vice versa.
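Here is a minimal sketch, with simulated data (the trait structure and parameters are assumptions of mine, not anything GKW&O report), of how a battery built from two oppositely valenced affect measures produces exactly this psychometric signature: weak reliability for the full scale, stronger reliability within each subscale, and a negative correlation between the two subscale scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # matches GKW&O's sample size, but the responses are simulated

knowledge = rng.normal(size=n)          # a genuine shared comprehension trait
affect = rng.normal(scale=1.5, size=n)  # dominant identity-driven affect


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def items(p_correct, k):
    """k binary items scored 1 = correct, given each respondent's P(correct)."""
    return np.column_stack([(rng.random(n) < p_correct).astype(float)
                            for _ in range(k)])


# High-affect ("concerned") respondents say 'cause' to everything: right on
# the five true-cause items, wrong on the four non-cause items
true_cause = items(sigmoid(knowledge + affect), 5)
non_cause = items(sigmoid(knowledge - affect), 4)


def cronbach_alpha(x):
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum()
                            / x.sum(axis=1).var(ddof=1))


full = np.hstack([true_cause, non_cause])
print(f"alpha, 9-item scale:      {cronbach_alpha(full):.2f}")        # low
print(f"alpha, 'causes' scale:    {cronbach_alpha(true_cause):.2f}")  # higher
print(f"alpha, 'non-causes':      {cronbach_alpha(non_cause):.2f}")   # higher
print(f"subscale correlation:     "
      f"{np.corrcoef(true_cause.sum(1), non_cause.sum(1))[0, 1]:+.2f}")
```

Because the affect trait dominates, the items correlate positively within their own subscales but negatively across them, dragging the 9-item alpha down.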
Who were these two groups of respondents? It’s not possible to say because, again, GKW&O didn’t report enough summary data for a reader to figure this out.
But the pattern is certainly consistent with what one would expect to see if individuals culturally predisposed to climate belief did better on the “true cause” items and those culturally predisposed to climate skepticism better on the “false cause” ones.
In that case, one would conclude that the GKW&O “climate literacy” battery isn’t a valid measure of knowledge at all; it would be just a conglomeration of two oppositely valenced affective measures.
GKW&O report that the “score” on their conglomerate battery did correlate negatively with both cultural “hierarchy” and cultural “individualism.”
This could have happened, consistent with my surmise, because the conglomerate scale had more “true cause” than “false cause” items, and thus more climate-concerned than climate-skeptical affect items. The effect this imbalance would have had on the correlation between “number correct” and the cultural worldview scales would have been magnified if, on the “nuclear power” question, say, subjects of both types were more evenly divided (a result I’ve sometimes observed in my own work).
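A back-of-the-envelope simulation (purely hypothetical respondents, not GKW&O's data) shows how the 5-to-4 item imbalance alone can manufacture this correlation even when nobody knows anything: a purely affect-driven “concerned” respondent who calls everything a cause scores about 5 of 9, while a skeptic who denies every cause scores about 4 of 9, so “number correct” tracks worldview.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Purely affect-driven respondents with zero actual knowledge (an assumption
# for illustration, not a claim about GKW&O's sample)
concerned = rng.random(n) < 0.5               # e.g., egalitarian communitarians
p_says_cause = np.where(concerned, 0.9, 0.1)  # concerned say 'cause' to all items

key = np.array([1] * 5 + [0] * 4)  # 1 = 'cause' is the correct answer
answers = rng.random((n, 9)) < p_says_cause[:, None]
score = (answers == key).sum(axis=1)  # 'number correct', 0-9

worldview = np.where(concerned, -1.0, 1.0)  # +1 = hierarchical individualist
print(f"mean score, concerned: {score[concerned].mean():.1f}")   # ~4.9
print(f"mean score, skeptical: {score[~concerned].mean():.1f}")  # ~4.1
print(f"corr(score, worldview): {np.corrcoef(score, worldview)[0, 1]:+.2f}")
```

The correlation comes out negative despite there being no knowledge anywhere in the simulation, purely because the scale contains one more concerned-valenced item than skeptic-valenced ones.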
But I am admittedly conjecturing here in trying to discern exactly why GKW&O’s “specific knowledge” battery failed to display the characteristics one would demand of a valid measure of climate-science knowledge. The paper didn’t report enough results to be sure.
I hope GKW&O will say more about this issue—maybe even in a guest blog here!—since these are really interesting issues and knowing more about their cool data would definitely help me and others who are struggling to try to overcome the obstacles I identified to constructing a valid climate-science comprehension measure.
I’m still working on this problem, btw!
So in closing, I’ll show you the results of some additional candidate “climate science literacy” items that I recently tested on a diverse sample of Southeast Floridians.
I used the same “identity-knowledge disentanglement” strategy with these as I did with items in the Measurement Problem battery. I think it worked in that respect.
And I think the results support the following inferences:
1. Neither Rs nor Ds know very much about climate change.
2. Both have “gotten the memo” that climate scientists believe that humans are causing climate change and that we face serious risks as a result.
3. It’s crazy to think that the ideological variance in “belief in” human-caused climate change has anything to do with a knowledge disparity between Rs and Ds.
What do you think?
Bostrom, A., Morgan, M.G., Fischhoff, B. & Read, D. What Do People Know About Global Climate Change? 1. Mental Models. Risk Analysis 14, 959-970 (1994).
Loewenstein, G.F., Weber, E.U., Hsee, C.K. & Welch, N. Risk as Feelings. Psychological Bulletin 127, 267-287 (2001).
Peters, E.M., Burraston, B. & Mertz, C.K. An Emotion-Based Model of Risk Perception and Stigma Susceptibility: Cognitive Appraisals of Emotion, Affective Reactivity, Worldviews, and Risk Perceptions in the Generation of Technological Stigma. Risk Analysis 24, 1349-1367 (2004).
Reynolds, T.W., Bostrom, A., Read, D. & Morgan, M.G. Now What Do People Know About Global Climate Change? Survey Studies of Educated Laypeople. Risk Analysis 30, 1520-1538 (2010).
Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004).
Tobler, C., Visschers, V.H.M. & Siegrist, M. Addressing climate change: Determinants of consumers' willingness to act and to support policy measures. Journal of Environmental Psychology 32, 197-207 (2012).