Thursday, August 7, 2014

Does "climate science literacy trump ideology" in Australia? Not as far as I can tell! Guy et al., Effects of knowlege & ideology part 1


It was so darn much fun to report my impressions on Stocklmayer, S. M., & Bryant, C. Science and the Public—What should people know?, International Journal of Science Education, Part B, 2(1), 81-101 (2012), that I thought I’d tell you all about another cool article I read recently:

Guy, S., Kashima, Y., Walker, I. & O'Neill, S. Investigating the effects of knowledge and ideology on climate change beliefs. European Journal of Social Psychology 44, 421-429 (2014).

1.

GKW&O report the results of an observational study (a survey, essentially!) on the respective contributions that cultural cogntion worldviews and “climate science literacy” make to belief in human-caused global warming and to understanding of the risks it poses.

Performing various univariate and multivariate analyses, they conclude that both cultural worldviews and climate science literacy (let’s call it) have an effect.

Might not sound particularly surprising.

But it is critical to understand that the GKW&O study is a contribution to an ongoing scholarly conversation.

It is a response, in fact, to Cultural Cognition Project (CCP) researchers and others who’ve conducted studies showing that greater “science literacy,” and higher proficiency in related forms of scientific reasoning (such as numeracy and critical reflection), magnify cultural polarization on climate change risks and related facts.

The results of these other studies are thought to offer support for the “cultural cognition thesis” (CCT), which states, in effect, that “culture is prior to fact.”

Individuals’ defining group commitments, according to CCT, orient the faculties they use to make sense of evidence about the dangers they face and how to abate them.

As a result, individuals can be expected to comprehend and give appropriate effect to scientific evidence only when engaging that information is compatible with their cultural identities.  If the information is entangled in social meanings that threaten the status of their group or their standing within it, they will use their reasoning powers to resist crediting that information.

Of course, “information” can make a difference!  But for that to happen, the entanglement of positions in antagonistic cultural meanings must first be dissolved, so that individuals will be relieved of the psychic incentives to construe information in an identity-protective way.

GKW&O meant to take issue with CCT.

The more general forms of science comprehension that figured in the CCP and other studies, GKW&O maintain, are only “proxy measures” for climate science comprehension.  Because GKW&O measure the latter directly, they believe their findings supply stronger, more reliable insights into the relative impact of “knowledge” and “ideology” (or culture) on climate change beliefs.

Based on their results, GKW&O conclude that it would be a mistake to conclude that “ideology trumps scientific literacy.” 

“The findings of our study indicate that knowledge can play a useful role in reducing the impact of ideologies on climate change opinion.”

2.

There are many things to like about this paper! 

I counted 487 such things in total & obviously I don’t have time to identify all of them. I work for a living, after all.

But one is the successful use of the cultural cognition worldview scales in a study of the risk perceptions of Australians.

Oh—did I not say that GKW&O collected their data from Australian respondents?  I should have!

I’ve discussed elsewhere some “cross-cultural cultural cognition” item development I helped work on.  Some of that work involved consultation with a team of researchers adapting the cultural cognition scales for use with Australian samples.

So it’s really cool now to see Australian researchers using the worldview measures (which GKW&O report demonstrated a very high degree of scale reliability) in an actual risk-perception study.

Another cool thing has to do with the GKW&O “climate literacy” battery.  In fact, there are multiple cool things about that part of the study.

I’m very excited about this aspect of the paper because, as is well known to all 16 billion readers of this blog (we are up 4 billion! I attribute this to the Ebola outbreak; for obvious reasons, this blog is the number one hit when people do a google search for “Ebola risk”), I myself have been studying climate science comprehension and its relation to political polarization on “belief” in human-caused climate change and related matters.  I find it quite interesting to juxtapose the results of GKW&O with the ones I obtained.

3.

But before I get to that, I want to say a little more about exactly what the GKW&O results were.

In fact, the data GKW&O report don’t support the conclusion that GKW&O themselves derive from them. 

On the contrary, they reinforce the cultural cognition thesis.

GKW&O are incorrect when they state that general science comprehension was conceptualized as a “proxy” for climate change literacy in CCP study, Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012) ( Nature Climate Change study),  that they are responding to.

On the contrary, the entire premise of the Nature Climate Change study was that members of distinct cultural groups differ in their climate science literacy: they are polarized over what the best available evidence on climate change signifies.

The point of the study was to test competing hypotheses about why we aren’t seeing public convergence in people’s understanding of the best available evidence on global warming and the dangers it poses.

One hypothesis—the “science comprehension thesis” (SCT)—was that the evidence was too hard for people to get. 

People don’t know very much science.  What’s more, they don’t think in the systematic, analytical fashion necessary to make sense of empirical evidence but instead rely on emotional heuristics, including “what do people like me think?!”

A general science comprehension predictor was used in the study because it was well suited to testing the SCT hypothesis.

If SCT is right—if public confusion and conflict over climate change is a consequence of their over-reliance on heuristic substitutes for comprehension of the evidence—then we would expect polarization to abate as members of culturally diverse groups become more science literate and more adept at the forms of critical reasoning necessary to understand climate science.

But that’s not so. Instead, as general science comprehension increases, people become more polarized in their understandings of the significance of the best evidence on climate change.

So this evidence counts against the SCT explanation for public controversy over climate change.

By the same token, this evidence supports the “cultural cognition thesis”—that “culture is prior to fact”: if critical reasoning is oriented by and otherwise enabled by cultural commitments, then we’d expect people who are more proficient at scientific reasoning to be even more adept at using their knowledge and reasoning skills to find & construe evidence supportive of their group’s position.
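To make the contrast concrete, here is a minimal sketch (Python, with entirely made-up numbers, not data from any of these studies) of the two competing predictions: under SCT, the gap between culturally opposed groups should shrink as science comprehension rises; under the cultural cognition thesis, it should grow.

```python
# Hypothetical illustration of the competing predictions; all parameters invented.
import numpy as np

comprehension = np.linspace(0, 1, 5)          # low -> high science comprehension

sct_egal  = 7.0 - 1.5 * comprehension         # SCT: both groups drift toward
sct_indiv = 4.0 + 1.5 * comprehension         #      the same answer
cct_egal  = 6.0 + 2.0 * comprehension         # CCT: each group uses its skill
cct_indiv = 5.0 - 3.0 * comprehension         #      to move toward its group's position

for c, se, si, ce, ci in zip(comprehension, sct_egal, sct_indiv, cct_egal, cct_indiv):
    print(f"comprehension={c:.2f}  SCT gap={abs(se - si):.1f}  CCT gap={abs(ce - ci):.1f}")
# The SCT gap shrinks as comprehension rises; the CCT gap grows -- the latter is
# the pattern the Nature Climate Change data actually showed.
```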

There is nothing in GKW&O that is at all at odds with these inferences. 

On the contrary, the evidence they report is precisely what one would expect if one started with the cultural cognition thesis.

They found that there was in fact a strong correlation between their respondents’ cultural worldviews and their “climate science literacy.” 

[Figure: Hamilton et al.: More science literacy, more polarization on what climate science says]

That is what the cultural cognition thesis predicts: culturally diverse individuals will fit their understanding of the evidence to the position that predominates in their group.

It's exactly what other studies have found.

And it was, as I said, the premise of the Nature Climate Change study.

Of course, in itself, this correlation is consistent with SCT, too, insofar as cultural cognition could be understood to be a heuristic reasoning alternative to understanding and making use of valid scientific information.

But that’s the alternative explanation that the  Nature Climate Change study—and others—suggest is unsupported: if it were true, then we’d expect culturally diverse people to converge in their assessments of climate change evidence, not become even more polarized, as they become more science comprehending.

The basis for GKW&O’s own interpretation of their data—that it suggests “information” can “offset” or “attenuate” the polarizing impact of cultural worldviews—consists in a series of multivariate regression analyses. The analysies, however, just don't support  their inference.

There is, of course, nothing at all surprising about finding a correlation between “climate science literacy”—defined as agreement with claims about how human activity is affecting the climate—and “belief in human caused climate change.”

Indeed, it is almost certainly a mistake to treat them as distinct.  People generally form generic affective orientations toward risks. The answers they give to more fine-grained questions—ones relating to specific consequences or causal mechanisms etc.—are just different expressions of that orientation.

In our study of science comprehension & climate change beliefs, we used the “Industrial Strength Risk Perception Measure” because it has already been shown to correlate 0.80 or higher w/ any more specific “climate change” question one might ask that is recognizable to people, including whether global warming is occurring, whether humans are causing it, and whether it is going to have bad consequences. 

Psychometrically, all of these questions measure the same thing.
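If you want that psychometric point made concrete, here is a small simulation sketch (with invented parameters, not our data or GKW&O's): several survey items that are all noisy readings of one latent affective orientation end up correlating around 0.8 with one another and hanging together with high reliability.

```python
# Hypothetical sketch of "one latent attitude, many indicators"; numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
attitude = rng.normal(size=n)                                    # latent affective orientation
items = np.column_stack([attitude + rng.normal(scale=0.5, size=n)
                         for _ in range(3)])                     # three noisy survey items

print(np.round(np.corrcoef(items, rowvar=False), 2))             # pairwise r of roughly 0.8

k = items.shape[1]                                               # Cronbach's alpha for the 3 items
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(round(alpha, 2))                                           # roughly 0.92: they measure the same thing
```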

GKW&O conclude that the effect of cultural worldviews and climate-science literacy are “additive” in their effect on climate change “beliefs” because their climate-science literacy scale correlates with “belief climate change is occurring” and “belief climate change is human caused” even after “controlling” for cultural world views.

But obviously when you put one measure of an unobserved or latent variable on the right-hand (“independent variable”) side of a regression formula and another measure of it on the left (“dependent” or “outcome variable”) side, the former is going to “explain” the latter better than anything else you include in the model! 

At that point, variance in the unobserved variable (here an affective attitude toward climate change) is being used to “explain” variance in itself.
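Here is a quick simulation sketch (again, invented numbers) of what that means in practice: generate a latent attitude that is driven by worldview, take two noisy indicators of it, and then regress one indicator on the other plus worldview. The other indicator soaks up the variance, and the worldview coefficient shrinks, even though worldview is what drives the latent attitude in the first place.

```python
# Hypothetical sketch; parameters are made up, not estimated from any dataset.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
worldview = rng.normal(size=n)                                   # e.g. a worldview score
attitude  = 0.8 * worldview + rng.normal(scale=0.6, size=n)      # latent climate-change attitude
literacy  = attitude + rng.normal(scale=0.4, size=n)             # indicator 1: "climate literacy"
belief    = attitude + rng.normal(scale=0.4, size=n)             # indicator 2: "belief in AGW"

def ols(y, *xs):
    X = np.column_stack([np.ones_like(y)] + list(xs))
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(np.round(ols(belief, worldview), 2))            # worldview alone: large coefficient
print(np.round(ols(belief, worldview, literacy), 2))  # add the other indicator: it dominates,
                                                      # and the worldview coefficient shrinks
```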

The question is: what explains variance in the latent or unobserved variable for which “belief” in human-caused climate change and the climate literacy scale items are both just indicators?

As noted, GKW&O’s own data support the inference that cultural worldviews—or really the latent sense of group identity for which the worldview variables are indicators!—does.

GKW&O also present a regression analysis of “beliefs” in climate change that shows that there are small interactions between the cultural outlook scales and their measure of climate-science literacy. 

Because in one of the models, the interaction between climate-science literacy and Individualism was negative, they conclude that “knowledge dampen[s] the negative influence of individualist ideology on belief in climate change.”

An interaction measures the effect of one predictor conditional on the level of the other.  So what GKW&O are reporting is that if relatively individualist people could be made to believe in evidence that humans cause climate change, that increased belief would have an even bigger impact on whether they believe climate change is happening than it would on relatively communitarian people.

It’s very likely that this result is a mathematical artifact: since communitarians already strongly believe in climate change, modeling a world in which communitarians believe even more strongly that humans are causing it necessarily has little impact; individualists, in contrast, are highly skeptical of climate change, so if one posits conditions in which individualists “believe” more strongly that humans are causing climate change, there is still room left in the scale for their “belief in human caused climate change” to increase.
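Here is a rough simulation sketch (hypothetical parameters, not GKW&O's data) of how a bounded response scale can manufacture an interaction of this sort even when the underlying effect of "knowledge" is exactly the same for both groups.

```python
# Hypothetical sketch of a ceiling artifact; all numbers invented.
import numpy as np

rng = np.random.default_rng(2)
n = 4000
individualist = rng.integers(0, 2, size=n)            # 0 = communitarian, 1 = individualist
knowledge = rng.uniform(0, 1, size=n)                 # acceptance of the evidence

# Same +1.5 underlying slope for both groups -- no interaction in the true model
latent = np.where(individualist == 1, 2.5, 6.0) + 1.5 * knowledge + rng.normal(scale=0.5, size=n)
belief = np.clip(latent, 1, 7)                        # but responses live on a bounded 1-7 scale

X = np.column_stack([np.ones(n), individualist, knowledge, individualist * knowledge])
print(np.round(np.linalg.lstsq(X, belief, rcond=None)[0], 2))
# The group x knowledge interaction comes out clearly non-zero: communitarians are pressed
# against the top of the scale, so their observed slope is flattened -- a scale artifact.
```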

But even if we take the result at face value, it doesn’t detract at all from the cultural cognition thesis.

Yes, if a greater proportion of individualists could be made to believe that scientific evidence shows humans are causing climate change, then more of them would believe in climate change.

The question, though, is why don’t they already believe the evidence? 

GKW&O’s own data suggest that cultural worldviews “explain” variance in acceptance of evidence on climate change. 

And we know it’s not plausible to say that the reason individualists don’t believe the scientific evidence is that they can’t understand it: in the real world, as they become more science comprehending and better at critical reasoning, persons with these outlooks become even more skeptical.

Finally, there are now considerable experimental data showing that people—of all cultural outlooks—selectively credit and discredit evidence on climate change and other culturally polarized issues conditional on whether it supports or conflicts with the view that is predominant in their group.  Indeed, the more science comprehending, the more numerate, and the more cognitively reflective they are, the more aggressively they culturally filter their appraisals of empirical evidence.

GKW&O in fact recognize all of this.

At the end of the paper, they qualify their own conclusion that “specific climate change knowledge positively influences people’s belief in climate change,” by noting that “it is possible the reverse is true”: their correlational data are just as consistent with the inference that individuals are selectively crediting or discrediting evidence based on its cultural congeniality, a process that would produce precisely the correlation they observe between cultural worldviews and “climate science literacy.” 

As I indicated, that’s the causal inference best supported by experimental data.

But none of this detracts from how interesting the study is, and in particular how intriguing GKW&O’s data on climate-science literacy are.

I’ll have more to say about that “tomorrow”! 

Part 2


Reader Comments (11)

Dan -

Another off-topic comment. And I know that you don't like pop-stuff, but I don't know if you've seen this and if you haven't, I thought you might find it interesting:

http://www.psychologytoday.com/blog/rabble-rouser/201310/liberal-bias-in-social-psychology-personal-experience-ii

Seems to me to be shallow and flawed, btw (e.g., comparing acceptance/rejection from different journals, no mention of the reasons given for rejection or requests for revisions, etc., and it's impossible to do much evaluation w/o seeing the article in question), but the dude does seem to have some heavy credentials.

August 7, 2014 | Unregistered CommenterJoshua

@Joshua

I don't really have problem w/ "pop"-- I have problem w/ story telling: take grab bag of heuristics & biases, add water & stir-- instant decision science!

If this guy thinks his "field" has produced more evidence of conserv bias, then he himself is reading pretty selectively.

As for "liberals" in psychology -- there are more serious problems to worry about...

August 7, 2014 | Registered CommenterDan Kahan

Joshua,

I dunno. I haven't bothered to read the paper, but it sounds like a kind of iffy experiment to me. I've seen this same sort of experiment done before, and the problem I identified with it was that there was an unjustified and untested assumption that the issues were equally polarised. If liberals care a lot more about affirmative action than conservatives care about homosexuality, then liberals would be more influenced by their biases in that case. For a different question, they might not be. I don't know - they might have controlled for that somehow. But it might just be that the conclusion was rejected because it couldn't be logically justified. Dubious.

August 7, 2014 | Unregistered CommenterNiV

Dan -

==> "If this guy thinks his "field" has produced more evidence of conserv bias, then he himself is reading pretty selectively."

Yeah - I wondered about that. I was curious about his offhanded claim. It would be interesting to know if there are any meta-analyses. On what basis does he make that claim?

The more I thought about that article the weirder it seemed.

I was struck by this passage.

==> "Neither I nor Jarret would or did claim that such a pattern is always necessarily true. But it was true in our data. We were just not permitted to say so."

I wouldn't be shocked if there were some kind of a systematic bias as he insinuates (i.e., a bias due to the predominant ideological orientation of journal editors), but he seems to take an odd approach for a scientist. He really has no evidence that they "[weren't] permitted to say so." He might have evidence consistent with that conclusion, but he can't make that conclusion.

On the larger scale, if he wants to show a systematic political bias in journal editors, he could certainly design a study to examine for such an effect - perhaps along the lines of his experiment but scaled-up. But why would he even do his little experiment, with a sample size of freakin' 1, if he weren't (unscientifically) seeking to generalize from weak weak evidence of causality?

Suppose his paper was rejected by a couple of journals because the editors didn't like what the data showed after they changed the first paragraph? Would that then support a broader conclusion that there is no systemic bias even though the results were the same before and after the change in the experimental condition that he thinks showed bias? Of course not. In fact, in that case the rejection would be stronger evidence of bias.

Suppose they had tried a 3rd journal (without changing the paragraph) that happened to have a conservative editor who accepted the paper because of pro-conservative bias on his/her part. If that had happened, would his conclusion be that there is no biasing effect from editors' ideology when, in fact, the opposite causality was in play with the 3rd attempt?

And what's up with the following?

==> "We could not get this published. It was rejected at two separate journals. "

They got rejected by two journals and concluded that they "could not get this published?" Two?

Clearly - he was looking to find a certain effect. OK. It might be interesting to see if there is such an effect - but it would only be meaningful if the effect were true on a larger scale. If it happened in one case, what would it matter, really - unless you're just looking only to confirm your own biases?

NiV -

Yeah - that's one aspect that makes the study he was trying to get published seem dubious. If they didn't control for the effect you described, it seems there would likely be other ways that the study's methodology lacked sufficient controls.

August 8, 2014 | Unregistered CommenterJoshua

"On the larger scale, if he wants to show a systematic political bias in journal editors, he could certainly design a study to examine for such an effect"

This was a personal anecdote in some sort of a blog post. I wouldn't take it too seriously. He was just venting.

There's a more systematic study of possible biases in social psychology in this paper, which I mentioned on the last post. I think they did mention one test being done of peer-reviewer bias.
http://journals.cambridge.org/images/fileUpload/documents/Duarte-Haidt_BBS-D-14-00108_preprint.pdf

Again, I've not chased the referenced paper down. That humans are biased and peer reviewers are human seems such a statement of the obvious to me...

August 8, 2014 | Unregistered CommenterNiV

==> " That humans are biased and peer reviewers are human seems such a statement of the obvious to me..."

Well sure. But what's not obvious is the degree to which those factors create a mechanism that biases the literature on the whole.

==> "This was a personal anecdote in some sort of a blog post. I wouldn't take it too seriously. He was just venting."

I dunno. A serious scientist unscientifically "venting" on a public blog associated with his profession about his academic process in a way that promotes "stories" about bias in academe and which supports partisan identifications that polarize society...

In itself too big a deal? No, of course not. As you say, he's human and humans are biased. But it's symptomatic of a larger issue - and it's weird to me that a serious scientist would engage at that level. Would you?

August 8, 2014 | Unregistered CommenterJoshua

Isn't venting on blogs about the behaviour of scientists what I do?

August 8, 2014 | Unregistered CommenterNiV

==> "Isn't venting on blogs about the behaviour of scientists what I do?"

Based on implying conclusions that obviously lack a solid grounding in evidence? Perhaps, although I'd like to hope not.

August 8, 2014 | Unregistered CommenterJoshua

"Based on implying conclusions that obviously lack a solid grounding in evidence?"

Depends how much you choose to read into it. It's clearly labelled as a "personal experience" - anecdote in other words. Virtually the entire article just reports what happened - speculations are marked as such. He notes himself the limitations on the conclusions being reported: "Neither I nor Jarret would or did claim that such a pattern is always necessarily true." The only unjustified definitive statement of fact I can see to object to is a single sentence "We were just not permitted to say so", and it is immediately clarified with the explanation of what this means: "We just could not get the paper published when we did say that and we did get it published when we did not say that." Any further conclusions to be drawn from the incident are entirely the reader's own - although I agree it's pretty obvious what you're expected to infer.

Readers are not stupid. They can see it's based on a single weak anecdote and can judge accordingly. Even I, motivated as I am to find the conclusion plausible, thought it was pretty weak. It's obviously just some guy sounding off on his blog, like thousands of others do. So what's the problem?

Why are you so bothered about it?

August 8, 2014 | Unregistered CommenterNiV

==> "Why are you so bothered about it?"

I'm not. Seems that you want to read it that way, perhaps?

I think it's weird that a scientist would do what he did. Doesn't "bother" me. Does everything you think is weird "bother" you?

Are you asking me why I think it's weird? I explained above.

==>? "He notes himself the limitations on the conclusions being reported: "Neither I nor Jarret would or did claim that such a pattern is always necessarily true.""

???

That was in reference to the study. I'm talking about his weird experiment - the results of which couldn't support any kind of a conclusion one way or another. His statement about the limitations of his conclusion is not related to his experiment. He doesn't state qualifications about that conclusion. He states that they "couldn't" get the study published until they changed the paragraph - and clearly implies that the only reason it was eventually published was because they changed that paragraph. I think it's weird to perform such an experiment because it has no meaningful result - except to say that one thing happened in one particular situation - and it couldn't possibly support his conclusion (stated without qualification).

That's weird like him saying that they "couldn't get it published" because two journals rejected it. A bias in search of a confirmation - exactly like his experiment.

Why would he perform his little experiment, the results of which are meaningless in any real sense, unless he is simply seeking to confirm a bias? A scientist, one would think, would eschew such a process. If he has a hypothesis that the literature is biased by virtue of (left wing) bias on the part of journal editors (an entirely plausible hypothesis), he can design a study to get valid results. But why waste time with a meaningless experiment, and then further blog about it with a clear indication that the results might be generalizable? He might have simply blogged that he had a hypothesis about the impact of bias on the part of journal editors and left it at that - no need to describe his experiment which can't add meaningful information.

==> "...and it is immediately clarified with the explanation of what this means: "We just could not get the paper published when we did say that and we did get it published when we did not say that."

Clarified? That's pretty funny. That statement clarifies nothing. It just restates the unsupported conclusion of the previous statement. The "we just could not get the paper published" is ridiculous. That's like saying that I could not hike to the top of a mountain because I chose to stop trying after a few hundred feet. It's like when I used to take kids hiking up mountains. After a few hundred feet they'd say "I have to stop because I can't do it," only to see later that if they made the effort in fact they could. No - it wasn't that they "couldn't" get it published - it was that they "didn't" get it published. Again, their "clarification" clarified nothing.

==> "Any further conclusions to be drawn from the incident are entirely the readers own"

Not at all. The conclusion that he states (that they "couldn't" get it published because of what that one paragraph stated), is unsupported. It would be consistent with such a conclusion - but to state that conclusion is unscientific - and it is stated entirely independently of what the reader might do or not do subsequently.

And with that, NiV - I'll leave this discussion. It seems to me that I'm having to repeat the same points w/o you really addressing them. My experience is that when I feel I've reached that point in discussions with you, there is no benefit to me from continuing.

August 8, 2014 | Unregistered CommenterJoshua

"Why would he perform his little experiment, the results of which are meaningless in any real sense, unless he is simply seeking to confirm a bias?"

He wasn't performing an experiment. He was trying to get his paper published.

They did an experiment apparently showing that liberals were more biased, which they thought was pretty interesting and so wrote up in a paper. They tried to get their paper published and were twice rejected. They hypothesised that they were being rejected because their conclusion was unpopular with liberals and the reviewers were being biased. So they took out all reference to liberals being more biased, and resubmitted. The paper immediately got published, even though the data was exactly the same and there was now less content. The only thing different (supposedly) was that they didn't draw attention to the liberal bias.

If peer review is a deterministic function of the paper contents, as some people think it ought to be, then the only thing that could possibly have affected the decision was the inclusion of the liberal-bias conclusion. If peer review is to some extent random, or subject to other factors besides the contents, then the evidence is weaker. Although it hadn't actually been their intention to perform an experiment here - they were only trying to get their paper published - it arguably constitutes one.

If the probability of passing peer review was a constant p, then the highest probability for the observed events occurs at p = 1/3 and is about 0.148. If you choose as your alternative hypothesis that acceptance was simply down to whether the paper criticised liberals, the probability of this outcome would be 1. The Bayes factor for this comparison is P(Obs|H1)/P(Obs|H2) = 1/0.148 = 6.75, or taking logarithms about 8.3 dB. That's substantial, although not conclusive. Personally I'd want at least 13 dB before I started taking it seriously.
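A quick numerical sketch of that arithmetic, purely illustrative:

```python
# Check of the reject, reject, accept calculation under a constant acceptance probability p.
import numpy as np

p = np.linspace(0.001, 0.999, 9999)
lik = (1 - p) ** 2 * p                                # P(reject, reject, accept | constant p)
print(round(p[lik.argmax()], 3), round(lik.max(), 3)) # maximized near p = 1/3, value ~0.148

bf = 1 / lik.max()                                    # vs. the alternative that acceptance turned
                                                      # entirely on whether the paper criticised liberals
print(round(bf, 2), round(10 * np.log10(bf), 1))      # ~6.75, ~8.3 dB
```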

If you took a somewhat weaker alternative hypothesis (as seems more reasonable) then it would be a bit lower. However, it's not negligible/meaningless, either. It's about half way to a substantive result. Although given, as I said, that humans are biased and peer reviewers are human, not a very surprising one. (Unless, perhaps, you're a liberal!)
;-)

Obviously it's a matter of taste, but personally I don't find it weird at all. It's just some researcher venting about having to take unpopular conclusions out to get a paper past peer review. Climate sceptics make the same complaint. It's a common perception, and I doubt it's based on this one single incident alone.

August 9, 2014 | Unregistered CommenterNiV
