A curious correspondent posed these questions to me relating to scores on the "ordinary climate science intelligence" assessment:
My question is about the last figure in your posting here on your OCSI instrument and results.
The last figure is a histogram of the no. correct (on your OCSI instrument?) broken down by personal beliefs about warming causes (human, natural, no warming).
I have several questions:
1. INTERPRETATION of final figure. Am I interpreting your result correctly by concluding that it shows that you found that those believing in no warming had more correct than those who believed in natural causes of warming, who, in turn, scored higher than those who believed in human caused warming?
I am just asking about the absolute differences, not their statistical significance.
2. SAMPLE. How big was it and who were they? (undergrads, Mechanical Turk, something else, national representative...).
3. STATS. Were the differences in that final figure significant? And, regardless of significance, can you send along the effect sizes?
You can get more information on the OCSI scale here: "Climate Science Communication and the Measurement Problem," Advances in Pol. Psych. (forthcoming). But on your queries:
1. Interpretation. The last figure is a bar chart w/ number of correct for rspts who answered standard "belief in" climate change items that asked "[f]rom what you’ve read and heard, is there solid evidence that the average temperature on earth has been getting warmer over the past few decades" [yes/no]; and (if yes), "Do you believe that the earth is getting warmer (a) mostly because of human activity such as burning fossil fuels or (b) mostly because of natural patterns in the earth’s environment?"
You are eyeballing the differences in mean scores for the 3 groups -- "no warming," "naturally caused warming" and "human warming."
But my interpretation would be that everyone did about the same. Among all respondents -- regardless of the answer they gave to the "believe in" global warming items -- there was a strong tendency to attribute to climate scientists pretty much any conclusion that *sounded* consistent with global warming being a serious environmental risk. Only respondents who were high in science comprehension generally avoided that mistake -- that is, accurately identified which "high risk" conclusions climate scientists have endorsed & which ones they have not. Those rspts did so regardless of how they answered the "believe in" question.
That's why I think the responses members of the public give to surveys that ask whether they "believe in" human-caused global warming are eliciting an expression of an outlook or attitude that is wholly unrelated to what anyone knows or doesn't know about climate science or science generally. Social scientists (myself included) and pollsters haven’t really understood in the past what items like this are actually measuring: not what you know, but who you are.
2. Sample. US general population sample. Stratified for national representativeness. Recruited for an on-line study by the firm YouGov, which uses sampling strategies shown to generate election-result estimates at least as reliable as those of the major polling firms that still use random-digit dialing (I'm basing this on Nate Silver's rankings). In my view, only YG & GfK use on-line sampling techniques that are valid for studying the effect of individual differences -- cognitive & political -- on risk perceptions. Mturk is definitely not valid for this form of research.
3. Stats. The diff between "no warming" & "human-caused warming" rspts was statistically significant -- but not practically so. N = 2000, so even small differences will be statistically significant. The difference in the mean scores of those 2 groups of rspts was a whopping 1/3 of 1 SD. Whether rspts were in the "no warming," "human-caused warming" or "natural warming" class explained about 1% of the variance in the OCSI scores.
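The "significant but not practical" point can be checked with back-of-the-envelope arithmetic. A minimal sketch, assuming (hypothetically) two equal groups of 700 each and the standardized group means reported below -- the actual group sizes aren't given here, so these are illustrative numbers, not the study's output:

```python
import math

# Hypothetical illustration, not the actual study data: standardized
# IRT scores (SD = 1) with the group means reported in the post.
n1, n2 = 700, 700        # assumed group sizes (the real split isn't reported)
m1, m2 = -0.12, 0.14     # "human-caused warming" vs "no warming" group means
sd = 1.0                 # scores are standardized, so SD = 1

# Cohen's d: the mean difference in SD units
d = (m2 - m1) / sd

# t statistic for two independent groups with equal SDs
t = (m2 - m1) / (sd * math.sqrt(1 / n1 + 1 / n2))

# rough share of variance explained for a two-group comparison
r2 = d**2 / (d**2 + 4)

print(round(d, 2), round(t, 2), round(r2, 3))  # -> 0.26 4.86 0.017
```

With samples this large, even a ~1/4-SD gap yields a large t statistic ("statistically significant") while accounting for only a percent or two of the variance -- which is the practical-insignificance point being made here.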
I reported "number of correct" in the figure b/c I figured that would be easier for readers to grasp, but I scored results of the climate science literacy test with an IRT model and standardized the scores (so mean = 0, of course). In the regression output, belief in "human warming" is the reference group -- so that group's score is actually the constant.
The constant & the regression coefficients are thus the fractions of a standard deviation below or above average the different groups' performances were!
You can easily compute the means: human warmers = -0.12; natural warmers = 0.07; and no warmers = 0.14.
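For concreteness, that arithmetic can be sketched as follows. The dummy coefficients below are implied by the reported means rather than copied from the actual regression output, so treat them as an illustration of the coding scheme, not the study's table:

```python
# With "human warming" dummy-coded as the reference group, the regression
# constant is that group's mean, and each dummy coefficient is a group's
# offset from it (in SD units, since the scores are standardized).
constant = -0.12           # mean for "human warming" (the reference group)
b_natural = 0.19           # implied coefficient for "natural warming"
b_no_warming = 0.26        # implied coefficient for "no warming"

mean_natural = constant + b_natural        # 0.07
mean_no_warming = constant + b_no_warming  # 0.14

print(round(mean_natural, 2), round(mean_no_warming, 2))  # -> 0.07 0.14
```

This is just the standard treatment-coding identity: constant = reference-group mean, coefficient = group mean minus reference-group mean.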
It would be just as embarrassing --just as childish -- for "skeptics" to seize on these results as evidence that skeptics "know more" climate science as it would be for "believers" to keep insisting that a knowledge disparity explains the conflict over climate change in US society.
But if you have thoughts, reactions, comments, suggestions, disagreements, etc. -- particularly based on the analyses as they appear in the draft paper -- please do share them w/ me.