Sunday, September 7, 2014

Weekend update: Another helping of evidence on what "believers" & "disbelievers" do & don't "know" about climate science

Data collected in ongoing work to probe, refine, extend, make sense of, demolish the "ordinary climate science intelligence" assessment featured in The Measurement Problem paper.

You tell me what it means ...

Reader Comments (19)

I know what 'climate' is. It is what you do to a tree when a bear is after you. ;-)

September 7, 2014 | Unregistered Commenter Eric Fairfield

It says that the categories of 'believer' and 'disbeliever,' over a series of questions, do not differ from each other in Z score by enough to make them real categories. The two categories differ only in their self-labeling, not in what they consider the science to be.
This is interesting, and it reminds me of the Myers-Briggs tests, which put people into one of sixteen categories, while more modern tests put people all into a single category. Also, if I remember correctly, changing a few questions in the Myers-Briggs test moves people from category to category even though the people have not changed.
Statistically, your results seem to imply that the categories of 'believer' and 'disbeliever' should be discarded in a number of tests and possibly replaced with 'group membership' for a small number of groups. Viewed the other way around, what does it tell us that people are Manichaean in their self-labeling but strongly overlapping in their beliefs? It may be that there are only two listed groups and you have to choose one even though you are in the middle; e.g., the biggest group of voters is independent, but the polls report these people only as Republicans or Democrats.
From a practical point of view, it may mean that Nierenberg was right and that the two sides can be brought together by the 'salami' technique of dividing 'belief' into lots of little statements, agreeing where you can, and getting to the nub of the differences so that the argument mostly goes away except for a few points of disagreement, which might be negotiated once the rest of the salami of nominal disagreement has been disposed of. If this practical hypothesis is correct, then a number of disagreements can be simplified. I plan to try out the practical approach locally, face to face, and see if it works. On the Science and Faith discussions that we had over the summer, this salami approach is expected to yield great results. We never really defined key terms in enough detail to know where the disconnects might be and how much of the salami we all agreed upon.

September 7, 2014 | Unregistered Commenter Eric Fairfield

Isn't this more or less the same result as seen here? It just substitutes believers/non-believers for liberal/conservative, yes?

Well, as I said last time round, it means it's very hard to tell what's going on with ambiguous and disputable questions like some of these. And as I say too often, for figuring out what's going on in people's heads, it's no use asking what people believe if you don't also ask them why they believe it.

But that's just our 'Groundhog Day' discussion again.

What does it mean? It means that, on average, both sides have more-or-less the same beliefs about what climate scientists think. Not unexpected, of course, since both sides get their information about the beliefs of climate scientists from the same media.

September 7, 2014 | Unregistered Commenter NiV

@NiV:

Pretty much, now that you mention it. Which confirms what @Eric says. Or maybe one could put it this way: what you believe about climate change doesn't reflect what you know; it expresses who you are...

September 7, 2014 | Unregistered Commenter dmk38

Eric -

==> "From a practical point of view, it may mean that Nierenberg was right and that the two sides can be brought together by the 'salami' technique of dividing 'belief' into lots of little statements, agreeing where you can, and getting to the nub of the differences so that the argument mostly goes away except for a few points of disagreement, which might be negotiated once the rest of the salami of nominal disagreement has been disposed of."

At the risk of being somewhat repetitive (something that I may have risked once or twice in the past)....

I believe that this gets to the distinction between discussing "interests" (which can often coincide, or at least be mutually compatible) and "positions" (which tend to be polarizing and mutually exclusive).

Also, as you allude to, one of the first steps in uncovering common interests and synergies is defining terms. What does it really mean to say that someone is a "believer" as compared to a "disbeliever"? Isn't it a fairly empty taxonomy? But I would go further, as I think that even asking people to identify "world views" and "values" nets a fairly empty return. IMO, most people mostly share values, such as desiring that people not suffer or that hard work be rewarded, even though when asked questions about "world view" they might fall into different categories because they identify with different "positions" on how best to realize their values.

September 7, 2014 | Unregistered Commenter Joshua

"You tell me what it means ..."

Ooh, an exam question!

Throughout this response, I am using correctness as a measure of which questions are "difficult" and which are "easy".

Result 1: The overall performance of both groups is structured similarly, with questions that are difficult for one group also being difficult for the other. Interpretation 1: Both groups of the population are about equally receptive to existing avenues of science communication.

Result 2: There remain, at -much- better than 95% confidence, robust differences between the two groups.

Result 3: The relative performance between the groups is correlated with question difficulty. AGW disbelievers get the easier questions wrong more often and the more difficult questions right more often than AGW believers do. Interpretation 3: I am here tempted to attribute a greater degree of critical thought to the disbelievers than to the believers. In general, a clearer thought process is usually necessary to get harder questions right.

Result 4: The crossover point, where one group becomes better than the other, occurs at about 50% correctness.

Interpretation 4A: If interpretation 3 is correct, then there is no obvious reason why the crossover correctness rate should be 50%. If a higher degree of critical thought is truly present in the disbeliever population, I would expect the crossover correctness rate to be higher than 50%. (Admittedly, the crossover correctness is significantly greater than 33%, which would be the expected percentage correct from guessing on a 3-response question.)

Interpretation 4B: An alternative interpretation more consistent with result 4 is that respondents are still picking out the responses that are closely tied to their identity, or fiercely opposing it, and marking their responses under that influence. Given the two-way polarization on climate change as an issue, we'd expect a crossover rate of 50%, because people's responses relative to the population mean will be drawn towards their own polarization, which is where they perceive themselves to be relative to the median of the population. If this interpretation is correct, then the way in which these questions are asked does not effectively dissociate knowledge from identity, as might have been desired.

September 7, 2014 | Unregistered Commenter dypoon

I missed an important point before. Where did you get the error bars? They seem suspiciously small given the similarity of response. Thanks.

September 7, 2014 | Unregistered Commenter Eric Fairfield

"Or maybe one could put it this way: what you believe about climate change doesn't reflect what you know; it expresses who you are..."

Well, it suggests that it doesn't reflect what you know about what climate scientists believe. It could still reflect what you know about other stuff - like how often climate scientists get things wrong, make data up, use dodgy statistics, etc. ... :-)

"I missed an important point before. Where did you get the error bars? They seem suspiciously small given the similarity of response. Thanks."

Those will be 'standard errors' for the mean (or 1.96 times them, depending on convention), calculated from the sample size assuming sampling is independent and uniform. Their smallness simply means a large sample size was used.

September 8, 2014 | Unregistered Commenter NiV

So 1/sqrt(n)?
Are the underlying distributions of 'believers' and 'non-believers' and the wording of the questions sufficiently well understood to know whether this is the appropriate standard error? Does the fact that the groups and the questions have only two states mean that the normal shape of the underlying distribution is the wrong way to look at these results? My guess is that the underlying distributions are not Gaussian or even Lorentzian but have a form that I have forgotten because I never use it. The statistical distribution that underlies the study would, I think, increase the standard error estimates and make the two groups overlap.
Who knows the details of the appropriate statistical analysis of these kinds of studies and is willing to share it?

September 8, 2014 | Unregistered Commenter Eric Fairfield

@Eric & @NiV: In fact, I mistakenly used standard-error CIs rather than 0.95 CIs. Thanks, @Eric, for discerning this.

As @NiV states, the latter are 1.96x as big as the former. I've posted a graphic w/ 0.95 CIs in the "update" field.

As for calculating, I used the conventional formula for the standard error of a percentage, the same one pollsters use when they say that "this survey has a 'margin of error' of +/- 3"--although they leave out "at 0.95 level of confidence," compounding public confusion about what exactly this information is conveying.
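(To make that arithmetic concrete, a minimal sketch in Python; the sample size n = 1,000 is hypothetical, chosen only because it roughly reproduces the familiar "+/- 3" figure:)

```python
import math

# Standard error of a sample proportion: sqrt(p*(1-p)/n).
# Pollsters quote the worst case, p = 0.5, at the 0.95 confidence
# level (z = 1.96). n = 1000 is a hypothetical sample size.
n, p, z = 1000, 0.5, 1.96

se = math.sqrt(p * (1 - p) / n)   # ~0.0158
margin_of_error = z * se          # ~0.031, i.e. "+/- 3 points"

print(f"SE = {se:.4f}, margin of error = +/-{margin_of_error:.1%}")
```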

You guys can go ahead & debate, of course, whether this is the right way to calculate the precision of the estimated means. (I'm sure @Eric realizes that all the variables here--types of people, right & wrong answers--are dichotomous; there's no point thinking about Gaussian or other kinds of distributions that apply to continuous random variables here.)

As I state in the update, though, I think calculating whether the difference in the means is "significant at p = .000000-whatever" is much less informative than noting that the two groups don't have a meaningfully different view of what climate scientists believe. Also, believer or nonbeliever, they tend to credit as "true" any statement that suggests those scientists attribute risk to human behavior.

p.s. I hope no one in this discussion will make the mistake of saying that "non-overlapping 0.95 confidence intervals" is the way to figure out if two means are different at p < 0.05.

September 8, 2014 | Registered Commenter Dan Kahan

Dan -

==> "Also, believer or nonbeliever, they tend to credit as "true" any statement that suggests those scientists attribute risk to human behavior."

Within the framework of climate science and related to "emissions," that is. Surely they wouldn't credit as true just any statement?


==> "If one says one is "believer," odds are about 3:1 that one has political outlooks to left of center"

Just curious - what are the odds that if one says he/she is a "disbeliever" one has political outlooks to the right of center? Looks like they are higher.

September 8, 2014 | Unregistered Commenter Joshua

@Joshua:

Necessarily the same: 3:1. I was treating "disbeliever" as a disbeliever in "human-caused" global warming. So if 75% of "left of center" believe and 25% of "right of center" believe, then necessarily 75% of the right disbelieve & 25% of the left.

You are likely asking what the odds are if one says one disbelieves in any form of global warming, right? In that case, it looks like about 5-6:1.

September 8, 2014 | Registered Commenter Dan Kahan

"So 1/ sqrt(n)? Are the underlying distributions of 'bellievers' and 'non believers' and the wording of the questions sufficiently well understood to know whether this is the appropriate standard error?"

Assuming there's a population probability p and uniform, independent sampling, the number of 'successes' in a sample of size n has a Binomial distribution with mean n*p and standard deviation Sqrt(n*p*(1-p)). Dividing by n, the estimated proportion has standard deviation Sqrt(p*(1-p)/n), which is what's commonly quoted as the 'standard error'.
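(A minimal sketch of that calculation in Python, with a made-up sample size and proportion, checked against simulation:)

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 0.3  # hypothetical sample size and population proportion

# Theoretical mean and SD of the number of 'successes'
mean, sd = n * p, np.sqrt(n * p * (1 - p))
se = np.sqrt(p * (1 - p) / n)  # standard error of the proportion

# Check against 100,000 simulated surveys
draws = rng.binomial(n, p, size=100_000)
print(mean, sd, se)               # 150.0, ~10.25, ~0.0205
print(draws.mean(), draws.std())  # close to the theoretical values
```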

There's no problem with the distributions of believers/non-believers. The issues are around whether p is a constant (it might vary over time from survey to survey), and whether the sampling is uniform and independent. If you sample only people of a certain subclass (e.g. only people who own computers will answer an internet poll) and this is correlated with the variable under study, you get sampling bias. It's rare that you can remove such biases altogether, so the error estimate is always at least a little larger than the SE might lead you to think. If the sampling method is poor it can be a *lot* larger.

Dan says in the figure caption that they're sampled from a limited geographic location, and doesn't say how they were picked, so you have to bear that in mind when interpreting.

The distribution is skewed, and only approximately Normal, so strictly speaking 1.96 SE is wrong. But it's a pretty good approximation so long as n is reasonably large and p is not too close to 0 or 1. Exact asymmetric 95% confidence intervals can be calculated by any statistics software (e.g. in R you can calculate them with the binconf function from the Hmisc package).
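(For those who'd rather not use R, a sketch of the same comparison in Python via statsmodels, with hypothetical counts:)

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical: 150 correct answers out of 500 respondents
count, nobs = 150, 500

# Normal approximation (the symmetric 1.96*SE interval)
lo_n, hi_n = proportion_confint(count, nobs, alpha=0.05, method="normal")

# Exact (Clopper-Pearson) interval -- asymmetric, as noted above
lo_e, hi_e = proportion_confint(count, nobs, alpha=0.05, method="beta")

print(f"normal: ({lo_n:.3f}, {hi_n:.3f})  exact: ({lo_e:.3f}, {hi_e:.3f})")
```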

It's true that non-overlapping confidence intervals isn't quite the right way to test for a significant difference at 95%, but as it turns out, if you're very careful about how you word it, it does actually work. The way to test is to consider the distribution of the difference between the values, which will have SD = sqrt(SD_a^2 + SD_b^2). If both SDs are roughly equal, this is about Sqrt(2) times the individual SD, or 40% bigger. The intervals stop overlapping when the means are about 2 CI half-widths apart, or 100% bigger. So overlapping confidence intervals can still be far enough apart to differ significantly, and non-overlapping confidence intervals of roughly equal magnitude are *definitely* far enough apart.

In other words, if the similar-sized CIs *don't* overlap, they're likely to be significantly different. If the CIs *do* overlap, they might or might not be, you can't tell without a more careful calculation.
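(A minimal sketch of that difference-of-proportions test, with made-up group proportions and sample sizes:)

```python
import math

# Hypothetical proportions and sample sizes for the two groups
p_a, n_a = 0.62, 400
p_b, n_b = 0.55, 420

se_a = math.sqrt(p_a * (1 - p_a) / n_a)
se_b = math.sqrt(p_b * (1 - p_b) / n_b)

# SD of the difference: sqrt(SE_a^2 + SE_b^2)
se_diff = math.sqrt(se_a**2 + se_b**2)
z = (p_a - p_b) / se_diff

# |z| > 1.96 means significant at the 0.05 level (here z ~ 2.0)
print(f"z = {z:.2f}")
```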

September 8, 2014 | Unregistered Commenter NiV

@NiV
Thank you very much. I did not know the details that you just put up.

September 8, 2014 | Unregistered Commenter Eric Fairfield

Another thought about what the results might mean.
In Zimbardo's Stanford Prison Experiment, college students were randomly assigned to be 'guards' or 'prisoners.' The two groups started to behave differently within an hour or so.
If you take the people being polled in this work, assign them at random to 'believers' and 'nonbelievers,' and run the survey again:
1. Do you get statistically significant intergroup differences?
2. Are these the same differences that you reported above?
If so, it may be the assignment of the label that is important, not the underlying 'belief.'
Just sayin'.
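(Eric's reshuffling proposal is essentially a permutation test. A minimal sketch with made-up data, not the survey's:)

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 0/1 answers (1 = correct) and group labels
answers = rng.binomial(1, 0.6, size=800)
labels = np.array([0] * 400 + [1] * 400)  # 0 = 'believer', 1 = 'disbeliever'

observed = answers[labels == 1].mean() - answers[labels == 0].mean()

# Reassign the labels at random many times and see how often a
# difference at least as large appears by chance alone
diffs = []
for _ in range(10_000):
    shuffled = rng.permutation(labels)
    diffs.append(answers[shuffled == 1].mean() - answers[shuffled == 0].mean())

p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed diff = {observed:.3f}, permutation p = {p_value:.3f}")
```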

September 8, 2014 | Unregistered Commenter Eric Fairfield

Dan - yes. I was asking about "no warming..." And yes, it isn't a parallel comparison to the odds on the left for believing in "human caused " warming - so thanks for pointing that out.

September 8, 2014 | Unregistered Commenter Joshua

At the moment, I am trying to understand how CAL(tm) and neural nets converge on making decisions. In neural nets, you can do backpropagation of errors and create a gradient that points the network in the direction of a correct analysis.
In the example that I was learning from, there were four components to the error function. The total error was assumed to be the sum of the squares of the four components. The method minimizes this error function.
It seems that in the graphs above some components of the appropriate error function may be missing. If they are missing, the listed error is smaller than the 'real' error. For instance, random assignment of people to the believer and non-believer groups may show a statistical difference between the two groups given the current measure of standard error, while multiple randomly assigned groups may show larger bootstrap errors and no difference between groups. Stated differently, the group means are not stable and depend on variables that were not measured. The instability of the group means might wipe out the intergroup differences and say that both groups believe the same things.
Thoughts?
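(A minimal sketch of comparing the formula-based standard error with the bootstrap estimate Eric mentions, using made-up responses; with simple random sampling the two agree, so any 'missing' error components would have to come from the sampling process itself:)

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 0/1 responses from one group
responses = rng.binomial(1, 0.55, size=300)

# Formula-based standard error of the proportion
p_hat = responses.mean()
se_formula = np.sqrt(p_hat * (1 - p_hat) / len(responses))

# Bootstrap standard error: resample with replacement many times
boot_means = [rng.choice(responses, size=len(responses), replace=True).mean()
              for _ in range(10_000)]
se_boot = np.std(boot_means)

print(f"formula SE = {se_formula:.4f}, bootstrap SE = {se_boot:.4f}")
```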

September 9, 2014 | Unregistered Commenter Eric Fairfield

"It seems that in the graphs above some components of the appropriate error function may be missing. If they are missing, the listed error is small[er] than the 'real' error"

Yes, but it might be only very slightly smaller.

Sampling bias is an issue for all surveys, which is why it is scientific best practice to report how the sample was selected, and to publish the raw data and calculations so that there is no ambiguity or uncertainty about what was done, or what the evidence for the conclusion is. It enables anyone else (in principle) to check the working, assumptions, conventions, limitations, and meaning. It helps catch errors faster. Plus the knowledge that someone else may be checking your working is an excellent motive for getting it right, and organising your thoughts and methods systematically. The best scientists take advantage of the extra safeguard on their own work this technique offers, and the enormous gain in credibility.

The worst scientists would rather delete their own raw data than let a critic examine it. After all, they might find something wrong with it...

If a survey publishes its method, you can make a judgement as to the confidence you can have in the result. No survey is perfect, but many are pretty good, and the results are likely as accurate as they claim to be, near enough.

No doubt Dan's sampling methodology will be in his forthcoming paper - he's just teasing us with highlights so we'll read it. :-)

September 9, 2014 | Unregistered Commenter NiV
