
"Now I'm here ... now I'm there ...": If you look, our dualistic identity-expressive/science-knowledge-acquiring selves go through only one slit

From correspondence with a thoughtful person: on the connection between the "toggling" of identity-expressive and science-knowledge-revealing/acquiring information processing & the "science communication measurement problem."

So tell me what you think of this:


I think it is a variant of [what Lewandowsky & Kirsner (2000) call] partitioning.

When the "according to climate scientists ..." prefix is present, the subjects access "knowledge of science"; when it is not, they access "identity-enabling knowledge" -- or some such.  

Why do I think that?

Well, as you know,  it's not easy to do, but it is possible to disentangle what people know from who they are on climate change with a carefully constructed climate-science literacy test.

Of course, most people aren't very good climate-science literacy test takers ("they can tell us what they know -- just not very well!"). The only people who are particularly good are those highest in science comprehension.

Yet consider this!

"WTF!," right?

I had figured the "person" who might help us the most to understand this sort of thing was the high science-comprehension "liberal/Democrat."

She was summoned, you see, because some people thought that the reason the high science-comprehension "conservative/Republican" "knows" climate change will cause flooding when the prefix is present yet "knows" it won't otherwise is that he simply "disagrees" with climate scientists; b/c he knows they are "corrupt, dishonest, stupid commies" & the like.

I don't think he'd say that, actually. But I've never been able to find him to ask...

So I "dialed" the high-science comprehension "liberal/democrat."

"When you answer 'false' to 'according to climate scientists, nuclear power generation contributes to global warming,'" I asked her, "are you thinking, 'But I know better--those corrupt, stupid, dishonest commies' or the like?"

"Don't be ridiculous!," she said. "Of course climate scientists are right about that-- nuclear power doesn't emit CO2 or any other greenhouse gas. "  "Only an idiot," she added, "would see climate scientists as corrupt, stupid, dishonest etc."  A+!

So I asked her why, then, when we remove the prefix, she does say that nuclear power causes global warming.

She replied: "Huh? What are you talking about?"

"Look," I said, "it's right here in the data: the 'liberal democrats' high enough in science comprehension to know that nuclear power doesn't cause global warming 'according to climate scientists' are the people most likely to answer 'true' to the statement 'nuclear power generation contributes to global warming' when one removes the 'according to climate scientists' prefix. "

"Weird," she replied.  "Who the hell are those people? For sure that's not me!"

Here's the point: if you look, the high-science comprehension "liberal/democrat" goes through only one slit. 

If you say, "according to climate scientists," you see only her very proficient science-knowledge acquirer self.

But now take the prefix away and "dial her up" again, and you see someone else--or maybe just someone's other self.

"That's a bogus question," she insists. "Nuclear power definitely causes global warming; just think a bit harder-- all the cement . . . .  Hey, you are a shill for the nuclear industry, aren't you!"



... She has been forced to be her (very proficient) identity-protective self.

And so are we all by the deformed political discourse of climate change ...

"Here I stand . . . "


Lewandowsky, S. & Kirsner, K. Knowledge partitioning: Context-dependent use of expertise. Memory & Cognition 28, 295-305 (2000).
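The "partitioning"/toggling conjecture can be made concrete with a toy simulation. Everything below is invented for illustration: the function names, and especially the probabilities, are assumptions, not estimates from the actual study data.

```python
import random

random.seed(1)

def answers_correctly(high_osi, prefix):
    """Toy 'knowledge partitioning' model for an item (like the nuclear
    power one) whose scientifically correct answer conflicts with the
    respondent's identity-congruent position. With the 'according to
    climate scientists' prefix, the respondent draws on science
    knowledge; without it, on identity-expressive knowledge.
    All probabilities are invented for illustration."""
    if prefix:
        p = 0.90 if high_osi else 0.55   # science-knowledge mode
    else:
        p = 0.15 if high_osi else 0.40   # identity-expressive mode:
                                         # higher OSI, more reliably "wrong"
    return random.random() < p

def correct_rate(n, **kw):
    """Proportion of n simulated respondents answering correctly."""
    return sum(answers_correctly(**kw) for _ in range(n)) / n

with_prefix = correct_rate(10_000, high_osi=True, prefix=True)
without_prefix = correct_rate(10_000, high_osi=True, prefix=False)
print(with_prefix, without_prefix)  # roughly 0.9 vs 0.15
```

On this toy model the same simulated "person" looks like a proficient science-knowledge acquirer when the prefix is present and a proficient identity-protective reasoner when it is absent, which is the one-slit point of the post.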


Reader Comments (9)

Q1. Does the position of the planets in the solar system affect everyday events in our personal lives?

Q2. According to astrologers does the position of the planets in the solar system affect everyday events in our personal lives?


""That's a bogus question," she insists. "Nuclear power definitely causes global warming; just think a bit harder-- all the cement ..."

Well yes. Precisely. She's right.

Nuclear power emits less CO2 than coal, but not none. The strict and only scientifically accurate answer to the question asked (as opposed to the question intended) is 'true'. Someone motivated to examine the issue more thoroughly is more likely to realise this, and will be more inclined to be perverse and take the question literally. What's more interesting is that even knowing the right answer, their opinion of scientists is such that they'll still expect a scientist to give the expected answer rather than the right answer. That doesn't say a lot for scientists' reputation these days.

Personally, I'd expect a real scientist to give the right answer on both your questions, which is to say "it's more complicated than that", unless they're in very strict soundbite territory - or are facing a badly designed multiple choice questionnaire. But I don't think my expectations are very typical.

For which reason, I'd suggest labeling your axes "probability of 'true' response" rather than "correct response", as the latter prejudges a complicated/controversial issue, and potentially distracts from your main point. By labeling one position as 'correct' you're triggering people's partisan responses to the test question and turning half your audience off your paper. That's surely the precise opposite of what you recommend we do when communicating science, yes?

April 12, 2016 | Unregistered CommenterNiV


For which "nuclear power" question is the correct answer "true"? The one w/ the prefix or without?

April 12, 2016 | Registered CommenterDan Kahan

To give her the benefit of the doubt -- maybe the prefix causes her to think of the public statements climate scientists make, in which the detail regarding cement does not matter. So she may be right, and believe the scientists also know she is right, but also be reporting accurately on what they say, which is how she interprets "according to..."

But I don't really buy it. The cement issue is SO tiny, on a per-joule basis. Test: will she also give the same answer for hydro? Dams are often made of cement... And solar probably releases just as much in panel manufacture.

April 13, 2016 | Unregistered CommenterAn Igyt

"For which "nuclear power" question is the correct answer "true"? The one w/ the prefix or without?"


Nuclear power *does* contribute to global warming, albeit to a much smaller degree than coal, oil, or gas power stations. The answer to the question without the prefix is 'true'. (This was what I was talking about above.)

Some climate scientists will be pedantic and correctly say that it does, and other climate scientists will consider both alternatives offered to be misleading, and will offer the technically incorrect but in practice less misleading answer that it doesn't. (And I'm sure some scientists will be opining on a topic outside their area of expertise and simply be wrong.) Since there are scientists who would say it would, the answer to the question with the prefix is also true. Had you instead asked the question of whether scientists said that nuclear power *didn't* contribute, the answer to that would be 'true' as well. When you don't specify whether you mean "some scientists" or "all scientists", the common assumption is that "some" is implied. Had you asked "do *all* climate scientists say..." the answer would be 'false', because some climate scientists don't. In this case, you probably meant 'most' - for which, as we've said before, you need a survey. I don't actually know the answer to that one.

The practical point of the question is to identify policies to *reduce* emissions, and switching to nuclear would do so. Everyone understands the political context, and so would know what answer was expected to support or oppose the political claim behind it. This contextual interpretation and translation of language is so smooth and automatic that most people don't even realise they're doing it. It takes considerable practice for scientists to learn to *stop* doing it, and to see and answer the question that is *actually there*, not the one they think ought to have been asked. It's all about precision in language.

When people "lie with statistics", this is mostly what they're doing. They make statements that are mathematically and technically true, but which they know people will interpret in a way that leads them to a false conclusion. People tend to see what they expect to see, and fill in the gaps with what they already 'know'; but the most interesting bits of nature are often counter-intuitive, and scientists learn more if they can find a way to see only what's really there. It's really hard.

It's confusing. But it's also important. Imprecise language frequently leads to bad science.

April 13, 2016 | Unregistered CommenterNiV


If the answer is "true" to both, then the high OSI liberal/democrat shouldn't change her answer depending on whether the prefix is there.

Also, if the answer is "true," then there is something really messed up w/ the high OSI "conservative republican," since he is more likely to say "true" than anyone else, w/ or w/o the prefix.

The test is an experimental device for testing conjectures on how the experience of doing a "test" interacts w/ identity-expressive information processing. The conjectures are associated with predictions about how the prefix will *interact* w/ political outlooks & differences in science comprehension.

Your points disregard both the conjectures & the evidence that the figures supply in relation to them. They really just aren't engaging w/ the post.

For my purposes, it doesn't matter what the right answer is. But given that those highest in OSI pretty decidedly disagree w/ you & given that, in any case, the IPCC 5th Assessment says that "[t]he electricity from nuclear power does not contribute to direct GHG emissions" (Mitigation of Climate Change vol., Ch. 7, p. 535), I think it would be a pretty big mistake for anyone who wants to construct a valid climate science literacy assessment test to treat "true" as the answer that someone who actually understands climate science would be more likely to give.

April 15, 2016 | Registered CommenterDan Kahan


I agree w/ you.

The question refers to "power generation" precisely to focus on operating plants & not building them in any case.

And any answer that makes the high OSI liberal/Democrat "right" b/c she is so darn high in OSI has to then explain why the high OSI conservative/Republican is so darn wrong.

The intricacy of the scheme of ad hoc explanations that have to keep being invented to explain the pattern of results on grounds other than the exceedingly simple one that informed the design of this study (and the one before that) and that predicted these results would make Ptolemy blush.

April 15, 2016 | Registered CommenterDan Kahan

"If the answer is "true" to both, then the high OSI liberal/democrat shouldn't change her answer depending on whether the prefix is there."

Why not? Are you assuming that her intention is to give the correct answer?

My point is that the technically correct answer is "true", but in the easily recognisable political context it's obvious to subjects that the surveyor actually *intended* a different question. Cooperative subjects automatically substitute the question you *intended* in place of the one *written on the page*. Uncooperative ones with the technical knowledge to be able to do so take the opportunity to mess up your expectations by correctly answering the question you actually asked.

The critical issue here is that the question can, by the methods we naturally use for parsing language, be interpreted in different ways that depend directly on the political context. People of different political belief systems will interpret it in different ways - both as a result of coming to different understandings, and, as in this case, by making different decisions in the face of the same understanding.

It's a problem the AI community researching natural language processing ran into. Most human speech is highly ambiguous and incomplete - it provides just enough clues that with the anticipated context another human can deduce what was actually meant, but without an extensive database of background knowledge about the way the world works, how people think, the sort of questions people ask or don't ask, and more, is utterly incomprehensible. We do it so easily and automatically that we mostly don't even realise we're doing it, which is why the AI researchers were so surprised to discover how hard a problem it is.

"Also, if the answer is "true," then there is something really messed up w/ the high OSI "conservative republican," since he is more likely to say "true" than anyone else, w/ or w/o the prefix."

The reverse effect happens here. The Republican is also well aware of what question you really intended, and as I did, a high-OSI example would likely spot the ambiguity. But in this case, the political motivation is now to pick the intended interpretation because it messes with the Democrat political position. Republicans like nuclear power just on general principles, so if they can get it implemented using global warming as a justification (and get Democrat heads to explode with cognitive dissonance as a bonus!), they'll do that, even without believing in the dangers of global warming.

"The test is an experimental device for testing conjectures on how the experience of doing a "test" interacts w/ identity-expressive information processing"

It's fine as a test of how people will answer questions - my issue with it is that you keep on using it as a tool to identify what they *believe*.

Your high-OSI Democrat is well-aware that you actually intended to ask whether nuclear power contributes less to global warming than the alternative methods of power generation, is well aware that it does, but knows that answering the question you intended will be interpreted (in the political context) as support for nuclear power on global warming grounds, when she wants to oppose it on safety/pollution/non-proliferation grounds. You don't give her an opportunity to explain her reservations and caveats, and so to avoid you misusing her answer for political purposes she opposes, she takes advantage of the ambiguity. She does, however, know that many climate scientists support nuclear on global warming grounds and will interpret the question as intended, and so answers accordingly when the prefix is applied. However, her response is still in accord with her beliefs, which have not changed.

People can (sometimes genuinely, sometimes deliberately) interpret a question differently. People can make different choices as to what they think *you* intend, depending on what they know or assume about you. People can deliberately lie about their beliefs to manipulate a survey they know is going to be used politically. Or they can deliberately lie about their beliefs to avoid embarrassment or confrontation, or adverse effects on their personal or professional lives. Or just to make themselves seem like a nicer person. People are complex.

From the point of view of showing that conformity with politics affects people's responses to surveys, the difference is immaterial. It doesn't matter whether the conflict changes their beliefs or merely their choice of interpretation. But from the point of view of speculating about mechanisms, these sorts of issues matter.

"For my purposes, it doesn't matter what the right answer is."

Sure. But what *does* matter is whether the question is ambiguous, and whether the ambiguity plays into the actual positions and belief systems people hold. You have to know a bit about the subject (or know people who do) to construct good questions.

Multiple-choice survey questions where *none* of the options offered express your position accurately, and where there is no opportunity given to clarify, are very frustrating! As ever, it would make it a heck of a lot easier to resolve these sorts of disputes if there was routinely a text box next to the multiple choice asking the subject to explain their reasoning. ("Show your working!") Self-justifications cannot, of course, be entirely trusted, but it would give much more material with which to construct a better survey, and to generate better hypotheses about mechanisms. If it becomes apparent that most people were not interpreting a question as the experimenter expected, the results for that question can be discounted or modified appropriately. I can only assume the reason experimenters don't do it is that it would result in a lot more work.

April 17, 2016 | Unregistered CommenterNiV


So the answer is ... you agree with me: the questions measure different things: expressions of identity vs. knowledge of what science knows.


April 18, 2016 | Registered CommenterDan Kahan

"So the answer is ... you agree with me: he questions measure different things: expressions of identity vs. knowldge of what science knows."

Yes, the questions measure different things. I think it's still speculation in the absence of data to say what those things are. Nor do I think it's necessarily the same thing for every question of the type.

I don't think the "scientist say..." prefix necessarily measures what *science* knows, it only measures what the current fashion among scientists is. I think that when people give different answers depending on the prefix, it's often because they think scientists are wrong about the science. In the case of the nuclear power question, the high-OSI liberal believes that *science knows* that nuclear power generation contributes to global warming, but believes that *scientists say* it doesn't because they're trying to keep the message simple, and want to persuade the public to use nuclear power.

Likewise, I think *both* questions - with and without the prefix - can be affected by identity. People's beliefs about "what science knows" can be affected by their identity-based priors and background context, but so can their assessment of what "scientists say". For example, the classic '97%' statistic is credulously believed by people of one political persuasion, but more frequently disbelieved by those of the other. (Even if low-OSI conservatives don't know the actual figure is about 80-85%, I would expect them to be sceptical about the most extreme claims of consensus).

People's answers to questions, even on scientific topics, are obviously affected by their politics. Sometimes this may reflect a difference in belief. Sometimes it may reflect a difference in interpretation. I think that people of minority beliefs are usually well-aware of when a majority of scientific opinion is against them - but I find it implausible that very many of them are so mentally inconsistent as to believe the scientists are right while simultaneously supporting a different position themselves, on purely ideological grounds. (Although beliefs can certainly vary depending on context - a physicist can believe in Newtonian gravity when doing simple orbital calculations but disbelieve in it when doing relativistic ones.) I do think in at least *some* cases that people will lie about their true beliefs in order to fit in ideologically with their bit of society, but I don't believe this applies to *all* cases of a conflict between what people say and what they know scientists say. Especially given the persecution some people face from society for holding to their non-mainstream beliefs. Contrarians by definition aren't doing it to fit in.

I also agree with you that these are interesting results that need an explanation. I just think more research is needed to pin down what that explanation is. People are complex and complicated.

April 19, 2016 | Unregistered CommenterNiV
