Wednesday, December 5, 2012

WSMD? JA!, episode 3: It turns out that Independents are just as partisan in cognition as Democrats & Republicans after all!

This is the third episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

Okay, so I was all freaked out by the discovery that Independents are more reflective, in terms of Cognitive Reflection Test scores, than partisans, and was wondering whether this signified that Independents are somehow magical creatures who don't become even more vulnerable to ideologically motivated reasoning as their disposition to engage in analytical, System 2 reasoning becomes more pronounced (one of the findings of the latest CCP study).

Enticed by my promise to share the Nobel Prize in whatever 4 or 5 disciplines would surely award it to us for unravelling this cosmic mystery, Isabel Penraeth (aka "Gemeinschaft Girl") and NiV (aka "NiV") told me to just calm down and use some System 2 thinking. Did it ever occur to you, NiV asked with barely concealed exasperation, that the problem might be that Independents are members of a "cultural in-group" that evades the dopey 1-dimensional left-right measure used in the study? Yes, you fool, added Gemeinschaft Girl, have you even bothered to see whether Independents behave at all differently from Partisans (let's use that term for those who identify as either Republicans or Democrats) when their worldviews are measured with the CCP "group-grid" scales?

Doh! Of course, this is the right way to figure out if there's really any difference in how Independents and Partisans process information. 

The basic hypothesis of the study was that ideologically motivated reasoning is a consequence of a kind of "identity-protective cognition" that reflects the stake people have in forming perceptions of risk and other policy-relevant facts consistent with the ones that predominate in important affinity groups.

This is actually the core idea behind cultural cognition generally. Usually, too -- as in always before now, really -- our studies have used "cultural worldview" scales, derived from the "group-grid" framework of Mary Douglas, to measure the motivating group commitments that we hypothesized drive identity-protective cognition on climate change, gun control, nuclear power, the HPV vaccine, Rock 'n Roll vs. Country, & like issues.

We do that, I've explained, because we think the cultural worldview measures are better than left-right measures. They are more discerning of variations in the outlooks of ordinary, nonpartisan folk, and thus do a better job of locating the source and magnitude of cultural divisions on risk issues.

The reason I used right-left in the most recent study was that I wanted to maximize engagement with the researchers whose interesting ideas motivated me to conduct it. These included the Neo–Authoritarian Personality scholars, whose work is expertly synthesized in Chris Mooney's Republican Brain. They all use right-left measures, which, like I said, I don't think are as good as cultural-worldview ones but are (as I've explained before) plausibly viewed as alternative indicators of the same latent motivating predispositions.

So for crying out loud, why not just see how Independents compare with Partisans when, instead of right-left ideology, cultural worldviews are used as the predictor in the motivated-reasoning experiment described in the study?! Of course, I have the data on the subjects' cultural worldviews; like the participants in all of our studies, they were part of a large, nationally diverse subject pool recruited to take part in cultural cognition studies generally.

As I'm sure you all remember vividly, the experiment tested whether subjects would show motivated reasoning in assessing evidence of the "validity" of Shane "No limit video poker world champion" Frederick's gold-standard "System 1 vs. System 2" Cognitive Reflection Test. Subjects were assigned to one of three conditions: (1) a control group, whose members were told simply that psychologists view the CRT as a valid test of open-mindedness and reflection; (2) a "skeptic-is-biased" condition, whose members were told in addition that "climate skeptics" tend to get lower CRT scores (i.e., are more closed-minded and unreflective); and (3) a "nonskeptic-is-biased" condition, whose members were told that "climate believers" get lower scores (i.e., are more closed-minded and unreflective).

As hypothesized, subjects polarized along ideological lines in patterns that reflected their disposition to fit their assessment of scientific information--here on a test that measures open-mindedness and reflection--to their ideological commitments. So relative to their counterparts in the control, more liberal, Democratic subjects were more likely to deem the CRT valid, and more conservative, Republican ones to deem it invalid, in the "skeptic-is-biased" condition; these positions were flipped in the "nonskeptic-is-biased" condition. Moreover, this effect was magnified by subjects' scores on the CRT itself--i.e., the more disposed subjects were to use analytical rather than heuristic-driven reasoning, the more prone they were to ideologically motivated reasoning.

Necessarily, though, Independents didn't show such an effect (how could they, logically speaking? they aren't left or right to a meaningful degree), and they happened to score a bit higher than Partisans (Dems or Repubs) on the CRT. Hmmmm....

But Independents, just like Democrats and Republicans, have cultural outlooks. So I reanalyzed the study data using the cultural cognition "hierarchy-egalitarianism" and "individualism-communitarianism" worldview scales.

Because climate change is an issue that tends to divide Hierarchical Individualists (HIs) and Egalitarian Communitarians (ECs), my principal hypotheses were (1) that HIs and ECs would display motivated-reasoning effects equivalent to those of conservative Republicans and liberal Democrats, respectively, and (2) that this effect would increase as subjects' CRT reflectiveness scores increased. The competing additional hypotheses: (3a) that Independents wouldn't behave any differently in this respect than Partisans; and (3b) that Independents would be shown to be magic, superhuman (possibly outer-space alien) beings who are immune to motivated cognition.

I had my money (a $10,000 bet made w/ Willard, a super rich guy who doesn't pay any income taxes) on 3a. Independents, like Democrats and Republicans, have cultural worldviews; why wouldn't they be motivated to protect their cultural identities just like everyone else?

Results? Hypotheses (1) and (2) were confirmed. When I just looked at subjects defined in terms of their worldviews, I observed the expected pattern of polarization. Indeed, HIs and ECs reacted in an even more forcefully polarizing manner to the experimental manipulation than did conservative Republicans and liberal Democrats, an effect that should come as no surprise because the culture measures are indeed better--i.e., more discerning--measures of the group dispositions that motivate biased processing of information on risk and other policy-relevant facts.

Next, I compared the size of this culturally motivated reasoning effect for Partisans and Independents, respectively. The regression model that added the appropriate variables for being an Independent did add explanatory power relative to the model that pooled Independents and Partisans. But the effect was associated almost entirely with the tendency of Independents to polarize more forcefully in the "skeptic-is-biased" condition. The same basic pattern--HIs and ECs polarizing in the expected ways, and magnification of that effect by higher CRT scores--obtained among both Partisans and Independents.
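To make that model comparison concrete, here is a minimal sketch of the pooled-vs.-augmented regression logic, fit by ordinary least squares on simulated data. Everything here--the variable names, the coding, and the data--is a hypothetical stand-in for illustration, not the study's actual model or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated stand-ins for the study variables (all hypothetical):
# worldview: continuous score, hierarchy-individualism (+) vs.
#            egalitarian-communitarianism (-)
# skeptic_biased: 1 if assigned to the "skeptic-is-biased" condition
# independent: 1 if the subject identifies as an Independent
worldview = rng.standard_normal(n)
skeptic_biased = rng.integers(0, 2, n)
independent = rng.integers(0, 2, n)

# Outcome: assessed CRT "validity", polarizing via worldview x condition,
# with a slightly stronger reaction among Independents in that condition.
validity = (0.5 * worldview * skeptic_biased
            + 0.2 * worldview * skeptic_biased * independent
            + rng.standard_normal(n))

def r_squared(predictors, y):
    """Fit OLS (with intercept) by least squares and return R^2."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Pooled model: ignores Independent status.
r2_pooled = r_squared(
    [worldview, skeptic_biased, worldview * skeptic_biased], validity)

# Augmented model: adds the Independent indicator and its interaction.
r2_full = r_squared(
    [worldview, skeptic_biased, worldview * skeptic_biased,
     independent, worldview * skeptic_biased * independent], validity)

print(r2_pooled, r2_full)
```

Because the augmented model nests the pooled one, its R-squared can't come out lower; the question, as in the post, is whether the extra Independent terms add *enough* explanatory power to mean anything.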

 You can see that there are some small differences, ones that reflect the relationship I described between being an Independent and being assigned to the "skeptic-is-biased" condition.  But I myself don't view these differences as particularly meaningful; when you start to slice & dice, you'll always see something, so if it wasn't something you were looking for on the basis of a sensible hypothesis, more than likely you are looking at noise.

So I say this is corroboration of hypothesis (3a): Independents are just as partisan in their assessment of information that threatens their cultural identities as political Partisans. I'm done being freaked out!

But hey, if you disagree, tell me! Come up with an interesting hypothesis about how Independents are "different" & I'll test it with our data, if I can, in another episode of WSMD? JA!

 WSMD? JA! episode 1

WSMD? JA! episode 2

Ideology, Motivated Reasoning, and Cognitive Reflection, CCP Working Paper 107 (Nov. 29, 2012)


Reader Comments (6)

That is a satisfying analysis. The world is again wobbling properly on its slightly tilted axis :)

Many thanks!

December 5, 2012 | Unregistered CommenterIsabel Penraeth

Thanks. That's interesting.

I do have a question. (Another one!) You may have answered this before, if so I apologise for being dense. But I recall noting it previously and wondered how you made the particular step.

You say "As hypothesized, subjects polarized along ideological lines in patterns that reflected their disposition to fit their assessment of scientific information [...] to their ideological commitments." Which I agree the experiment shows. But then you go on to phrase it as "[...] the more prone subjects are to ideologically motivated reasoning." and "[...] motivated to protect their cultural identities [...]" which I'm not so sure about, since the experiment doesn't actually analyse their motives. It simply notes a correlation of conclusions.

To explain why I think the distinction matters, I'll give another example of an experiment. The subjects are shown an unusual type of pocket calculator, perhaps one of the old mechanical ones, or a slide rule if they're young enough not to have seen one. The control group is told that scientists consider it to be accurate. In addition to this, group A is shown the calculator performing the calculation 6x7=42, while group B is shown the calculator performing the calculation 6x7=59.

How will the three groups assess the "validity" of this unusual calculator?

All three groups have a prior belief in the multiplication table. One group is merely told that it works, so it is a question there of their trust in scientists. Group A sees a confirming instance where the calculator gets it right. Their confidence in the calculator's functionality is boosted. We can ask whether low or high CRT scorers boost it more. Group B, on the other hand, sees a disconfirmation of the claim, and their confidence in the calculator's functioning drops. And we find that for high CRT scorers it drops a lot more.

As hypothesized, subjects polarized along ideological lines (if mathematics may be considered an ideology) in patterns that reflected their disposition to fit their assessment of scientific information to their prior mathematical commitments. But are they motivated by their support for the multiplication tables, or their need to protect their 'mathematically competent' identities? Do they think the multiplication tables are under any sort of threat they need to be protected against?

I can see how some people might be, but I think most wouldn't be motivated by any sort of defensiveness for arithmetic. They're just straightforwardly processing a set of beliefs founded on extensive personally-observed evidence against the unsupported claims of unknown scientists, and some crazy gadget giving an obviously wrong answer.

And what's more, you won't shift them. It doesn't matter how eminent the scientists are, how weighty their CVs, it doesn't matter how many societies make public statements in support of the new calculator, or how often. It doesn't matter how they simplify the explanation, with colourful pictures. It makes no difference if governments and celebrities speak in support of it. It doesn't matter if they're told 97% of computer scientists all support the new calculator, who are you to doubt them? It doesn't matter how you 'emotionally connect' to them, or appeal to their self-interest. Campaigners can organise stunts and protests to 'raise awareness', tearful children with sad-eyed kittens can implore on TV for them to change their minds, it will make no difference. They will deny that 6x7 can be 59.

The only thing that will even give them pause is if you put six stacks of seven blocks in front of them and let them count all 59 of them. And even then, they'll argue and ask questions, as they try to figure out if and why and how it all went wrong, and what other beliefs they now have to change.

The point is, it might not be a matter of bending your reasoning to fit the belief system because that feels more comfortable. It might be a matter of comparing the quality of evidence. In this experiment, we have the claims of scientists (and you experimenters reporting them) being set against prior beliefs built up over a long time. How do we know it isn't just that people weight the evidence they already have more highly?

It seems like a difficult thing to test. One possibility might be to pick a neutral subject they don't know anything about, and tell them that their cultural group supports it (or that the other side does). Then tell them that the test shows supporters to be less reflective. Now they're not comparing the scientists against their own pre-existing knowledge, only against their cultural loyalties. Would that work?

One might also try to structure it more like your 'protest video' experiment, in which I think you did a good job of eliminating differences in evidence to isolate the cultural influence. Or of course you are welcome to ignore my quibbles and move on. I'm just enjoying myself spinning hypotheses here - I have no expectations that you'll take any of them up, even with your special WSMD JA feature. :-)

December 7, 2012 | Unregistered CommenterNiV

@NiV:

Well. I guess I should say thank you. Thank you for revealing yourself to be part of the satanic conspiracy whose aim is to drive me insane by convincing me that endogeneity between priors & likelihood ratio is "just fine"! You & Maggie Wittlin & Jay Koehler who else?! You will not succeed!

Actually, I think your comment is only partly about that issue (which obsesses & haunts me b/c evil members of the PO <-> LR conspiracy keep tempting me with their sophistical arguments; Rev Bayes warns us that the devil will quote scripture!).

I think the two other issues you raise are:

a. If one's prior probability is 100%, then obviously new information inconsistent with the priors will be treated as having a likelihood ratio of 1. We know 6x7=42. It is an analytical statement, so of course gets "prior" probability of 100% and is properly used to validate the performance of any process for doing multiplication; also to test the expertise of anyone in multiplication. But if the subjects in the experiment are treating the proposition "Republicans [Democrats] are closed-minded" as having probability 1, then they are genuinely idiots; that's not an analytical, but rather an empirical (or "synthetic," as the analytical philosophers would say) proposition. No one whose goal is to think & understand ever treats any empirical proposition as having a probability of 100%. I don't think the subjects in the experiment could plausibly be understood to be taking that position; if they were, they would agree when you asked them "No evidence that I or anyone else could show you would change your mind, right?"
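The logic here can be put in odds form: a Bayesian update multiplies prior odds by the likelihood ratio, and a prior probability of 100% corresponds to infinite prior odds, which no finite likelihood ratio can budge. A minimal sketch (my own illustration, not anything from the study):

```python
def update(prior, likelihood_ratio):
    """Bayesian update in odds form:
    posterior odds = prior odds * likelihood ratio."""
    if prior == 1.0:
        return 1.0  # infinite prior odds: no finite evidence moves it
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A 50% prior moves with the evidence...
p1 = update(0.5, 3.0)    # favorable evidence raises the posterior
p2 = update(0.5, 0.2)    # unfavorable evidence lowers it

# ...but a 100% prior is immune to any likelihood ratio.
p3 = update(1.0, 0.001)
```

On this picture, a subject who genuinely held the "6x7=42"-style certainty would be unmovable; the experimental subjects, who do shift with the manipulation, can't plausibly be modeled that way.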

b. The other issue is internal validity of the design. I get an experimental result that fits a hypothesis: people are conforming the LR to beliefs that are ideologically congenial, thereby supporting the inference that people are motivated to process information in a manner that is identity protective. But all experimental results are underdetermined; that is, there is *always* some other possible explanation -- always! The strength of the inference from corroboration of my hypothesis thus depends on considerations outside the experimental design itself -- on a theory, really, of how the world works. Give me the theory that fits my results & that is different from mine, and we can discuss, on the basis of considerations (necessarily) independent of this experiment, whose theory for that result is more plausible. And then after we've done that for a while, we must stop; b/c as I said in the post, rather than argue until blue in the face about "something else could have ..." responses to experimental results, the more profitable thing to do is do another experiment designed as tightly as possible to test a hypothesis that fits the explanation one has proposed and not others & see what happens. Then do that again. And again. And again and again. If those results all converge, that's the best one can do. Of course, you know that *this* experiment is one of the "again and again" ones in the CCP project.

December 9, 2012 | Registered CommenterDan Kahan

Thanks. Glad to see the question had been asked before. I had a vague recollection it had, but couldn't remember the context or what answer was given.

On a), the example I picked was for a more certain situation than in the experiment, just to get across the point that people's beliefs are based (partially) on evidence that could be stronger than the evidence you give them in the experiment. It wasn't my intention to bring up the issue of 100% priors. My point was that it's unclear whether it is the evidence you give them in this experiment or the evidence in their previous experiences they are assessing in the ideologically biased way.

On b), yes, I think that was my main point. Eliminating *all* the alternatives is what makes science so hard to do. But if you're aware of the limitations that puts on our confidence in the result, that answers my question satisfactorily. I will have another look through the earlier discussion. Thanks.

December 9, 2012 | Unregistered CommenterNiV

The sense-making sentence "No one whose goal is to think & understand ever treats any empirical proposition as having a probability of 100%" suggests that people who profess to believe in God with 100% certainty either do not seek to think and understand, or do not categorise the claim that God exists as an empirical one. Nevertheless, there are people, even scientists, who are clearly driven to think and understand, and who must consider the assertion that God exists to be an empirical claim, who would happily assert that they are 100% certain that God exists. Admittedly, some use the logically indefensible yet broadly socially acceptable notion of "faith" as a recourse to avoid having to defend this unsupported assertion. However, some WILL readily engage in argument to defend their beliefs, giving the impression that they have based their beliefs on reason and evidence. It is this latter camp of believers (i.e. the most "sophisticated") with which I think it might be pertinent to draw parallels with hard-line climate change ideologues.

Perhaps for some, the sort of mental gymnastics (I should probably say "cognitive processes" to avoid littering this post with my personal views, but I won't, damn it!) that allow a thinking person to confidently assert belief in God with 100% conviction might equally allow a person to hold a belief in the non-reality or reality of climate change (and/or its anthropogenic causes) with 100% conviction. I doubt that people would go as far as to agree with the sentence, "no evidence that I or anyone else could show you would change your mind, right," because unlike with religious belief, it is not socially acceptable to admit to 100% conviction in areas of dispute where one can recognise that such certainty can never be TECHNICALLY warranted. They may, nevertheless, BE 100% certain (or very close to it).

The point I'm making is that it is at least plausible that people actually do have a prior probability of 100% (or at least VERY close to 100%) with respect to the factual basis of climate change, and I would argue that the phenomenon of religious conviction shows that this is feasible. Of course, this is pure speculation, but I would like to think that I have made some small contribution to this discussion! Anyway, Christmas party time...

December 23, 2012 | Unregistered CommenterJoshua Lord

@Joshua:
Would be awfully nice to know how Rev. Bayes would have handled that one

December 23, 2012 | Registered CommenterDan Kahan
