Friday, July 27, 2012

What do I think of Mooney's "Republican Brain"?

Everyone knows that science journalist Chris Mooney has written a book entitled The Republican Brain. In it, he synthesizes a wealth of social science studies in support of the conclusion that having a conservative political outlook is associated with lack of reflection and closed-mindedness.

I read it. And I liked it a lot.

Mooney possesses the signature craft skills of a first-rate science journalist, including the intelligence (and sheer determination) necessary to critically engage all manner of technical material, and the expositional skill required to simultaneously educate and entertain.

He’s also diligent and fair-minded.

And of course he’s spirited: he has a point of view plus a strong desire to persuade—features that for me make the experience of reading Mooney’s articles and books a lot of fun, whether I agree with his conclusions (as often I do) or not.

As it turns out, I don’t feel persuaded of the central thesis of The Republican Brain. That is, I’m not convinced that the mass of studies that it draws on supports the inference that Republicans/conservatives reason in a manner that is different from and less reasoned than Democrats/liberals.

The problem, though, is with the studies, not Mooney’s synthesis.  Indeed, Mooney’s account of the studies enabled me to form a keener sense of exactly what I think the defects are in this body of work. That’s a testament to how good he is at what he does.

In this, the first of two additional posts (this issue is impossible to get away from), I’m going to discuss what I think the shortcomings in these studies are. In the next post, I’ll present some results from a new study of my own, the design of which was informed by this evaluation.

1. Validity of quality-of-reasoning measures

The studies Mooney assembles are not all of a piece, but the ones that play the largest role in the book and in the literature correlate ideology or party affiliation with one or another measure of cognitive processing and conclude that conservatism is associated with “lower” quality reasoning or closed-mindedness.

These measures, though, are of questionable validity. Many are based on self-reporting; "need for cognition," for example, literally just asks people whether the "notion of thinking abstractly is appealing to" them, etc. Others use various personality-style constructs, like the “authoritarian” personality, that researchers believe are associated with dogmatism. Evidence that these sorts of scales actually measure what they say they measure is sparse.

Objective measures—ones that measure performance on specific cognitive tasks—are much better. The best of these, in my view, are the “cognitive reflection test” (CRT), which measures the disposition to check intuition with conscious, analytical reasoning, and “numeracy,” which measures quantitative reasoning capacity and includes CRT as a subcomponent.

These measures have been validated. That is, they have been shown to predict—very strongly—the disposition of people either to fall prey to or avoid one or another form of cognitive bias. 
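To make the CRT concrete, here is a minimal scoring sketch. The three items and their reflective vs. intuitive answers are from Frederick (2005), cited below; the dictionary keys and the scoring function are my own illustration, not any official instrument code.

```python
# Hypothetical CRT scorer. Each item has a correct ("reflective") answer
# and a tempting intuitive lure that unreflective responders tend to give.
CRT_ANSWERS = {
    "bat_and_ball_cents": {"reflective": 5,  "intuitive": 10},   # ball costs 5 cents
    "widget_minutes":     {"reflective": 5,  "intuitive": 100},  # 100 machines, 100 widgets
    "lily_pad_days":      {"reflective": 47, "intuitive": 24},   # half-covered lake
}

def score_crt(responses):
    """Return (# reflective answers, # intuitive lures taken)."""
    reflective = sum(responses.get(k) == v["reflective"] for k, v in CRT_ANSWERS.items())
    intuitive  = sum(responses.get(k) == v["intuitive"]  for k, v in CRT_ANSWERS.items())
    return reflective, intuitive

print(score_crt({"bat_and_ball_cents": 10, "widget_minutes": 5, "lily_pad_days": 47}))  # (2, 1)
```

The second tally matters because answering with the lure, rather than with some other wrong answer, is what signals a failure to check intuition.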

As far as I know, CRT and numeracy don’t correlate in any clear way with ideology, cultural predispositions, or the like. Indeed, I myself have collected evidence showing they don’t (and have talked with other researchers who report the same).

2. Relationship between quality-of-reasoning measures and motivated cognition

Another problem: it’s not clear that the sorts of things that even a valid measure of reasoning quality gets at have any bearing on the phenomenon Mooney is trying to explain. 

That phenomenon, I take it, is the persistence of cultural or ideological conflict over risks and other facts that admit of scientific evidence. Even if those quality-of-reasoning measures that figure in the studies Mooney cites are in fact valid, I don’t think they furnish any strong basis for inferring anything about the source of controversy over policy-relevant science. 

Mooney believes, as do I, that such conflicts are likely the product of motivated reasoning—which refers to the tendency of people to fit their assessment of information (not just scientific evidence, but argument strength, source credibility, etc.) to some end or goal extrinsic to forming accurate beliefs. The end or goal in question here is promotion of one’s ideology or perhaps securing of one’s connection to others who share it.

There’s no convincing evidence I know of that the sorts of defects in cognition measured by quality of reasoning measures (of any sort) predict individuals’ vulnerability to motivated reasoning.

Indeed, there is strong evidence that motivated reasoning can infect or bias higher level processing—analytical or systematic, as it has been called traditionally; or “System 2” in Kahneman’s adaptation—as well as lower-level, heuristic or “System 1” reasoning.

We aren’t the only researchers who have demonstrated this, but we did in fact find evidence supporting this conclusion in our recent Nature Climate Change study. That study found that cultural polarization—the signature of motivated reasoning here—is actually greatest among persons who are highest in numeracy and scientific literacy. Such individuals, we concluded, are using their greater facility in reasoning to nail down even more tightly the connection between their beliefs and their cultural predispositions or identities.

So, even if it were the case that liberals or Democrats scored “higher” on quality of reasoning measures, there’s no evidence to think they would be immune from motivated reasoning. Indeed, they might just be even more disposed to use it and use it effectively (although I myself doubt that this is true; as I’ve explained previously, I think ideologically motivated reasoning is uniform across cultural and ideological types.)

3. Internal validity of motivated reasoning/biased assimilation experiments

The way to figure out whether motivated reasoning is correlated with ideology or culture is with experiments. There are some out there, and Mooney mentions a few.  But I don’t think those studies are appropriately designed to measure asymmetry of motivated reasoning; indeed I think many of them are just not well designed period.

A common design simply measures whether people with one or another ideology or perhaps existing commitment to a position change their minds when shown new evidence. If they don’t—and if in fact, the participants form different views on the persuasiveness of the evidence—this is counted as evidence of motivated reasoning.

Well, it really isn’t. People can form different views of evidence without engaging in motivated reasoning. Indeed, their different assessments of the evidence might explain why they are coming into the experiment in question with different beliefs.  The study results, in that case, would be showing only that people who’ve already considered evidence and reached a result don’t change their mind when you ask them to do it again. So what?

Sometimes studies designed in this way, however, do show that “one side” budges more in the face of evidence that contradicts their position (on nuclear power, say) than the other does on that issue or on some other (say, climate change).

Well, again, this is not evidence that the one that’s holding fast is engaged in motivated reasoning. Again, those on that side might have already considered the evidence in question and rejected it; they might be wrong to reject it, but because we don’t know why they rejected it earlier, their disposition to reach the same conclusion again does not show they are engaged in motivated reasoning, which consists in a disposition to attend to information in a selective and biased fashion oriented to supporting one’s ideology.

Indeed, the evidence that challenges the position of the side that isn’t budging in such an experiment might in fact be weaker than the evidence that is moving the other side to reconsider. The design doesn’t rule this out—so the only basis for inferring that motivated reasoning is at work is whatever assumptions one started with, which gain no additional support from the study results themselves.

There is, in my view, only one compelling way to test the hypothesis that motivated reasoning explains the evaluation of information. That’s to experimentally manipulate the ideological (or cultural) implications of the information or evidence that subjects are being exposed to. If they credit that evidence when doing so is culturally/ideologically congenial, and dismiss it when doing so is ideologically uncongenial, then you know that they are fitting their assessment of information (the likelihood ratio they assign to it, in Bayesian terms) to their cultural or ideological predispositions.
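The Bayesian logic of that design can be sketched in a few lines. The prior and the likelihood-ratio values below are made-up numbers for illustration only; the point is that two subjects who see the same evidence but, for motivated reasons, assign it opposite likelihood ratios will be driven apart rather than together.

```python
def posterior_prob(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Unmotivated reasoning: the likelihood ratio assigned to a piece of evidence
# does not depend on whose ideology crediting it would serve.
# Motivated reasoning: the same evidence gets LR > 1 when crediting it is
# culturally congenial and LR < 1 when crediting it is uncongenial.
congenial   = posterior_prob(0.5, 3.0)      # evidence credited
uncongenial = posterior_prob(0.5, 1.0 / 3.0)  # same evidence dismissed

print(round(congenial, 2), round(uncongenial, 2))  # 0.75 0.25
```

Starting from identical 50/50 priors, the two subjects end up polarized at 0.75 and 0.25, which is why manipulating congeniality while holding the evidence fixed isolates motivated reasoning.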

CCP has done studies like that. In one, e.g., we showed that individuals who watched a video of protestors reported perceiving them to be engaged in intimidating behavior—blocking, obstructing, shouting in onlookers’ faces, etc.—when the subjects believed the protest involved a cause (either opposition to abortion rights or objection to the exclusion of gays and lesbians from the military) that was hostile to their own values. If the subjects were told the protestors’ cause was one that affirmed the subjects' own values, then they saw the protestors as engaged in peaceful, persuasive advocacy.

That’s motivated reasoning.  One and the same piece of evidence—videotaped behavior of political protests—was seen one way or another (assigned a likelihood ratio different from or equal to 1) depending on the cultural congeniality of seeing it that way.

In another study, we found that subjects engage in motivated reasoning when assessing the expertise of scientists on disputed risk issues. In that one, how likely subjects were to recognize a scientist as an “expert” on climate change, gun control, or nuclear power depended on the position that scientist was represented to be taking. We manipulated that—while holding the qualifications of the scientist, including his membership in the National Academy of Sciences, constant.

Motivated reasoning is unambiguously at work when one credits or discredits the same piece of evidence depending on whether it supports or contradicts a conclusion that one finds ideologically appealing. And again we saw that process of opportunistic, closed-minded assessment of evidence at work across cultural and ideological groups.

Actually, Mooney discusses this second study in his book. He notes that the effect size—the degree to which individuals selectively afforded or denied weight to the view of the featured scientist depending on the scientist’s position—was larger among individuals who subscribe to a hierarchical, individualistic worldview than among those who subscribe to an egalitarian, communitarian one. The former tend to be more conservative, the latter more liberal.

As elsewhere in the book, he was reporting with perfect accuracy here.

Nevertheless, I myself don’t view the study as supporting any particular inference that conservatives or Republicans are more prone to motivated reasoning. Both sides (as it were) displayed motivated reasoning—plenty of it. What’s more, the measures we used didn’t allow us to assess the significance of any difference in the degree of it that each side displayed. Finally, we’ve done other studies, including the one involving the videotape of the protestors, in which the effect sizes were clearly comparable across groups.

But here’s the point: to be valid, a study that finds asymmetry in ideologically motivated reasoning must allow the researcher to conclude both that subjects are selectively crediting or discrediting evidence conditional on its congruence with their cultural values or ideology and that one side is doing so to a degree that is both statistically and practically more pronounced than the other.
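A quick simulation shows what such a design has to estimate. Everything here is fabricated for illustration (the group labels, effect sizes, and sample size are invented): both simulated groups shift their credibility ratings toward congenial evidence, and the asymmetry claim rests on the difference between the two shifts—the interaction—not on either shift alone.

```python
import random

def congeniality_effect(shift, n=2000, rng=None):
    """Mean rating gap between congenial and uncongenial conditions for one group."""
    congenial   = [rng.gauss(5.0 + shift, 1.0) for _ in range(n)]
    uncongenial = [rng.gauss(5.0, 1.0) for _ in range(n)]
    return sum(congenial) / n - sum(uncongenial) / n

rng = random.Random(42)
effect_a = congeniality_effect(2.0, rng=rng)  # hypothetical "side A"
effect_b = congeniality_effect(1.5, rng=rng)  # hypothetical "side B"

# Both sides display motivated reasoning (both effects are well above zero);
# the asymmetry hypothesis concerns only the interaction term, which a valid
# study must show to be statistically and practically distinguishable from zero.
asymmetry = effect_a - effect_b
```

A study that reports only `effect_a`, or only that one side "didn't budge," never estimates `asymmetry` at all, which is the design defect described above.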

Studies that don’t do that might do other things, like supply occasions for sneers and self-congratulatory pats on the back among those who treat cheering for "their" political ideology as akin to rooting for their favorite professional sports team (I know Mooney certainly doesn’t do that).

But they don’t tell us anything about the source of our democracy’s disagreements about various forms of policy-relevant science.

In the next post in this “series,” I’ll present some evidence that I think does help to sort out whether an ideologically uneven propensity to engage in ideologically motivated reasoning is the real culprit. 


References

Chen, S., Duckworth, K. & Chaiken, S. Motivated Heuristic and Systematic Processing. Psychological Inquiry 10, 44-49 (1999).

Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).

Kahan, D.M., Hoffman, D.A., Braman, D., Evans, D. & Rachlinski, J.J. They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Clim. Change advance online publication (2012).

Liberali, J.M., Reyna, V.F., Furlan, S., Stein, L.M. & Pardo, S.T. Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment. Journal of Behavioral Decision Making, advance online publication (2011).

Mooney, C. The Republican Brain: The Science of Why They Deny Science—and Reality. (John Wiley & Sons, Hoboken, NJ; 2012).

Toplak, M., West, R. & Stanovich, K. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition 39, 1275-1289 (2011).

Weller, J.A., Dieckmann, N.F., Tusler, M., Mertz, C.K., Burns, W.J. & Peters, E. Development and Testing of an Abbreviated Numeracy Scale: A Rasch Analysis Approach. Journal of Behavioral Decision Making, advance online publication (2012).

Reader Comments (2)

Fascinating. Thank you for bringing this information to my attention.

I do have some questions.
For instance, in the "protesters" experiment you mentioned: imagining myself as a subject in that experiment, I realised that if they were advocating a position that I reviled, I would just take the question "Are they being civil?" very literally. In contrast, if they were advocating a position that I endorsed, I would interpret the question as, "Is it alright to offend people to make progress in society? To what extent do the ends justify the means?"
So it seems as though the context could put one in a certain frame of mind. Perhaps subjects were actually answering two different questions, determined by the mindset induced by the context.

Similarly, in the "expert" experiment, I think when faced with the expert as the good guy, my attitude was something like, "well, I could try to discredit him, and he won't be perfect. No one is perfect. But at least he isn't an obstacle in making progress on this issue." In other words, it is a lesser priority to discredit someone who is an idiot and who is right by mistake, than it is to discredit a genius who is part of a problem in society.
So apathy may be a factor here.

If one child in a classroom gets a maths question right, and another gets it wrong, and you only have time for one child, then you know that you have a better chance of making a positive difference if you concentrate on the child who got it wrong. The one who got it right may have simply been lucky with an incorrect method, but it is more likely that he is simply right. In contrast, you definitely know that there is at least 1 problem that you could identify and correct for the other child. Devoting more effort to someone who appears to be wrong is simply logical.

In other words, if you actually are a scientist who miraculously does not suffer from any motivated reasoning whatsoever and came to your conclusion on climate change due to very comprehensive and rigorous study, then you cannot be certain that all people who agree with you are thinking clearly, but you could be reasonably sure that all people who came to a different conclusion have gone wrong somewhere (or if not, then you could learn from them).

And this greater amount of effort put into people who disagree could contribute to the identification of more flaws in their reasoning. This could be partially driving the results found in the "expert" experiment.
I would also expect people who are thinking more clearly to be more concerned with assessing the accuracy of claims themselves more so than judging people. So I would expect a clearer thinker to perceive the questions posed in these experiments to possibly be on the irrelevant side.

After all, you hold a position because you think it is correct. So you don't have a reason to necessarily expect to find anything wrong with the arguments of an "ally". And you don't necessarily have a reason to be worried about them causing harm on that issue. You have much more reason to expend more effort on those who you perceive to be mistaken. Such behaviour doesn't seem particularly irrational to me. Merely practical.

Consider a person who takes on faith a physicist simply because he also thinks the world is round, yet looks for faults in the words of a purported physicist who thinks that the world is flat. Should we condemn the listener for thinking that the guy who got it right is more trustworthy? Is this the problematic behaviour that we are trying to find and fix?

It makes me wonder if we still have to figure out exactly what problematic mechanism we are trying to identify in people. Perhaps what we are looking for is a genuine open-mindedness to being mistaken, or convinced of a different position on an issue. I feel like these experiments mentioned in your article are very close, but not quite there.

July 29, 2012 | Unregistered CommenterGuest

These are good points. I will address them in a future post that tries to unpack the logic of confirmation bias more completely.

For now, I'll just say one thing, which relates to the protestor study. If the outcome measures in the study had been framed in a manner that would have made it logically consistent for someone to come out one way in the case of an abortion-clinic protestor and another in a case of a DADT protestor on the ground that one is "offensive" and the other not, the study would have been poorly designed.

In fact, the law says one *can't* draw distinctions of that sort in deciding who gets to speak. One can limit expressive activities only on the basis of harms that can be defined independently of negative reactions to the message or ideas being expressed. "Offensiveness" definitely fails this test. "Incivility" is not a good standard either b/c it is vague; it is "uncivilized" to "block" and "push" people -- things that impose harms on others independent of their or anyone else's negative appraisals of any message the blockers and pushers might be expressing; but it might also be considered "uncivilized" to make a person who is doing something feel shame or guilt or sadness or anger -- yet that is *not* a harm that is independent of their negative appraisal of the idea being expressed by the person who causes such a reaction in another w/ speech. See Kagan, E. Private Speech, Public Purpose: The Role of Governmental Motive in First Amendment Doctrine. U. Chi. L. Rev. 63, 413 (1996).

Accordingly, in the study, we used outcome measures that involved perceptions of harms unrelated to the "communicative impact" of the protestors' speech. Did the protestors shove, block, scream in onlookers' faces, etc.? The goal of the study was to test the hypothesis that people's perceptions of noncommunicative harms would be unconsciously influenced by the negative appraisals of ideas and messages that the Constitution says cannot be a basis for restricting expressive activity.

I think this point is separate from -- and less interesting than! -- the main one you put. So let me say more about that, and in a post, so that others who might have the same reaction are more likely to see our exchange.

August 5, 2012 | Registered CommenterDan Kahan
