
What do I think of Mooney's "Republican Brain"?

Everyone knows that science journalist Chris Mooney has written a book entitled The Republican Brain. In it, he synthesizes a wealth of social science studies in support of the conclusion that having a conservative political outlook is associated with lack of reflection and closed-mindedness.

I read it. And I liked it a lot.

Mooney possesses the signature craft skills of a first-rate science journalist, including the intelligence (and sheer determination) necessary to critically engage all manner of technical material, and the expositional skill required to simultaneously educate and entertain.

He’s also diligent and fair-minded.

And of course he’s spirited: he has a point of view plus a strong desire to persuade—features that for me make the experience of reading Mooney’s articles and books a lot of fun, whether I agree with his conclusions (as often I do) or not.

As it turns out, I don’t feel persuaded of the central thesis of The Republican Brain. That is, I’m not convinced that the mass of studies that it draws on supports the inference that Republicans/conservatives reason in a manner that is different from and less reasoned than Democrats/liberals.

The problem, though, is with the studies, not Mooney’s synthesis.  Indeed, Mooney’s account of the studies enabled me to form a keener sense of exactly what I think the defects are in this body of work. That’s a testament to how good he is at what he does.

In this, the first of two (additional; this issue is impossible to get away from) posts, I’m going to discuss what I think the shortcomings in these studies are. In the next post, I’ll present some results from a new study of my own, the design of which was informed by this evaluation.

1. Validity of quality-of-reasoning measures

The studies Mooney assembles are not all of a piece, but the ones that play the largest role in the book and in the literature correlate ideology or party affiliation with one or another measure of cognitive processing and conclude that conservatism is associated with “lower” quality reasoning or closed-mindedness.

These measures, though, are of questionable validity. Many are based on self-reporting; "need for cognition," for example, literally just asks people whether the "notion of thinking abstractly is appealing to" them, etc. Others use various personality-style constructs, like the “authoritarian” personality, that researchers believe are associated with dogmatism. Evidence that these sorts of scales actually measure what they purport to is sparse.

Objective measures—ones that measure performance on specific cognitive tasks—are much better. The best of these, in my view, are the “cognitive reflection test” (CRT), which measures the disposition to check intuition with conscious analytical reasoning, and “numeracy,” which measures quantitative reasoning capacity and includes CRT as a subcomponent.

These measures have been validated. That is, they have been shown to predict—very strongly—the disposition of people either to fall prey to or avoid one or another form of cognitive bias. 

As far as I know, CRT and numeracy don’t correlate in any clear way with ideology, cultural predispositions, or the like. Indeed, I myself have collected evidence showing they don’t (and have talked with other researchers who report the same).
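Since the post leans on CRT as the benchmark objective measure, here is a minimal sketch of how such a test is scored. The three items and their intuitive-but-wrong versus correct answers are the well-known ones from Frederick (2005); the scoring code itself is my own illustration, not taken from any study.

```python
# Sketch of CRT scoring: each item has a lure (the intuitive answer)
# and a correct answer that requires overriding that intuition.
CRT_ITEMS = [
    # (question, intuitive answer, correct answer)
    ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
     "than the ball. How much does the ball cost (in cents)?", 10, 5),
    ("If it takes 5 machines 5 minutes to make 5 widgets, how long "
     "would it take 100 machines to make 100 widgets (in minutes)?", 100, 5),
    ("A patch of lily pads doubles daily and covers a lake in 48 days. "
     "How many days to cover half the lake?", 24, 47),
]

def crt_score(answers):
    """Count correct responses; higher scores indicate a stronger
    disposition to check intuition with conscious reflection."""
    return sum(1 for (_, _, correct), given in zip(CRT_ITEMS, answers)
               if given == correct)

print(crt_score([5, 5, 47]))     # fully reflective respondent: 3
print(crt_score([10, 100, 24]))  # fully intuitive respondent: 0
```

The point of the objective format is visible in the code: the lure answers are built into the items, so the score reflects performance on a task rather than self-description.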

2. Relationship between quality-of-reasoning measures and motivated cognition

Another problem: it’s not clear that the sorts of things that even a valid measure of reasoning quality gets at have any bearing on the phenomenon Mooney is trying to explain. 

That phenomenon, I take it, is the persistence of cultural or ideological conflict over risks and other facts that admit of scientific evidence. Even if those quality-of-reasoning measures that figure in the studies Mooney cites are in fact valid, I don’t think they furnish any strong basis for inferring anything about the source of controversy over policy-relevant science. 

Mooney believes, as do I, that such conflicts are likely the product of motivated reasoning—which refers to the tendency of people to fit their assessment of information (not just scientific evidence, but argument strength, source credibility, etc.) to some end or goal extrinsic to forming accurate beliefs. The end or goal in question here is promotion of one’s ideology or perhaps securing of one’s connection to others who share it.

There’s no convincing evidence I know of that the sorts of defects in cognition measured by quality-of-reasoning measures (of any sort) predict individuals’ vulnerability to motivated reasoning.

Indeed, there is strong evidence that motivated reasoning can infect or bias higher level processing—analytical or systematic, as it has been called traditionally; or “System 2” in Kahneman’s adaptation—as well as lower-level, heuristic or “System 1” reasoning.

We aren’t the only researchers who have demonstrated this, but we did in fact find evidence supporting this conclusion in our recent Nature Climate Change study. That study found that cultural polarization—the signature of motivated reasoning here—is actually greatest among persons who are highest in numeracy and scientific literacy. Such individuals, we concluded, are using their greater facility in reasoning to nail down even more tightly the connection between their beliefs and their cultural predispositions or identities.

So, even if it were the case that liberals or Democrats scored “higher” on quality-of-reasoning measures, there’s no reason to think they would be immune from motivated reasoning. Indeed, they might just be even more disposed to use it, and to use it effectively (although I myself doubt that this is true; as I’ve explained previously, I think ideologically motivated reasoning is uniform across cultural and ideological types).

3. Internal validity of motivated reasoning/biased assimilation experiments

The way to figure out whether motivated reasoning is correlated with ideology or culture is with experiments. There are some out there, and Mooney mentions a few.  But I don’t think those studies are appropriately designed to measure asymmetry of motivated reasoning; indeed I think many of them are just not well designed period.

A common design simply measures whether people with one or another ideology or perhaps existing commitment to a position change their minds when shown new evidence. If they don’t—and if in fact, the participants form different views on the persuasiveness of the evidence—this is counted as evidence of motivated reasoning.

Well, it really isn’t. People can form different views of evidence without engaging in motivated reasoning. Indeed, their different assessments of the evidence might explain why they are coming into the experiment in question with different beliefs.  The study results, in that case, would be showing only that people who’ve already considered evidence and reached a result don’t change their mind when you ask them to do it again. So what?

Sometimes studies designed in this way, however, do show that “one side” budges more in the face of evidence that contradicts their position (on nuclear power, say) than the other does on that issue or on some other (say, climate change).

Well, again, this is not evidence that the side that’s holding fast is engaged in motivated reasoning. Those on that side might have already considered the evidence in question and rejected it. They might be wrong to reject it, but because we don’t know why they rejected it earlier, their disposition to reach the same conclusion again does not show they are engaged in motivated reasoning, which consists in a disposition to attend to information in a selective, biased fashion oriented to supporting one’s ideology.

Indeed, the evidence that challenges the position of the side that isn’t budging in such an experiment might in fact be weaker than the evidence that is moving the other side to reconsider. The design doesn’t rule this out—so the only basis for inferring that motivated reasoning is at work is whatever assumptions one started with, which gain no additional support from the study results themselves.

There is, in my view, only one compelling way to test the hypothesis that motivated reasoning explains the evaluation of information. That’s to experimentally manipulate the ideological (or cultural) implications of the information or evidence that subjects are being exposed to. If they credit that evidence when doing so is culturally/ideologically congenial, and dismiss it when doing so is ideologically uncongenial, then you know that they are fitting their assessment of information (the likelihood ratio they assign to it, in Bayesian terms) to their cultural or ideological predispositions.

CCP has done studies like that. In one, e.g., we showed that individuals who watched a video of protestors reported perceiving them to be engaged in intimidating behavior—blocking, obstructing, shouting in onlookers’ faces, etc.—when the subjects believed the protest involved a cause (either opposition to abortion rights or objection to the exclusion of gays and lesbians from the military) that was hostile to their own values. If the subjects were told the protestors’ cause was one that affirmed the subjects' own values, then they saw the protestors as engaged in peaceful, persuasive advocacy.

That’s motivated reasoning.  One and the same piece of evidence—videotaped behavior of political protests—was seen one way or another (assigned a likelihood ratio different from or equal to 1) depending on the cultural congeniality of seeing it that way.
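The Bayesian framing here can be made concrete with a toy calculation. This is my own illustration with hypothetical numbers, not data from the protest study: an unbiased assessor assigns one and the same piece of evidence one likelihood ratio, whatever conclusion it favors; a motivated assessor's likelihood ratio flips with the cultural congeniality of the conclusion.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def posterior_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

prior = 1.0  # even odds, before watching the video, that the protest was peaceful

# Hypothetical numbers: the same video gets LR > 1 when "peaceful" is the
# congenial conclusion, and LR < 1 when it is uncongenial.
lr_congenial = 3.0
lr_uncongenial = 1.0 / 3.0

print(posterior_odds(prior, lr_congenial))    # evidence credited
print(posterior_odds(prior, lr_uncongenial))  # same evidence dismissed
```

The signature of motivated reasoning in this scheme is that the likelihood ratio itself, not just the prior, depends on which conclusion the assessor would prefer.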

In another study, we found that subjects engage in motivated reasoning when assessing the expertise of scientists on disputed risk issues. In that one, how likely subjects were to recognize a scientist as an “expert” on climate change, gun control, or nuclear power depended on the position that scientist was represented to be taking. We manipulated that—while holding the qualifications of the scientist, including his membership in the National Academy of Sciences, constant.

Motivated reasoning is unambiguously at work when one credits or discredits the same piece of evidence depending on whether it supports or contradicts a conclusion that one finds ideologically appealing. And again we saw that process of opportunistic, closed-minded assessment of evidence at work across cultural and ideological groups.

Actually, Mooney discusses this second study in his book. He notes that the effect size (the degree to which individuals selectively afforded or denied weight to the view of the featured scientist depending on the scientist’s position) was larger among individuals who subscribed to a hierarchical, individualistic worldview than among those who subscribed to an egalitarian, communitarian one. The former tend to be more conservative, the latter more liberal.

As elsewhere in the book, he was reporting with perfect accuracy here.

Nevertheless, I myself don’t view the study as supporting any particular inference that conservatives or Republicans are more prone to motivated reasoning. Both sides (as it were) displayed motivated reasoning—plenty of it. What’s more, the measures we used didn’t allow us to assess the significance of any difference in the degree of it that each side displayed. Finally, we’ve done other studies, including the one involving the videotape of the protestors, in which the effects were clearly comparable in size.

But here’s the point: to be valid, a study that finds asymmetry in ideologically motivated reasoning must allow the researcher both to conclude that subjects are selectively crediting or discrediting evidence conditional on its congruence with their cultural values or ideology and to conclude that one side is doing so to a degree that is both statistically and practically more pronounced than the other.

Studies that don’t do that might do other things--like supply occasions for sneers and self-congratulatory pats on the back among those who treat cheering for "their" political ideology as akin to rooting for a favorite professional sports team (I know Mooney certainly doesn’t do that).

But they don’t tell us anything about the source of our democracy’s disagreements about various forms of policy-relevant science.

In the next post in this “series,” I’ll present some evidence that I think does help to sort out whether an ideologically uneven propensity to engage in ideologically motivated reasoning is the real culprit. 

Posts 2 & 3


Chen, S., Duckworth, K. & Chaiken, S. Motivated Heuristic and Systematic Processing. Psychological Inquiry 10, 44-49 (1999).

Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).

Kahan, D.M., Hoffman, D.A., Braman, D., Evans, D. & Rachlinski, J.J. "They Saw a Protest": Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change, advance online publication (2012).

Liberali, J.M., Reyna, V.F., Furlan, S., Stein, L.M. & Pardo, S.T. Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment. Journal of Behavioral Decision Making, advance online publication (2011).

Mooney, C. The Republican Brain: The Science of Why They Deny Science—and Reality (John Wiley & Sons, Hoboken, NJ; 2012).

Toplak, M., West, R. & Stanovich, K. The Cognitive Reflection Test as a Predictor of Performance on Heuristics-and-Biases Tasks. Memory & Cognition 39, 1275-1289 (2011).

Weller, J.A., Dieckmann, N.F., Tusler, M., Mertz, C.K., Burns, W.J. & Peters, E. Development and Testing of an Abbreviated Numeracy Scale: A Rasch Analysis Approach. Journal of Behavioral Decision Making, advance online publication (2012).


Reader Comments (3)

Fascinating. Thank you for bringing this information to my attention.

I do have some questions.
For instance, in the "protesters" experiment you mentioned: if I imagine myself as a subject in that experiment, I realise that if the protesters were advocating a position that I reviled, I would take the question "Are they being civil?" very literally. In contrast, if they were advocating a position that I endorsed, I would interpret the question as, "Is it alright to offend people to make progress in society? To what extent do the ends justify the means?"
So it seems as though the context could put one in a certain frame of mind. Perhaps subjects were actually answering two different questions, determined by the mindset induced by the context.

Similarly, in the "expert" experiment, I think when faced with the expert as the good guy, my attitude was something like, "well, I could try to discredit him, and he won't be perfect. No one is perfect. But at least he isn't an obstacle in making progress on this issue." In other words, it is a lesser priority to discredit someone who is an idiot and who is right by mistake, than it is to discredit a genius who is part of a problem in society.
So apathy may be a factor here.

If one child in a classroom gets a maths question right, and another gets it wrong, and you only have time for one child, then you know that you have a better chance of making a positive difference if you concentrate on the child who got it wrong. The one who got it right may have simply been lucky with an incorrect method, but it is more likely that he is simply right. In contrast, you definitely know that there is at least one problem that you could identify and correct for the other child. Devoting more effort to someone who appears to be wrong is simply logical.

In other words, if you actually are a scientist who miraculously does not suffer from any motivated reasoning whatsoever and came to your conclusion on climate change due to very comprehensive and rigorous study, then you cannot be certain that all people who agree with you are thinking clearly, but you could be reasonably sure that all people who came to a different conclusion have gone wrong somewhere (or if not, then you could learn from them).

And this greater amount of effort put into people who disagree could contribute to the identification of more flaws in their reasoning. This could be partially driving the results found in the "expert" experiment.
I would also expect people who are thinking more clearly to be more concerned with assessing the accuracy of claims themselves more so than judging people. So I would expect a clearer thinker to perceive the questions posed in these experiments to possibly be on the irrelevant side.

After all, you hold a position because you think it is correct. So you don't have a reason to necessarily expect to find anything wrong with the arguments of an "ally". And you don't necessarily have a reason to be worried about them causing harm on that issue. You have much more reason to expend more effort on those who you perceive to be mistaken. Such behaviour doesn't seem particularly irrational to me. Merely practical.

Consider a person who takes on faith a physicist simply because he also thinks the world is round, yet looks for faults in the words of a purported physicist who thinks that the world is flat. Should we condemn the listener for thinking that the guy who got it right is more trustworthy? Is this the problematic behaviour that we are trying to find and fix?

It makes me wonder if we still have to figure out exactly what problematic mechanism we are trying to identify in people. Perhaps what we are looking for is a genuine open-mindedness to being mistaken, or convinced of a different position on an issue. I feel like these experiments mentioned in your article are very close, but not quite there.

July 29, 2012 | Unregistered CommenterGuest

These are good points. I will address them in a future post that tries to unpack the logic of confirmation bias more completely.

For now, I'll just say one thing, which relates to the protestor study. If the outcome measures in the study had been framed in a manner that made it logically consistent for someone to come out one way in the case of an abortion-clinic protest and another in the case of a DADT protest, on the ground that one is "offensive" and the other not, the study would have been poorly designed.

In fact, the law says one *can't* draw distinctions of that sort in deciding who gets to speak. One can limit expressive activities only on the basis of harms that can be defined independently of negative reactions to the message or ideas being expressed. "Offensiveness" definitely fails this test. "Incivility" is not a good standard either b/c it is vague: it is "uncivilized" to "block" and "push" people, things that impose harms on others independent of their (or anyone else's) negative appraisals of any message the blockers and pushers might be expressing; but it might also be considered "uncivilized" to make a person feel shame or guilt or sadness or anger, yet that is *not* a harm independent of a negative appraisal of the idea being expressed by the person who causes such a reaction in another w/ speech. See Kagan, E. Private Speech, Public Purpose: The Role of Governmental Motive in First Amendment Doctrine. U. Chi. L. Rev. 63, 413 (1996).

Accordingly, in the study we used outcome measures that involved perceptions of harms unrelated to the "communicative impact" of the protestors' speech. Did the protestors shove, block, scream in onlookers' faces, etc.? The goal of the study was to test the hypothesis that people's perceptions of noncommunicative harms would be unconsciously influenced by the negative appraisals of ideas and messages that the Constitution says cannot be a basis for restricting expressive activity.

I think this point is separate from, and less interesting than, the main one you raise. So let me say more about that in a post, so that others who might have the same reaction are more likely to see our exchange.

August 5, 2012 | Registered CommenterDan Kahan

Dr Kahan's comments were made in 2012.

A few years on, Chris Mooney's thesis is not only well supported; it now appears understated.

I think the difficulty is that Dr. Kahan does not have a background in psychology from which to comment on what are psychologically heavy issues. It also comes across that Dr. Kahan, and even Chris Mooney, missed much of the literature.

For example, Dr. Kahan completely missed all the work showing that right-wing authoritarianism (RWA) and social dominance orientation (SDO), both associated with conservatism, predict prejudice:

Roets, A., & Van Hiel, A. (2006). Need for closure relations with authoritarianism, conservative beliefs and racism: The impact of urgency and permanence tendencies. Psychologica Belgica, 46, 235-252.

More powerfully, Asbrock et al. showed in a longitudinal study that these traits are predictive of prejudice:

Asbrock et al. "Right-Wing Authoritarianism and Social Dominance Orientation and the Dimensions of Generalized Prejudice: A Longitudinal Test." European Journal of Personality 24, 324-340 (2010).

The dual process model by Duckitt and Sibley, cited below, is a parsimonious account of all this research.

Duckitt, J., & Sibley, C.G. (2009). A dual process model of ideological attitudes and system justification. In J.T. Jost, A.C. Kay, & H. Thorisdottir (Eds.), Social and Psychological Bases of Ideology and System Justification (pp. 292-313). New York: Oxford University Press.

In simple terms, what lies behind conservatives' (and especially RWAs') greater inclination to be prejudiced is their reliance on intuition to guide their judgments and actions. Attitudes operate in two modes, automatic and deliberate (Devine, 1989a), and Devine also showed that some people are better than others at controlling and inhibiting their automatic attitudes. Automatic attitudes are cultural associations that become ingrained in people's minds without effort.

So, all the pieces of evidence fit together nicely. RWAs rely on intuitions and simple rules (order good, security good, and so on); their gut tells them what is good or not. Hence, today we see Donald Trump announce that he will publish crimes perpetrated by immigrants: gut reactions rather than thinking lead the way.

Now, to further bolster Mooney's thesis, RWAs tend to have lower cognitive resources. This thesis has been around for many decades and has received empirical support from Deary et al. (2008), who found that higher IQ at age 10 predicted liberal attitudes at age 30. This makes sense because liberals tend to score higher on openness to experience, which in turn correlates with IQ at about .30.

Further support has come from Heaven (2011) and Dhont and Hodson (2014). Probably the most impressive results come from Onraet et al. (2015), a large meta-analysis of over 84,000 people from 67 studies. I hardly need to mention that meta-analyses are powerful, like the longitudinal research mentioned earlier.

Research by Kossowska et al. below demonstrates that RWAs show a number of cognitive deficits that would predispose people toward seeking certainty and forgoing deliberate thought when not deemed necessary. Among these are lower IQ, slower processing speed, a lower ability to inhibit prepotent responses, and lower working memory (a necessity for reasoning). Hence the preference for more heuristic thinking.

A relationship between cognitive ability and political beliefs has also been found by Carl (2015).

Kossowska, M., Orehek, E., & Kruglanski, A.W. Motivation Towards Closure and Cognitive Resources: An Individual Differences Approach (Chapter 22).

The recent election saw 81% of White Evangelicals vote for Donald Trump. This was predicted (though not this exceptionally high number) prior to the election. Religious people, especially evangelicals, are known to rely more on intuition and to have lower IQs, even when controlling for socioeconomic status.

Here is an abstract for a recent piece of research.

With Donald Trump the Republican nominee and Hillary Clinton the Democratic nominee for the 2016 U.S. Presidential election, speculations of why Trump resonates with many Americans are widespread - as are suppositions of whether, independent of party identification, people might vote for Hillary Clinton. The present study, using a sample of American adults (n = 406), investigated whether two ideological beliefs, namely, right-wing authoritarianism (RWA) and social dominance orientation (SDO) uniquely predicted Trump support and voting intentions for Clinton. Cognitive ability as a predictor of RWA and SDO was also tested. Path analyses, controlling for political party identification, revealed that higher RWA and SDO uniquely predicted more favorable attitudes of Trump, greater intentions to vote for Trump, and lower intentions to vote for Clinton. Lower cognitive ability predicted greater RWA and SDO and indirectly predicted more favorable Trump attitudes, greater intentions to vote for Trump and lower intentions to vote for Clinton.

So, I would argue that a very expansive review of the literature would agree with the arguments of Jost et al. (2003), Kruglanski (1996), and Altemeyer (1981, 1988, 1998).

Jost, J.T., & Krochik, M. (2014) have recently produced a comprehensive review of the literature on this subject.

I have added in only a tiny proportion of the literature on prejudice that supports their arguments.

Dr. Kahan condemns some of the measures; however, I assure readers that measures of RWA are now well validated, as are measures of cognitive ability. What is more, these theories make predictions that have been borne out, which implies some real understanding of what is going on.

I think Dr. Kahan could benefit from wider reading to examine the converging and mutually supportive evidence, and from a second look at the methodology.

From the standpoint of 2017, Mooney did not go far enough. It is a book well worth reading.

March 1, 2017 | Unregistered CommenterDenis Daly
