Friday, June 12, 2015

"Politically Motivated Reasoning Paradigm" (PMRP): what it is, how to measure it

1. What’s this about. Here are some reflections on measuring the impact of “motivated reasoning” in mass political opinion formation.

They are not materially different from ones I've either posted here previously or discussed in published papers (Kahan 2015; Kahan 2012). But they display points of emphasis that complement and extend those, and thus maybe add something.

In any case, how to measure "motivated reasoning" in this setting demands more reflection—not just from me, but from the scholars doing work in this area, since in my view many of the methods being used are plainly not valid.

2. Terminology. “Identity-protective reasoning” is the tendency of individuals selectively to credit or discredit all manner of evidence on contested issues in patterns that support the position that predominates among persons with whom they share some important, identity-defining affinity (Sherman & Cohen 2006).

This is the form of information processing that creates polarization on politically charged issues like climate change, gun control, nuclear power, the HPV vaccine, and fracking.  Frankly, I don’t think very many people “define” themselves with reference to ideological groups (and certainly not many ordinary ones; only very odd people spend a lot of time thinking about politics). But the persons in the groups with whom they do share ties are likely to share various kinds of important values that have political significance; as a result, political outlooks (and better still, cultural ones) will often furnish a decent proxy (or indicator) for the particular group affinities that define people’s identities.

For simplicity, though, I will just refer to the species of motivated reasoning that figures in the study of mass political opinion formation as “politically motivated reasoning.”

What I want to do is suggest a conception of politically motivated reasoning that simultaneously reflects a cogent account of what it is and a corresponding valid way to assess experimentally what impact, if any, it has.

I will call this the “Politically Motivated Reasoning Paradigm”—or PMRP.

3. Information-processing mechanisms.  In my view, it is useful to specify PMRP in relation to a very basic, no-frills Bayesian information-processing model. Indeed, I think that’s the way to specify pretty much any posited cognitive mechanism of information-processing.  When obliged to identify how the mechanism in question differs from the no-frills Bayesian model, the person giving the account is forced to be clear and precise about the key features of the information-processing dynamic she has in mind. This sort of account, moreover, is the one most likely to enable reflective people to discern forms of empirical investigation aimed at assessing whether the mechanism is real and how it operates.

So start with this figure: 

The Bayesian model (A) not only directs individuals to use new evidence to update their existing or prior belief on the probability of some factual proposition but also tells them to what degree they should adjust that belief: by a factor equal to its “likelihood ratio,” which represents how much more consistent the evidence is with that proposition than some alternative.  The Bayesian “likelihood ratio” is the “weight of the evidence” in practical or everyday terms (Good 1985).
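In code, the no-frills model (A) is nothing more than multiplying odds by the likelihood ratio. A minimal sketch (the numbers are hypothetical):

```python
# Minimal sketch of no-frills Bayesian updating (model A).
# Beliefs are held as odds; new evidence enters only through its
# likelihood ratio -- the "weight of the evidence" in Good's sense.

def update(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: float) -> float:
    """Convert odds to a probability."""
    return odds / (1.0 + odds)

# Hypothetical example: a person who thinks a proposition is 2:1
# against (prior odds 0.5) sees evidence four times more consistent
# with the proposition than with its negation (likelihood ratio 4):
posterior_odds = update(0.5, 4.0)              # 2:1 in favor
posterior_prob = odds_to_prob(posterior_odds)  # probability of 2/3
```

Note that the model says nothing about where the likelihood ratio comes from; that silence is exactly where the mechanisms below enter.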

When an individual displays "confirmation bias" (B), that person credits evidence selectively based on its consistency with his or her existing beliefs.  In relation to a simple Bayesian model, then, confirmation bias involves an endogeneity between priors and likelihood ratio: rather than updating one's priors based on the weight of the evidence, a person assigns weight to the new evidence based on its conformity with his or her priors.

This might well be “consistent” with Bayesianism, which only tells a person what to do with his or her prior odds and likelihood ratio—multiply them together—and not how to derive either. But if one's goal is to form accurate beliefs, one should assign new information a likelihood ratio derived from some set of valid, truth-convergent criteria independent of one’s priors, as in (A)  (Stanovich 2011, p. 135).  If a person determines the likelihood ratio (weight of the new evidence) based entirely on his or her priors, that person will in fact never change his or her position or even how intensely he or she holds it no matter what valid evidence that  individual encounters (Rabin & Schrag 1999). 

In a less extreme case, if such a person incorporates his or her priors along with independent, valid, truth-convergent criteria into his or her determination of the likelihood ratio, that person will, eventually, start to form more accurate beliefs, but at a slower rate than if he or she had determined the likelihood ratio with valid criteria wholly independent of his or her priors.
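A toy simulation makes the contrast concrete. Assume an agent starts 4:1 against a proposition that is in fact true and then sees a stream of evidence that genuinely favors it (every item has a face-value likelihood ratio above one). The `bias` mixing weight is my own illustrative device, not anything from the literature:

```python
import random

def simulate(n_items: int, bias: float, seed: int = 0) -> float:
    """Final odds after n_items pieces of evidence favoring a true proposition.

    bias = 0: likelihood ratio taken at face value (valid, truth-convergent
              criteria independent of priors), so beliefs converge on the truth.
    bias = 1: likelihood ratio derived entirely from priors -- congruent
              evidence credited, incongruent evidence discounted -- so the
              agent only grows more confident in the initial (false) position,
              the Rabin & Schrag extreme case.
    Intermediate values mix the two, yielding slower convergence.
    """
    rng = random.Random(seed)
    odds = 0.25  # starts 4:1 against the (true) proposition
    for _ in range(n_items):
        face_value_lr = rng.uniform(1.5, 3.0)        # evidence favors the truth
        prior_driven_lr = 2.0 if odds > 1 else 0.5   # weight set by priors alone
        odds *= (1 - bias) * face_value_lr + bias * prior_driven_lr
    return odds
```

With `bias=0` the odds quickly climb past even money; with `bias=1` they only fall, i.e., the agent's position never reverses no matter how much valid contrary evidence arrives.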

Again, motivated reasoning refers to the tendency to weight evidence in relation to some external goal or end independent of forming an accurate belief. Reasoning is "politically motivated" when the external goal or end is congruence between one's beliefs and those that predominate among persons who share one's political outlooks (Kahan 2013).  In relation to the Bayesian model (A), then, an ideological predisposition is what determines the likelihood ratio one assigns new evidence (C).

As should be reasonably clear, politically motivated reasoning is not the same thing as confirmation bias.  Under confirmation bias, it is a person's priors, not his or her ideological or political predispositions, that govern the likelihood ratio he or she assigns new information.

Because someone who processes information in an ideologically motivated way will predictably end up with beliefs or priors that reflect his or her ideology, it will often look as if that person is engaged in “confirmation bias” when she assigns weight to the evidence based on its conformity to her political predispositions.  But the appearance is in fact spurious: the person’s priors are not determining his or her likelihood ratio; rather his or her priors and the likelihood ratio he or she assigns to new information are both being determined by that person’s political predispositions (D).

This matters. A theory that posits individuals will conform the likelihood ratio of new information to their political predispositions generates different predictions than one that posits they will simply conform their likelihood ratio of new information to their existing beliefs.  E.g., the former but not the latter furnishes reason to expect systematic partisan differences in assessments of information relating to novel issues, on which individuals have no meaningful priors (Kahan et al. 2009).  The former also helps to identify conditions in which individuals will actually consider counter-attitudinal information open-mindedly (Kahan et al. 2015).
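The polarization prediction is easy to see in a toy model. Two agents begin with identical, neutral priors but opposing predispositions, and both see one and the same mixed evidence stream; because each credits only predisposition-congruent items (dismissing the rest with a likelihood ratio of one), they end up on opposite sides. The dismissal rule and the numbers are illustrative assumptions, not estimates from any study:

```python
def motivated_lr(face_value_lr: float, favors_proposition: bool) -> float:
    """Likelihood ratio assigned under politically motivated reasoning (C/D):
    evidence congruent with the position of one's group is credited at face
    value; incongruent evidence is dismissed (assigned a ratio of one)."""
    congruent = (face_value_lr > 1.0) == favors_proposition
    return face_value_lr if congruent else 1.0

# One and the same mixed evidence stream (face-value likelihood ratios):
evidence = [2.0, 0.5, 3.0, 0.4, 1.8, 0.6]

odds = {"pro": 1.0, "con": 1.0}  # identical, neutral priors
for lr in evidence:
    odds["pro"] *= motivated_lr(lr, favors_proposition=True)
    odds["con"] *= motivated_lr(lr, favors_proposition=False)

# The "pro" agent ends up well above even odds and the "con" agent well
# below: polarization produced by exposure to shared information.
```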

4. Validly measuring "politically motivated reasoning."  Understanding politically motivated reasoning in relation to Bayesianism—and getting how it differs from confirmation bias—also makes it possible to evaluate the validity of study designs that test for politically motivated reasoning.

For one thing, it does not suffice to show (as many invalid studies do) that individuals do not “change their mind” (or that partisans do not converge) when furnished with counter-attitudinal information.  Such a result is consistent with someone actually crediting ideologically noncongruent evidence but persisting in his or her position (albeit with a reduced level of intensity) based on the strength of his or her priors (Gerber & Green 1999).

This design also disregards pre-treatment effects. Subjects who have been bombarded with arguments on issues like global warming or the death penalty prior to the study might disregard counter-attitudinal evidence furnished by the experimenter—that is, assign it a likelihood ratio of one—not because they are biased but because they've already seen and evaluated it or its equivalent (Druckman 2012).

Another common but patently defective design is to furnish partisans with distinct pieces of “contrary evidence.” Those on one side of an issue—the death penalty, say—might be furnished with separate “pro-” and “con-” arguments.  Or “liberals” who are opposed to nuclear power might be shown evidence that it is safe, and “conservatives” who don’t believe in climate change evidence that it is occurring, is caused by humans, and is dangerous.  Then the researcher measures how much partisans of each type “change” their respective positions.

In such a design, it is impossible to determine whether the “contrary” evidence furnished conservatives on the death penalty or on global warming (in my examples) is in fact as strong—has as high a likelihood ratio—as the “contrary evidence” furnished liberals on the death penalty or on nuclear power. Accordingly, the failure of one group to "change its views" or change them to the same extent as the others supports no inferences about the relative impact of their political predispositions on the weight (likelihood ratios) they assigned to the evidence.

The design is invalid, then, plain and simple.

The “most compelling experimental test” of politically motivated reasoning “involves manipulating the hypothesized motivating stake” by changing the perceived ideological significance of the evidence “and then assessing how that manipulation affects the weight individuals of opposing [ideological] identities assign to one and the same piece of evidence (say, a videotape of a political protest)” (Kahan 2015, p. 59).  If the subjects “opportunistically adjust the weight they assign the evidence consistently with its perceived” ideological valence, then they are displaying ideologically motivated reasoning (ibid.).  If they in fact use this form of information processing in the real world, individuals of opposing outlooks will not converge but instead polarize even when they rely on the same information (Kahan et al. 2011).

5. PMRP. That's PMRP, then. Conceptually, PMRP consists in the opportunistic adjustment of the likelihood ratio assigned to evidence based on its conformity to the conclusions associated with one's political outlooks or predispositions.  Methodologically, it is reliably tested for by experimentally manipulating the perceived ideological significance of one and the same piece of evidence and assessing whether individuals, consistent with the manipulation, adjust their assessment of the validity or weight (the likelihood ratio, conceptually speaking) assigned to the evidence.
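The logic of that test can be sketched as a crossover-interaction check. All cell means below are hypothetical; the point is only that the signature of politically motivated reasoning is the reversal, across outlooks, of the effect of manipulating the evidence's perceived ideological valence, not any main effect of outlook or of the manipulation:

```python
# Hypothetical cell means: the weight (say, a 1-7 credibility rating)
# that subjects of each outlook assigned to one and the same piece of
# evidence, by the experimentally manipulated ideological valence of
# that evidence.
mean_weight = {
    ("left",  "left-congruent"):  5.1,
    ("left",  "right-congruent"): 2.9,
    ("right", "left-congruent"):  3.0,
    ("right", "right-congruent"): 5.2,
}

def valence_effects(cells: dict) -> tuple:
    """Effect of the valence manipulation within each outlook group."""
    left_shift = cells[("left", "left-congruent")] - cells[("left", "right-congruent")]
    right_shift = cells[("right", "left-congruent")] - cells[("right", "right-congruent")]
    return left_shift, right_shift

left_shift, right_shift = valence_effects(mean_weight)

# Opposite-signed shifts mean subjects opportunistically adjusted the
# weight assigned to identical evidence to fit its perceived valence.
opportunistic = left_shift > 0 > right_shift
```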

There are many studies that reflect PMRP (e.g., Cohen 2003).  I plan to compile a list of them and to post it “tomorrow.”

But for now, here's a collection of CCP studies that have been informed by PMRP.  They show things like individuals polarizing over whether filmed political protestors resorted to violence against onlookers (Kahan et al. 2012); whether particular scientists are subject matter experts on issues like climate change, gun control, and nuclear power (Kahan et al. 2011); whether the Cognitive Reflection Test is a valid way to measure the open-mindedness of partisans on issues like climate change (Kahan 2013); whether a climate-change study was valid (Kahan et al. 2015); and what inferences are supported by experimental evidence on gun control reported in a 2x2 contingency table (Kahan et al. 2013).

There are many many many more studies that purport to study “politically motivated reasoning” that do not reflect PMRP.  I won’t bother to compile and post a list of those.

6. Blowhard blowdowns of straw people are boring. I will say, though, that scholars who—quite reasonably—are skeptical about “politically motivated reasoning” should not think they are helping anyone to learn anything by pointing out the flaws in studies that don’t conform to PMRP.  The studies that do reflect PMRP were designed with exactly those flaws in mind.

So if one wants to cast doubt on the reality or significance of "politically motivated reasoning"—at least in the minds of people who actually know the state of the scholarship (go ahead and attack straw people if you just want attention and commendation from people who don't)—one should focus on PMRP studies.

References

Cohen, G.L. Party over Policy: The Dominating Impact of Group Influence on Political Beliefs. J. Personality & Soc. Psych. 85, 808-822 (2003).

Druckman, J.N. The Politics of Motivation. Critical Review 24, 199-216 (2012).

Gerber, A. & Green, D. Misperceptions about Perceptual Bias. Annual Review of Political Science 2, 189-210 (1999).

Good, I.J. Weight of evidence: A brief survey. in Bayesian statistics 2: Proceedings of the Second Valencia International Meeting (ed. J.M. Bernardo, M.H. DeGroot, D.V. Lindley & A.F.M. Smith) 249-270 (Elsevier, North-Holland, 1985).

Kahan, D.M. Cognitive Bias and the Constitution. Chi.-Kent L. Rev. 88, 367-410 (2012).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. Laws of Cognition and the Cognition of Law. Cognition 135, 56-60 (2015).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C. & Braman, D. Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication. Annals of the American Academy of Political and Social Science 658, 192-222 (2015).

Kahan, D.M., Hoffman, D.A., Braman, D., Evans, D. & Rachlinski, J.J. They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self-Government. Cultural Cognition Project Working Paper No. 116 (2013).

Rabin, M. & Schrag, J.L. First Impressions Matter: A Model of Confirmatory Bias. Quarterly Journal of Economics 114, 37-82 (1999).

Sherman, D.K. & Cohen, G.L. The Psychology of Self-defense: Self-Affirmation Theory. in Advances in Experimental Social Psychology 183-242 (Academic Press, 2006).

Stanovich, K.E. Rationality and the reflective mind (Oxford University Press, New York, 2011).

Reader Comments (2)

Of possible interest, the first of two "polar facts" articles coming out this week.
"Tracking public knowledge and perceptions about the Arctic".
http://www.arcus.org/witness-the-arctic/2015/2/article/23160

The second is a more analytical piece in Polar Geography, including some results from a very simple "polar knowledge" quiz and its relation to self-assessed "understanding." I see blurring between ideological predispositions and factual beliefs; these might be more separable in a lab than in the wild.

June 14, 2015 | Unregistered CommenterL Hamilton

@L Hamilton-- cool (so to speak)! thanks for the heads up. (I can be smarter than everyone else for at least a few hrs)

June 14, 2015 | Registered CommenterDan Kahan
