Thursday, December 17, 2015

Solving 2 nasty confounds: The "Politically Motivated Reasoning Paradigm [PMRP] Design"

Okay, so “yesterday,” I discussed the significance of two “confounds” in studies of “politically motivated reasoning.”

“Politically motivated reasoning” is the tendency of individuals to conform their assessment of the significance of evidence on contested societal risks and like facts to positions that are congenial to their political or cultural outlooks.

The “confounds” were heterogeneous priors and pretreatment effects. “Today” I want to address how to avoid the nasty effects of these confounds.

The inference-defeating consequences of heterogeneous priors and pretreatment effects are associated with a particular kind of study design. 

In it, the researcher exposes individuals of opposing political or cultural identities to counter-attitudinal information on a hotly contested topic such as gun control or climate change. Typically, the information is in the form of empirical studies or advocacy materials, real or fictional. If the information exposure fails to narrow, or even widens, the gap in the positions of subjects of opposing identities, this outcome is treated as evidence of politically motivated reasoning.

But as I explained in the last post, this inference is unsound.  

Imagine, e.g., that members of one politically identifiable group are more uniformly committed to “their side’s” position than those of another, some of whose members might already be weakly supportive of the first group’s position. If so, we would expect members of the latter group to be overrepresented among the subjects who “change their minds” when both groups are exposed to evidence supportive of the first group’s position. This is the “heterogeneous priors” confound.

You can't judge an experiment by its results; only by its design....

Alternatively, a greater proportion of one group might already have been exposed to evidence equivalent to that featured in the study design. In that case, fewer members of that group would be expected to change their minds—not because they were biased but because they would already have adjusted their beliefs to take account of it. This is the “pretreatment effect” confound.

Put these two confounds together, and it’s clear that, under the design I described, no outcome is genuinely inconsistent with subjects having assessed the information in the “politically unbiased” manner associated with Bayesian information processing (Druckman, Fein & Leeper 2012; Druckman 2012; Bullock 2009; Gerber & Green 1999).
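
To see the point concretely, here is a minimal simulation (a sketch of my own, not drawn from any of the papers cited; the prior distributions and the likelihood ratio are purely illustrative assumptions). Every simulated subject processes the evidence as a flawless Bayesian, assigning it one and the same likelihood ratio, yet the two confounds by themselves keep the groups apart:

    from statistics import mean
    import random

    def update(prior, lr):
        # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
        odds = prior / (1.0 - prior)
        post_odds = odds * lr
        return post_odds / (1.0 + post_odds)

    random.seed(1)
    LR = 4.0  # the weight every unbiased subject gives the pro-proposition evidence

    # Group A: uniformly skeptical priors; the first half are "pretreated" --
    # they already updated on equivalent evidence before the study began.
    group_a = [update(random.uniform(0.02, 0.10), LR) for _ in range(500)]
    group_a += [random.uniform(0.02, 0.10) for _ in range(500)]

    # Group B: heterogeneous priors -- some members weakly favor the proposition.
    group_b = [random.uniform(0.05, 0.60) for _ in range(1000)]

    # In the "experiment," pretreated subjects recognize the evidence as
    # redundant (LR = 1) and, correctly, do not budge; everyone else updates.
    after_a = group_a[:500] + [update(p, LR) for p in group_a[500:]]
    after_b = [update(p, LR) for p in group_b]

    print(f"Group A: {mean(group_a):.2f} -> {mean(after_a):.2f}")
    print(f"Group B: {mean(group_b):.2f} -> {mean(after_b):.2f}")
    # The between-group gap persists (indeed widens) even though no subject
    # weighed the evidence in a politically biased way.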

The solution, then, is to change the design.

That’s one of the central points of The Politically Motivated Reasoning Paradigm (in press). In that paper, I describe studies (e.g., Uhlmann, Pizarro, Tannenbaum & Ditto 2009; Bolsen, Druckman & Cook 2014; Scurich & Shniderman 2014) that use a common strategy to avoid the confounding effects of heterogeneous priors and pretreatment effects. I refer to it as the “PMRP” (for “Politically Motivated Reasoning Paradigm”) design.

Under the PMRP design, the researcher manipulates the subjects’ perception of the consequences of crediting one and the same piece of evidence. What’s compared is not individual subjects’ reported beliefs before and after exposure to information but rather the weight or significance subjects of opposing predispositions attach to the evidence conditional on the experimental manipulation (cf. Koehler 1993). If subjects credit the evidence when they perceive it to be consistent with their political predispositions but dismiss it when it’s not, then we can be confident that it is their politically biased weighing of the evidence, and not any discrepancy in priors or pre-study exposure to evidence, that is driving subjects of opposing cultural or political identities apart.
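
A toy sketch of that comparison may help (the numbers and labels here are hypothetical, invented for illustration, not taken from any actual study). The outcome of interest is the credibility subjects assign to the identical piece of evidence, and the diagnostic quantity is the predisposition-by-manipulation interaction:

    # Mean credibility rating (1-7 scale) assigned to the *identical* evidence,
    # keyed by (political predisposition, manipulated slant of the evidence).
    ratings = {
        ("left",  "helps_left"):  5.8,
        ("left",  "helps_right"): 3.1,
        ("right", "helps_left"):  3.0,
        ("right", "helps_right"): 5.9,
    }

    # The PMRP signature is a crossover interaction: each group's weighting
    # of the same evidence shifts with its manipulated political congruence.
    shift_left = ratings[("left", "helps_left")] - ratings[("left", "helps_right")]
    shift_right = ratings[("right", "helps_right")] - ratings[("right", "helps_left")]
    print(f"opportunistic shift -- left: {shift_left:+.1f}, right: {shift_right:+.1f}")
    # Nonzero shifts in both groups -- not any before/after change in "beliefs" --
    # are what license the inference of politically motivated reasoning.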

One CCP study used the PMRP design to examine how study subjects of opposing cultural identities would assess the behavior of political protestors (Kahan, Hoffman, Braman, Evans & Rachlinski 2012). Instructed to adopt the perspective of jurors in a civil case, the subjects examined a digital recording of demonstrators alleged to have assaulted passersby. The cause and identity of the demonstrators were manipulated: in one condition, they were described as “anti-abortion protestors” assembled outside the entrance to an abortion clinic; in the other, as “gay-rights advocates” protesting the military’s “Don’t ask, don’t tell” policy outside a military-recruitment center.

Subjects of opposing “cultural worldviews” who were assigned to the same experimental condition—and who thus believed they were watching the same type of protest—reported opposing perceptions of whether the protestors “blocked” and “screamed in the face” of pedestrians trying to access the facility. At the same time, subjects who were assigned to different conditions—and who thus believed they were watching different types of protests—formed perceptions that diverged comparably from those of subjects who shared their cultural worldviews.

In line with these opposing perceptions, the results in the two conditions produced mirror-image states of polarization on whether the behavior of the protestors met the factual preconditions for liability. 

But that outcome—an increased state of political polarization, in effect, in “beliefs”—is not, in my view, an essential one under the PMRP design. Indeed, if the issue featured in a study is familiar (like whether human beings are causing climate change, or whether permitting individuals to carry concealed firearms in public increases or decreases crime), we shouldn’t expect a one-shot exposure to evidence in the lab to change subjects' “positions.”

The only thing that matters is whether subjects of opposing outlooks opportunistically shifted the weight (or, in Bayesian terms, the likelihood ratio) they assigned to one and the same piece of evidence based on its congruence with their political predispositions. If that’s how individuals of opposing cultural identities behave outside the lab, then, contrary to what would occur under a Bayesian model of information processing, they will not converge on politically contested facts no matter how much valid evidence they are furnished with.

Or won’t unless and until something is done in the world that changes the stake individuals with such outlooks have in conforming their assessment of evidence to the positions then associated with their cultural identities (Kahan 2015).
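
A minimal illustration of that non-convergence claim (again a sketch of my own; the likelihood ratios are illustrative assumptions): if subjects credit congenial evidence at full weight but invert the weight of uncongenial evidence, a steady stream of equally valid evidence drives them further apart, not together:

    def update(prior, lr):
        # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
        odds = prior / (1.0 - prior)
        post_odds = odds * lr
        return post_odds / (1.0 + post_odds)

    TRUE_LR = 3.0  # the weight the evidence objectively warrants
    p_congenial = p_uncongenial = 0.50
    for _ in range(25):  # 25 successive pieces of equally valid evidence
        p_congenial = update(p_congenial, TRUE_LR)            # credited in full
        p_uncongenial = update(p_uncongenial, 1.0 / TRUE_LR)  # weight inverted
    print(f"congenial: {p_congenial:.3f}, uncongenial: {p_uncongenial:.3f}")
    # -> ~1.000 vs ~0.000: more valid evidence yields *more* polarization.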

The PMRP design is definitely not the only one that validly measures politically motivated reasoning. Indeed, the consistency of the findings of studies that reflect the PMRP design with those based on other designs (e.g., Binning, Brick, Cohen & Sherman 2015; Nyhan & Reifler 2015; Druckman & Bolsen 2011; Bullock 2007; Cohen 2003) furnishes more reason for confidence that the results of both are valid. Nevertheless, the test that the PMRP design is self-consciously constructed to pass—demonstration that individuals are opportunistically adjusting the weight they assign evidence to conform it to their political identities—supplies the proper standard for assessing whether the design of any particular study supports an inference of politically motivated reasoning.

References

Binning, K.R., Brick, C., Cohen, G.L. & Sherman, D.K. Going Along Versus Getting it Right: The Role of Self-Integrity in Political Conformity. Journal of Experimental Social Psychology 56, 73-88 (2015).

Bolsen, T., Druckman, J.N. & Cook, F.L. The influence of partisan motivated reasoning on public opinion. Polit. Behav. 36, 235-262 (2014).

Bullock, J. The enduring importance of false political beliefs. Unpublished manuscript, Stanford University (2007).

Bullock, J.G. Partisan Bias and the Bayesian Ideal in the Study of Public Opinion. The Journal of Politics 71, 1109-1124 (2009).

Cohen, G.L. Party over Policy: The Dominating Impact of Group Influence on Political Beliefs. J. Personality & Soc. Psych. 85, 808-822 (2003).

Druckman, J.N. & Bolsen, T. Framing, Motivated Reasoning, and Opinions About Emergent Technologies. Journal of Communication 61, 659-688 (2011).

Druckman, J.N., Fein, J. & Leeper, T.J. A source of bias in public opinion stability. American Political Science Review 106, 430-454 (2012).

Druckman, J.N. The Politics of Motivation. Critical Review 24, 199-216 (2012).

Gerber, A. & Green, D. Misperceptions about Perceptual Bias. Annual Review of Political Science 2, 189-210 (1999).
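
Koehler, J.J. The influence of prior beliefs on scientific judgments of evidence quality. Organizational Behavior and Human Decision Processes 56, 28-55 (1993).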

Kahan, D. M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences (in press).

Kahan, D. M. What is the “science of science communication”? J. Sci. Comm., 14(3), 1-12 (2015).

Kahan, D. M., Hoffman, D. A., Braman, D., Evans, D., & Rachlinski, J. J. They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).

Nyhan, B. & Reifler, J. The roles of information deficits and identity threat in the prevalence of misperceptions (2015), http://www.dartmouth.edu/~nyhan/opening-political-mind.pdf

Scurich, N. & Shniderman, A.B. The Selective Allure of Neuroscientific Explanations. PLoS One 9 (2014).

Uhlmann, E.L., Pizarro, D.A., Tannenbaum, D. & Ditto, P.H. The motivated use of moral principles. Judgment and Decision Making 4 (2009).
