In a previous post, I acknowledged that a very excellent study by Liu & Ditto had some findings in it that were supportive of the “asymmetry thesis”—the idea that motivated reasoning and like processes more heavily skew the factual judgments of “conservatives” than “liberals.” Still, I said that “there's just [so] much more valid & compelling evidence in support of the 'symmetry' thesis—that ideologically motivated reasoning is uniform ... across ideologies—” that I saw no reason to “substantially revise my view of the likelihood” that the asymmetry position is actually correct.
An evil genius named Nick asks:
So what (~) likelihood ratio would you ascribe to this study for the hypothesis that the asymmetry thesis does not exist? And how can we be sure that you aren't using your prior to influence that assessment? ….
You acknowledge Liu & Ditto’s findings do support the asymmetry thesis, yet you state, without much explanation, that you “don't view the Liu and Ditto finding of "asymmetry" as a reason to substantially revise my view of the likelihood that that position is correct.”
… One way to think about it is that your LR for the Liu & Ditto study as it relates to the asymmetry hypothesis should be ~ equal to the LR from a person who is completely ignorant (in an E.T. Jaynes sense) about the Cultural Cognition findings that bear on the hypothesis. It is, of course, silly to think this way, and certainly no reader of this blog would be in this position, but such ignorance would provide an ‘unbiased’ estimate of the LR associated with the study. [note that is amenable to empirical testing.]
You may simply have been stating that your prior on the asymmetry hypothesis is so low that the LR for this study does not change your posterior very much. That is perfectly coherent, but I would still be interested in what’s happening to your LR (even if its effect on the posterior is trivial).
Well, of course, readers can’t be sure that my priors (1,000:1 that the “asymmetry thesis” is false) didn’t contaminate the likelihood ratio I assigned to L&D’s finding of asymmetry in their 2nd study (0.75; resulting in revised odds that "asymmetry thesis is false" = 750:1).
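(For readers who want to see the arithmetic spelled out: the update I'm describing is just the odds form of Bayes' rule. Here's a minimal sketch in Python; the numbers are the ones I stipulated above, and the helper names are my own.)

```python
def update_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes' rule: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds of X:1 into a probability."""
    return odds / (1 + odds)

prior_odds = 1000.0   # 1,000:1 that the asymmetry thesis is false
lr = 0.75             # LR I assigned to L&D's second-study finding
posterior_odds = update_odds(prior_odds, lr)
print(posterior_odds)                          # 750.0 -- revised odds of 750:1
print(round(odds_to_prob(posterior_odds), 4))  # 0.9987
```

Nothing deep there; the point is only that, once the LR is picked, the rest is mechanical. All the action is in how the LR gets picked.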
Worse still, I can’t.
Obviously, to avoid confirmation bias, I must make an assessment of the LR based on grounds unrelated to my priors. That’s clear enough—although it’s surprising how often people get this wrong when they characterize instances of motivated reasoning as “perfectly consistent with Bayesianism” since a person who attaches a low prior to some hypothesis can “rationally” discount evidence to the contrary. Folks: that way of thinking is confirmation bias--of the conscious variety.
The problem is that nothing in Bayes tells me how to determine the likelihood ratio to attach to the new evidence. I have to “feed” Bayes some independent assessment of how much more consistent the new evidence is with one hypothesis than another. ("How much more consistent,” formally speaking, is “how many times more likely." In assigning an LR of 0.75 to L&D, I’m saying that it is 1.33 x more consistent with “asymmetry” than “symmetry”; and of course, I’m just picking such a number arbitrarily—I’m using Bayes heuristically here and picking numbers that help to convey my attitude about the weight of the evidence in question).
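(To make that reciprocal relationship explicit -- my gloss, not anything in L&D: because the LR here is stated for "asymmetry thesis is false," an LR of 0.75 is just another way of saying the evidence is 1/0.75 ≈ 1.33 times more likely under "asymmetry.")

```python
lr_false = 0.75              # evidence is 0.75x as likely under "no asymmetry"
lr_asymmetry = 1 / lr_false  # equivalently, ~1.33x as likely under "asymmetry"
print(round(lr_asymmetry, 2))  # 1.33
```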
So even if I think I am using independent criteria to assess the new information, how do I know that I’m not unconsciously selecting a likelihood ratio that reflects my priors (the sort of confirmation bias that psychology usually worries about)? The question would be even more pointed in this instance if I had assigned L&D a likelihood ratio of 1.0—equally consistent with asymmetry and symmetry—because then I wouldn’t have had to revise my prior estimation in the direction of crediting asymmetry a tad more. But maybe I’m still assigning an LR to the study (only that one small aspect of it, btw) that is not as substantially below 1.0 as I should because it would just be too devastating a blow to my self-esteem to give up the view that the asymmetry thesis is false.
Nick proposes that I go out and find someone who is utterly innocent of the entire "asymmetry" issue and ask her to think about all this and get back to me with her own LR so I can compare. Sure, that’s a nice idea in theory. But where is the person willing to do this? And if she doesn’t have any knowledge of this entire issue, why should I think she knows enough to make a reliable estimate of the LR?
To try to protect myself from confirmation bias—and I really really should try if I care about forming beliefs that fit the best available evidence—I follow a different procedure but one that has the same spirit as evil Nick’s.
I spell out my reasoning in some public place & try to entice other thoughtful and reflective people to tell me what they think. If they tell me they think my LR has been contaminated in that way, or simply respond in a way that suggests as much, then I have reason to worry—not only that I’m wrong but that I may be biased.
Obviously this strategy depends (among other things) on my being able to recognize thoughtful and reflective people being thoughtful and reflective even when they disagree with me. I think I can. Indeed, I make a point of trying to find thoughtful and reflective people with different priors all the time-- to be sure their judgment is not being influenced by confirmation bias when they assure me that my LR is “just right.”
Moreover, if I get people with a good enough mix of priors to weigh in, I can "simulate" the ideally "ignorant observer" that Nick conjures (that ignorant observer looks a lot like Maxwell's Demon, to me; the idea of doing Bayesian reasoning w/o priors would probably be a feat akin to violating the 2nd Law of Thermodynamics).
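(One crude way to picture that "simulation" -- and I stress this is my own sketch, since neither Nick nor I committed to any particular pooling rule, and the sample LRs below are invented: average the log-LRs reported by people with different priors, i.e., take their geometric mean.)

```python
import math

def pooled_lr(reported_lrs):
    """Geometric mean of individually reported likelihood ratios --
    one simple way to aggregate judgments from people whose priors differ."""
    log_mean = sum(math.log(lr) for lr in reported_lrs) / len(reported_lrs)
    return math.exp(log_mean)

# hypothetical LRs from three readers with very different priors
reports = [0.5, 0.75, 1.0]
print(round(pooled_lr(reports), 3))  # 0.721
```

If the reports cluster well below the LR I picked, that's a signal my prior may have leaked into my assessment; if they straddle it, less so.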
Nick the evil genius—and others who weighed in on the post to say I was wrong (not about this point but about another: whether L&D’s findings were at odds with Haidt & Graham’s account of the dispositions that motivate “liberals” and “conservatives”; I have relented and repented on that)—are helping me out in this respect!
But Nick points out that I didn’t say anything interesting about why I assigned such a modest LR to L&D on this particular point. That itself, I think, made him anxious enough to tell me that he was concerned that I might be suffering from confirmation bias. That makes me anxious.
So, thank you, evil Nick! I will say more. Not because I really feel impelled to tussle about how much weight to assign L&D on the asymmetry point; I think, and suspect they would agree, that it would be nice simply to have more evidence that speaks more directly to the point. But now that Nick is helping me out, I do want to say enough so that he (and any other friendly person out there) can tell me if they think that my prior has snuck through and inserted itself into my LR.
In the study in question, L&D report that subjects' “deontological” positions—that is, the positions they held on nonconsequentialist moral grounds—tended to correlate with their view of the consequences of various disputed policies (viz., “forceful interrogation,” “condom promotion” to limit STDs, “capital punishment,” and “stem cell research”).
They also found that this correlation—this tendency to conclude that what one values intrinsically just happens to coincide with the course of action that will produce the most desirable state of affairs—increases as one becomes more “conservative” (although they also found that the correlation was still significant even for self-described “liberals”). In other words, on the policies in question, liberals were more likely to hold positions that they were willing to concede might not have desirable consequences.
Well, that’s evidence, I agree, that is more consistent with the asymmetry thesis—that conservatives are more prone to motivated reasoning than liberals are. But here's why I say it's not super strong evidence of that.
Imagine you and I are talking, Nick, and I say, "I think it is right to execute murderers, and in addition the death penalty deters." You say, "You know, I agree that the death penalty deters, but to me it is intrinsically wrong to execute people, so I’m against it."
I then say, "For crying out loud--let's talk about something else. I think torture can be useful in extracting information, & although it is not a good thing generally, it is morally permissible in extreme situations when there is reason to think it will save many lives. Agree?" You reply, "Nope. I do indeed accept that torture might be effective in extracting information but it's always wrong, no matter what, even in a case in which it would save an entire city or even a civilization from annihilation."
We go on like this through every single issue studied in the L&D study.
Now, if at that point, Nick, you say to me, "You know, you are a conservative & I’m a liberal, and based on our conversation, I'd have to say that conservatives are more prone than liberals to fit the facts to their ideology," I think I’m going to be a bit puzzled (and not just b/c of the small N).
"Didn’t you just agree with me on the facts of every policy we just discussed?" I ask. "I see we have different values; but given our agreement about the facts, what evidence is there even to suspect that my view of them is based on anything different from what your view is based on -- presumably the most defensible assessment of the evidence?"
But suppose you say to me instead, “Say, don't you find it puzzling that you never experience any sort of moral conflict -- that what's intrinsically 'good' or 'permissible' for you, ideologically speaking, always produces good consequences? Do you think it's possible that you might be fitting your empirical judgments to your values?" Then I think I might say, "well, that's possible, I suppose. Is there an experiment we can do to test this?"
I was thinking of experiments like that when I said, in my post, that the balance of the evidence is more in keeping w/ symmetry than asymmetry. Those experiments show that people who think the death penalty is intrinsically wrong tend to reject evidence that it deters -- just as people who think it's "right" tend to find evidence that it doesn't deter unpersuasive. There are experiments, too, like the ones we've done ("Cultural Cognition of Scientific Consensus"; "They Saw a Protest"), in which we manipulate the valence of one and the same piece of evidence & find that people of opposing ideologies both opportunistically adjust the weight they assign that evidence. There are also many experiments connecting motivated reasoning to identity-protective cognition of all sorts (e.g., "They Saw a Game") -- and if identity-protective cognition is the source of ideologically motivated reasoning as well, it would be odd to find asymmetry.
So I think the L&D study-- an excellent study -- is relevant evidence & more consistent with asymmetry than symmetry. But it's not super strong evidence in that respect—and not strong enough to warrant “changing one’s mind” if one believes that the weight of the evidence otherwise is strongly in support of symmetry rather than asymmetry in motivated reasoning.
So tell me, Dr. Nick—is my LR infected?