
Wednesday, August 29, 2012

Doc., please level with me: is my likelihood ratio infected by my priors?!

In a previous post, I acknowledged that a very excellent study by Liu & Ditto had some findings in it that were supportive of the “asymmetry thesis”—the idea that motivated reasoning and like processes more heavily skew the factual judgments of “conservatives” than “liberals.” Still, I said that “there's just [so] much more valid & compelling evidence in support of the 'symmetry' thesis—that ideologically motivated reasoning is uniform ... across ideologies—” that I saw no reason to “substantially revise my view of the likelihood” that the asymmetry position is actually correct.

An evil genius named Nick asks:

So what (~) likelihood ratio would you ascribe to this study for the hypothesis that the asymmetry thesis does not exist? And how can we be sure that you aren't using your prior to influence that assessment? ….

You acknowledge Liu & Ditto’s findings do support the asymmetry thesis, yet you state, without much explanation, that you “don't view the Liu and Ditto finding of "asymmetry" as a reason to substantially revise my view of the likelihood that that position is correct.”

… One way to think about it is that your LR for the Liu & Ditto study as it relates to the asymmetry hypothesis should be ~ equal to the LR from a person who is completely ignorant (in an E.T. Jaynes sense) about the Cultural Cognition findings that bear on the hypothesis. It is, of course, silly to think this way, and certainly no reader of this blog would be in this position, but such ignorance would provide an ‘unbiased’ estimate of the LR associated with the study. [Note that this is amenable to empirical testing.]

You may simply have been stating that your prior on the asymmetry hypothesis is so low that the LR for this study does not change your posterior very much. That is perfectly coherent, but I would still be interested in what’s happening to your LR (even if its effect on the posterior is trivial).

Well, of course, readers can’t be sure that my priors (1,000:1 that the “asymmetry thesis” is false) didn’t contaminate the likelihood ratio I assigned to L&D’s finding of asymmetry in their 2nd study (0.75; resulting in revised odds that "asymmetry thesis is false" = 750:1).

Worse still, I can’t.

Obviously, to avoid confirmation bias, I must make an assessment of the LR based on grounds unrelated to my priors. That’s clear enough—although it’s surprising how often people get this wrong when they characterize instances of motivated reasoning as “perfectly consistent with Bayesianism” since a person who attaches a low prior to some hypothesis can “rationally” discount evidence to the contrary. Folks: that way of thinking is confirmation bias--of the conscious variety.

The problem is that nothing in Bayes tells me how to determine the likelihood ratio to attach to the new evidence. I have to “feed” Bayes some independent assessment of how much more consistent the new evidence is with one hypothesis than another. ("How much more consistent,” formally speaking, is “how many times more likely." In assigning an LR of 0.75 to L&D, I’m saying that it is 1.33 x more consistent with “asymmetry” than “symmetry”; and of course, I’m just picking such a number arbitrarily—I’m using Bayes heuristically here and picking numbers that help to convey my attitude about the weight of the evidence in question).
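For anyone who wants that arithmetic spelled out, here is a minimal sketch (in Python) of the odds-form update I have in mind. The numbers are just my heuristic figures from above, nothing estimated from data.

```python
# A minimal sketch of Bayes' rule in odds form, using the post's
# own heuristic numbers (1,000:1 prior odds, LR = 0.75).

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds x likelihood ratio (Bayes in odds form)."""
    return prior_odds * likelihood_ratio

prior_odds = 1000.0  # 1,000:1 odds that the asymmetry thesis is false
lr = 0.75            # P(L&D finding | asymmetry false) / P(L&D finding | asymmetry true)

posterior_odds = update_odds(prior_odds, lr)
print(posterior_odds)        # 750.0 -> revised odds of 750:1

# Read the other way around, the same LR says the finding is
# 1 / 0.75 = 1.33x more consistent with "asymmetry" than "symmetry".
print(round(1 / lr, 2))      # 1.33
```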

So even if I think I am using independent criteria to assess the new information, how do I know that I’m not unconsciously selecting a likelihood ratio that reflects my priors (the sort of confirmation bias that psychology usually worries about)? The question would be even more pointed in this instance if I had assigned L&D a likelihood ratio of 1.0—equally consistent with asymmetry and symmetry—because then I wouldn’t have had to revise my prior estimation in the direction of crediting asymmetry a tad more. But maybe I’m still assigning an LR to the study (only that one small aspect of it, btw) that is not as substantially below 1.0 as I should because it would just be too devastating a blow to my self-esteem to give up the view that the asymmetry thesis is false.

Nick proposes that I go out and find someone who is utterly innocent of the entire "asymmetry" issue and ask her to think about all this and get back to me with her own LR so I can compare. Sure, that’s a nice idea in theory. But where is the person willing to do this? And if she doesn’t have any knowledge of this entire issue, why should I think she knows enough to make a reliable estimate of the LR?

To try to protect myself from confirmation bias—and I really really should try if I care about forming beliefs that fit the best available evidence—I follow a different procedure but one that has the same spirit as evil Nick’s.

I spell out my reasoning in some public place & try to entice other thoughtful and reflective people to tell me what they think. If they tell me they think my LR has been contaminated in that way, or simply respond in a way that suggests as much, then I have reason to worry—not only that I’m wrong but that I may be biased.

Obviously this strategy depends (among other things) on my being able to recognize thoughtful and reflective people being thoughtful and reflective even when they disagree with me. I think I can.  Indeed, I make a point of trying to find thoughtful and reflective people with different priors all the time-- to be sure their judgment is not being influenced by confirmation bias when they assure me that my LR is “just right.”

Moreover, if I get people with a good enough mix of priors to weigh in, I can "simulate" the ideally "ignorant observer" that Nick conjures (that ignorant observer looks a lot like Maxwell's Demon, to me; the idea of doing Bayesian reasoning w/o priors would probably be a feat akin to violating the 2nd Law of Thermodynamics).
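One way to picture that "simulation" concretely: if each observer's reported LR gets pushed multiplicatively in the direction of her prior, then pooling LRs from a balanced mix of priors (say, by averaging them on the log scale) should roughly cancel the bias out. Here is a minimal sketch along those lines; every number in it is hypothetical.

```python
import math

# Entirely hypothetical LRs reported by observers with a mix of priors.
# If prior-driven bias pushes reported LRs multiplicatively in opposite
# directions, averaging on the log scale should roughly cancel it out.
reported_lrs = [0.6, 0.75, 0.9, 1.1, 0.7]  # made-up values, for illustration

# Geometric mean = exp(mean of logs): a natural way to pool ratios.
pooled_lr = math.exp(sum(math.log(lr) for lr in reported_lrs) / len(reported_lrs))
print(round(pooled_lr, 2))  # the "simulated ignorant observer's" LR
```

The log-scale average is just one modeling choice; any pooling rule that treats over- and under-estimates of the ratio symmetrically would serve the same illustrative purpose.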

Nick the evil genius—and others who weighed in on the post to say I was wrong (not about this point but about another: whether L&D’s findings were at odds with Haidt & Graham’s account of the dispositions that motivate “liberals” and “conservatives”; I have relented and repented on that)—are helping me out in this respect!

But Nick points out that I didn’t say anything interesting about why I assigned such a modest LR to L&D on this particular point.  That itself, I think, made him anxious enough to tell me that he was concerned that I might be suffering from confirmation bias. That makes me anxious.

So, thank you, evil Nick! I will say more. Not because I really feel impelled to tussle about how much weight to assign L&D on the asymmetry point; I think, and suspect they would agree, that it would be nice simply to have more evidence that speaks more directly to the point. But now that Nick is helping me out, I do want to say enough so that he (and any other friendly person out there) can tell me if they think that my prior has snuck through and inserted itself into my LR.

In the study in question, L&D report that subjects' “deontological” positions—that is, the positions they held on nonconsequentialist moral grounds—tended to correlate with their view of the consequences of various disputed policies (viz., “forceful interrogation,” “condom promotion” to limit STDs, “capital punishment,” and “stem cell research”).

They also found that this correlation—this tendency to conclude that what one values intrinsically just happens to coincide with the course of action that will produce the best state of affairs—increases as one becomes more “conservative” (although they also found that the correlation was still significant even for self-described “liberals”). In other words, on the policies in question, liberals were more likely to hold positions that they were willing to concede might not have desirable consequences.

Well, that’s evidence, I agree, that is more consistent with the asymmetry thesis—that conservatives are more prone than liberals to motivated reasoning—than with the symmetry thesis. But here's why I say it's not super strong evidence of that.

Imagine you and I are talking, Nick, and I say, "I think it is right to execute murderers, and in addition the death penalty deters." You say, "You know, I agree that the death penalty deters, but to me it is intrinsically wrong to execute people, so I’m against it."

I then say, "For crying out loud--let's talk about something else. I think torture can be useful in extracting information, & although it is not a good thing generally, it is morally permissible in extreme situations when there is reason to think it will save many lives. Agree?"  You reply, "Nope. I do indeed accept that torture might be effective in extracting information but it's always wrong, no matter what, even in a case in which it would save an entire city or even a civilization from annihilation."  

We go on like this through every single issue studied in the L&D study.

Now, if at that point, Nick, you say to me, "You know, you are a conservative & I’m a liberal, and based on our conversation, I'd have to say that conservatives are more prone than liberals to fit the facts to their ideology," I think I’m going to be a bit puzzled (and not just b/c of the small N).

"Didn’t you just agree with me on the facts of every policy we just discussed?" I ask. "I see we have different values; but given our agreement about the facts, what evidence is there even to suspect that my view of them  is based on anything different from what your view is based on -- presumably the most defensible assessment of the evidence?"

But suppose you say to me instead, “Say, don't you find it puzzling that you never experience any sort of moral conflict -- that what's intrinsically 'good' or 'permissible' for you, ideologically speaking, always produces good consequences? Do you think it's possible that you might be fitting your empirical judgments to your values?"  Then I think I might say, "well, that's possible, I suppose. Is there an experiment we can do to test this?"

I was thinking of experiments that do exactly that when I said, in my post, that the balance of the evidence is more in keeping w/ symmetry than asymmetry. Those experiments show that people who think the death penalty is intrinsically wrong tend to reject evidence that it deters -- just as people who think it's "right" tend to find evidence that it doesn't deter unpersuasive. There are experiments, too, like the ones we've done ("Cultural Cognition of Scientific Consensus"; "They Saw a Protest"), in which we manipulate the valence of one and the same piece of evidence & find that people of opposing ideologies both opportunistically adjust the weight they assign that evidence. There are also many experiments connecting motivated reasoning to identity-protective cognition of all sorts (e.g., "They Saw a Game") -- and if identity-protective cognition is the source of ideologically motivated reasoning, too, it would be odd to find asymmetry.

So I think the L&D study-- an excellent study -- is relevant evidence & more consistent with asymmetry than symmetry. But it's not super strong evidence in that respect—and not strong enough to warrant “changing one’s mind” if one believes that the weight of the evidence otherwise is strongly in support of symmetry rather than asymmetry in motivated reasoning.

So tell me, Dr. Nick—is my LR infected?



Reader Comments (2)

I feel honored to have a dedicated post directed at my comments.

I see the analogy you are trying to draw with your hypothetical, but I think it confuses aspects of our exchange. You are right to note that people with identical belief functions might still make different (policy) decisions on the basis of those identical belief functions. Bayes speaks to belief functions. Decision theory – an extension of Bayes – speaks to making decisions (for which values are relevant; as you note, values should not be relevant to the degree of updating a belief function undergoes – that might be confirmation bias). My discussion was not concerned with the decisions that result from belief functions. I was solely concerned with the functions themselves.

To answer your question, “is my LR infected?” I would say, based on your hypothetical, “yes, it is.” But I’d add that mine is as well. In fact, that is roughly my point: to the extent that anyone’s prior deviates from equipoise, there is a strong possibility that it will influence their assessment of the LR in one direction or another. Of course, this presupposes that there is an objective value of the LR. The question, then, is what should the LR be, or, more exactly, what is the objective value of the LR? This question (on ‘measuring confirmation’) has vexed philosophers of science for years, and I won’t repeat the arguments and issues here.

The solution I proposed is that the objective value of the LR can be inferred from a person whose prior is equipoise with respect to the hypothesis (not total ignorance; otherwise, you’re right, why would we care about this judgment? They’d be a neonate!). This solution is more plausible and less onerous than you might have assumed. For example, Dan Simon’s work on coherence-based reasoning did exactly this: (essentially) participants evaluated a piece of evidence in an isolated vignette, then later read about a legal case in which an equivalent piece of evidence was presented. Not surprisingly, the evidence that was ambivalent in isolation became powerfully diagnostic in the context of a legal proceeding, and in a predictable way. The isolation condition can be understood to engender an equipoise prior. Carlson & Russo’s work on the Stepwise Evolution of Preferences is similar. And I think this is essentially what you did in They Saw a Protest (a personal favorite, I might add): by changing the context, you’ve essentially changed their priors, which they use to evaluate the evidence contained in the video.

My bottom line: I could not supply you with an objective value for the LR of the L&D study as it bears on the asymmetry hypothesis. I’m a human (not an evil spirit!) and my prior about the asymmetry hypothesis is not equipoise. Wouldn’t it be naive realism for me to assume that I’m not subject to the confirmation bias?

August 29, 2012 | Unregistered Commenter Nick

Very well, Dr. Nick, but our hypothetical exchange (the one in the post) was not meant to make any point about Bayesianism. It was meant to illustrate why I assign a "low" LR to L&D on asymmetry!

Again, what L&D show in study 2 is that liberals are more likely to accept that their position has "negative consequences" than conservatives are on a set of policies that also divide them on non-consequentialist grounds.

Well, the result "liberals are more likely to believe 'conserv facts' [e.g., death penalty deters; condom promotion will make teens have sex] than conservs are to believe 'liberal facts' [death penalty doesn't deter; condom availability won't increase incidence of teen sex]" is not very strong evidence that conservatives arrived at *their* facts [viz., death penalty deters; condom promotion will make teens have sex] on grounds any different from the ones relied on by liberals who reached those same factual conclusions.

I am just pointing out (in that part of the post) how I got to 0.75 w/o relying on my knowledge of cultural cognition experiments (at least not consciously; nothing more to say about the unconscious part). No one needs to have heard anything about those experiments to see why I see L&D's study 2 as not suggesting much one way or the other on the symmetry/asymmetry of motivated reasoning.

The experiments I advert to -- some by CCP & some by others -- were the basis only of my 1,000:1 prior against asymmetry, but didn't figure at all in my discussion of the probative weight of L&D. If you start w/ "equal priors" (1:1), and then come upon L&D, you should then put the odds of asymmetry at about 4:3. But do read those experiments I mentioned!
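To spell that last step out (a quick sketch, reusing the same heuristic LR of 0.75 from the post):

```python
# Starting from equipoise (1:1) instead of 1,000:1, with the same heuristic LR.
posterior_odds_asym_false = 1.0 * 0.75           # 0.75:1 that asymmetry is false
print(round(1 / posterior_odds_asym_false, 2))   # ~1.33:1, i.e. roughly 4:3 odds of asymmetry
```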

August 29, 2012 | Unregistered Commenter dmk38
