Wednesday, Sep 7, 2016

Law & Cognition 2016, Session 2 recap: Models--a start

1. Bayesian information processing (BIP). In BIP, the factfinder is treated as determining facts in a manner consistent with Bayes’s theorem. Bayes’s theorem specifies the logical process for combining or aggregating probabilistic assessments of some hypothesis. One rendering of the theorem is prior odds x likelihood ratio = posterior odds. “Prior odds” refer to one’s initial or current assessment, and “posterior odds” to one’s revised assessment, of the likelihood of the hypothesis. The “likelihood ratio” measures how much more consistent a piece of information or evidence is with the hypothesis than with the negation of the hypothesis. By way of illustration:

Prior odds. My prior assessment that Lance Headstrong used performance-enhancing drugs is 0.01 or 1 chance in 100 or 1:99.

Likelihood ratio. I learn that Headstrong has tested positive for performance-enhancing drug use. The test is 99% accurate. Because 99 of 100 drug users, but only 1 of 100 nonusers, would test positive, the positive drug test is 99 times more consistent with the hypothesis that Headstrong used drugs than with the contrary hypothesis (i.e., that he did not).

Posterior odds. Using Bayes’s theorem, I now estimate that the likelihood Headstrong used drugs is 1:99 x 99 = 99:99 = 1:1 = 50%. Why? Imagine we took 10,000 people, 100, or 1%, of whom we knew used performance-enhancing drugs and 9,900 of whom, or 99%, we knew had not. If we tested all of them, we’d expect 99 of the users to test positive (0.99 x 100), and 99 nonusers (0.01 x 9,900) to test positive as well. If all we knew was that a particular individual in the 10,000 tested positive, we would know that he or she was either one of the 99 “true positives” or one of the 99 “false positives.” Accordingly, we’d view the probability that he or she was a true user as being 50%.
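To make the arithmetic concrete, here is a minimal sketch of the odds-form update in Python (the function names are mine and are used only for illustration), with exact fractions so the Headstrong numbers come out cleanly:

```python
from fractions import Fraction

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes's theorem in odds form: prior odds x likelihood ratio = posterior odds."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds in favor of a hypothesis to a probability."""
    return odds / (1 + odds)

prior = Fraction(1, 99)   # 1:99 prior odds that Headstrong used drugs
lr = Fraction(99, 1)      # a positive test is 99x more likely for a user than a nonuser

post = posterior_odds(prior, lr)
print(post)                       # 1   -> posterior odds of 1:1
print(odds_to_probability(post))  # 1/2 -> 50% probability
```

Run with the example's numbers, the sketch reproduces the worked result: posterior odds of 1:1, i.e., a 50% probability that Headstrong used drugs.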

In practical terms, you can think of the likelihood ratio as the weight or probative force of a piece of evidence.  Evidence that supports a hypothesis will have a likelihood ratio greater than one; evidence that contradicts a hypothesis will have a likelihood ratio less than one (but still greater than zero). When the likelihood ratio associated with a piece of information equals one, that information is just as consistent with the hypothesis as with its negation; in other words, it is irrelevant.
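A tiny numerical illustration (the values are made up purely for demonstration) of how a likelihood ratio above, below, or equal to one moves the posterior:

```python
prior_odds = 1.0  # start at even odds (1:1, i.e., a 50% estimate)

for lr in (99.0, 1 / 99, 1.0):
    print(f"LR {lr:8.4f}: posterior odds {prior_odds * lr:.4f}")

# LR > 1 (here 99) moves the odds toward the hypothesis;
# LR < 1 (here 1/99) moves them away from it;
# LR = 1 leaves them unchanged, i.e., the evidence is irrelevant.
```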

Fig. 1. BIP. Under BIP, the decisionmaker combines his or her existing estimation with new information in the manner contemplated by Bayes’s theorem—that is, by multiplying the former (expressed in odds) by the likelihood ratio associated with the latter and treating the product as his or her new estimate. Note that the value of the prior odds for the hypothesis and the likelihood ratio for the new evidence are presupposed by Bayes’s theorem, which merely instructs the decisionmaker how to combine the two.

2. Confirmation bias (CB). CB refers to a tendency to selectively credit or dismiss new evidence in a manner supportive of one’s existing beliefs. Accordingly, when displaying CB, a person who considers the probative value of new evidence is precommitted to assigning that evidence a likelihood ratio that “fits” his or her prior odds—that is, a likelihood ratio greater than one if he or she currently thinks the hypothesis is true, or a likelihood ratio of one or less if he or she currently thinks the hypothesis is false. So imagine I believe the odds are 100:1 that Headstrong used steroids. You tell me that Headstrong was drug tested and ask me if I’d like to know the result, and I say yes. If you tell me that he tested positive, I will assign a likelihood ratio of 99 to the test (because it has an accuracy rate of 0.99), and conclude the odds are therefore now 9900:1 that Headstrong used drugs. However, if you tell me that Headstrong tested negative, I will conclude that you are a very unreliable source of information, assign your report of the test result a likelihood ratio of 1, and thereby persist in my belief that the odds Headstrong is a user are 100:1. Note that CB is not contrary to BIP, which has nothing to say about what likelihood ratio should be associated with a piece of information. But unless a person has a means of determining the likelihood ratio for new evidence that is independent of his or her priors, that person will never correct a mistaken estimation—even if he or she is supplied with copious amounts of evidence and religiously adheres to BIP in assessing it.
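By way of illustration only, one could caricature CB in code along these lines; the rule for making the likelihood ratio “fit” the prior is a deliberately crude simplification of the idea, not a model drawn from the literature:

```python
def biased_likelihood_ratio(prior_odds, reported_lr):
    """Credit evidence that points the same way as the prior; otherwise
    treat it as uninformative (LR = 1)."""
    favors_hypothesis = reported_lr > 1
    prior_favors_hypothesis = prior_odds > 1
    return reported_lr if favors_hypothesis == prior_favors_hypothesis else 1.0

prior = 100.0  # 100:1 that Headstrong used steroids

# A positive test (LR = 99) agrees with the prior, so it is credited in full:
print(prior * biased_likelihood_ratio(prior, 99))      # 9900.0 -> 9900:1

# A negative test (LR = 1/99) contradicts the prior, so it is neutralized:
print(prior * biased_likelihood_ratio(prior, 1 / 99))  # 100.0  -> belief unchanged
```

Each step is still a Bayesian multiplication; the bias enters entirely through how the likelihood ratio is assigned.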

 

Fig. 2. CB. “Confirmation bias” can be thought of as a reasoning process in which the decisionmaker determines the likelihood ratio for new evidence in a manner that reinforces (or at least does not diminish) his or her prior odds. Such a person can still be seen to be engaged in Bayesian updating, but since new information is always given an effect consistent with what he or she already believes, the decisionmaker will not correct a mistaken estimate, no matter how much evidence the person is supplied.

3. Story telling model (STM) & motivated reasoning (MR). Using the BIP framework, one can understand STM and MR as supplying a person’s prior odds (another thing simply assumed rather than calculated by BIP) and as determining the likelihood ratio to be assigned to evidence.  For example, if I am induced to select the “opportunistic, amoral cheater who will stop at nothing” story template, I might start with a very strong suspicion—prior odds of 99:1—that Headstrong used performance-enhancing drugs and thereafter construe pieces of evidence in a manner that supports that conclusion (that is, as having a likelihood ratio greater than one). If Headstrong is a member of a team that rivals my own favorite, identity-protective cognition might exert the same influence on my reasoning. Alternatively, if Headstrong is a member of my favorite team, or if I am induced to select the “virtuous hero envied by less talented and morally vicious competitors” template, then I might start with a strong conviction that Headstrong is not a drug user (prior odds of 1:99), and construe any evidence to the contrary as unentitled to weight (likelihood ratio of 1 or less).

It is possible, too, that STM and MR work together.  For example, identity-protective cognition might induce me to select a particular story template, which then determines my priors and shapes my assignment of likelihood ratios. If STM and MR, individually or in conjunction, operate in this fashion, then a person under the influence of either or both will reason in exactly the same manner as someone displaying CB, for in that case his or her priors and his or her likelihood-ratio assessments will arise from a common cause (cf. Kahan, Cultural Cognition of Consent).
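Purely as a stylized sketch, here is one way the joint operation of STM and MR might be rendered in code; the templates, numbers, and selection rule are invented for illustration and are not drawn from the post or any particular study:

```python
TEMPLATES = {
    "amoral cheater": {"prior_odds": 99.0, "credit_incriminating": True},
    "virtuous hero":  {"prior_odds": 1 / 99, "credit_incriminating": False},
}

def select_template(headstrong_on_my_team):
    """Identity-protective cognition: pick the story that protects my group ties."""
    return "virtuous hero" if headstrong_on_my_team else "amoral cheater"

def assign_lr(template, evidence_lr):
    """Credit evidence that fits the template; deny weight (LR = 1) to evidence that doesn't."""
    fits = (evidence_lr > 1) == TEMPLATES[template]["credit_incriminating"]
    return evidence_lr if fits else 1.0

def posterior(headstrong_on_my_team, evidence_lrs):
    template = select_template(headstrong_on_my_team)
    odds = TEMPLATES[template]["prior_odds"]
    for lr in evidence_lrs:
        odds *= assign_lr(template, lr)
    return odds

evidence = [99, 1 / 99]  # a positive test report and a negative test report
print(posterior(headstrong_on_my_team=False, evidence_lrs=evidence))  # 9801.0: 99:1 prior x credited positive test
print(posterior(headstrong_on_my_team=True,  evidence_lrs=evidence))  # ~0.0001: 1:99 prior x credited negative test
```

The same two pieces of evidence drive the two decisionmakers further apart, because the template (and the identity that selects it) supplies both the priors and the likelihood ratios, which is exactly the common-cause structure described above.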

Fig. 3. STM & MR. STM and MR can be understood as determinants of the decisionmakers’ prior odds and of the likelihood ratio he or she assigns to new evidence. They might operate independently (left) or in conjunction with one another (right; other complimentary relations are possible, too). In this model, the decisionmaker will appear to display confirmation bias, since the prior odds and likelihood ratio have a common cause. 

4.  What else? As we encounter additional mechanisms of cognition, consider how they relate to these “models.”
