Wednesday, November 25, 2015

"Inherent internal contradictions" don't cause bad institutions to collapse; they just suck ... "Rules of evidence are impossible," part 3 (another report for Law & Cognition seminar)

Nope. Can't be done. Impossible.

Time for part 3 of this series: Are Rules of Evidence Impossible?

The answer is yes, as I said at the very beginning.

But I didn’t say why & still haven’t.

Instead, I spent the first two parts laying the groundwork necessary for explanation.  Maybe you can build the argument on top of it yourself at this point?! If so, skip ahead to “. . . guess what?”—or even skip the rest of this post altogether & apply your reason to something likely to teach you something new!

But in the event you can’t guess the ending, or simply need your “memory refreshed” (see Fed. R. Evid. 612), a recap:

Where were we? In the first part, I described a conception of the practice of using “rules of evidence”—the Bayesian Cognitive Correction Model (BCCM). 

BCCM conceives of rules of evidence as instruments for “cognitively fine-tuning” adjudication. By selectively admitting and excluding items of proof, courts can use the rules to neutralize the accuracy-diminishing impact of one or another form of biased information processing--from identity-protective reasoning to the availability effect, from hindsight bias to baserate neglect, etc.  The threat these dynamics pose to accurate factfinding is their tendency to induce the factfinder to systematically misestimate the weight, or in Bayesian terms the “likelihood ratio” (LR), to be assigned items of proof (Kahan 2015).
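In odds form, Bayes' rule makes the LR's role concrete: each item of proof multiplies the factfinder's current odds by that item's likelihood ratio, so a misestimated LR feeds directly into the posterior. A minimal sketch (my own illustration, not code from the post):

```python
def bayes_update(prior_odds, lr):
    """Odds-form Bayes' rule: an item of proof with likelihood ratio `lr`
    multiplies the factfinder's prior odds by that factor."""
    return prior_odds * lr

# starting from even (1:1) odds, an item with LR = 4 yields 4:1 odds;
# a second item with LR = 0.25 brings the rational factfinder back to 1:1
odds = bayes_update(1.0, 4.0)
odds = bayes_update(odds, 0.25)
```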

In part 2, I discussed a cognitive dynamic that has that sort of consequence: “coherence based reasoning” (CBR).

Monte Carlo simulation of CBR! Check it out!

Under CBR (Simon 2004; Simon, Pham, Le & Holyoak 2001; Carlson & Russo 2001), the factfinder’s motivation to find “coherence” in the trial proof creates a looping feedback effect.

Once the factfinder forms the perception that the accumulated weight of the evidence supports one side, he begins to inflate or discount the weight of successive items of proof as necessary to conform them to that position.  He also turns around and revisits already-considered items of proof and reweights them to make sure they fit that position, too. 

His reward is an exaggerated degree of confidence in the correctness of that outcome—and thus the peace of mind that comes from never having to worry that maybe, just maybe, he got the wrong answer.
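One simple way to model this loop is to let each item's perceived LR be dragged toward whichever side the running odds already favor. This is a sketch of my own, not the post's actual simulation code; the multiplicative distortion and the `bias` parameter are assumptions:

```python
def cbr_sequential(true_lrs, bias=0.5):
    """Sequentially aggregate likelihood ratios, pulling each item's
    perceived LR (pLR) toward the side the running odds already favor."""
    odds, perceived = 1.0, []
    for lr in true_lrs:
        plr = lr * odds ** bias      # distortion grows with current confidence
        perceived.append(plr)
        odds *= plr
    return odds, perceived

# a "dead heat": the true LRs multiply to 1, so rational odds stay 1:1 ...
proof = [4.0, 0.25, 4.0, 0.25]
biased_odds, plrs = cbr_sequential(proof)            # ... but CBR ends well above 1:1
rational_odds, _  = cbr_sequential(proof, bias=0.0)  # bias off: exactly 1:1
```

With `bias=0` the function reduces to straight Bayesian aggregation, which is what makes the distortion easy to isolate.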

The practical consequences are two.  First, by virtue of the exaggerated certainty the factfinder has in the result, he will sometimes rule in favor of a party that hasn’t carried its burden under a heightened standard of proof like, say, “beyond a reasonable doubt,” which reflects the law’s aversion to “Type 1” errors when citizens’ liberty is at stake.

Second, which position the factfinder comes to be convinced is right will be arbitrarily sensitive to the order of proof.  The same strong piece of evidence that a factfinder dismisses as inconsistent with what she is now committed to believing could have triggered a “likelihood ratio cascade” in exactly the opposite direction had that item of proof appeared sooner--in which case the confidence it instilled in its proponent's case would have infected the factfinder's evaluation of all the remaining items of proof.

If you hung around after class last time for the “extra credit”/“optional” discussion, I used a computer simulation to illustrate these chaotic effects, and to show why we should expect their accuracy-eviscerating consequences to be visited disproportionately on innocent defendants in criminal proceedings.

This is definitely the sort of insult to rational-truth-seeking that BCCM was designed to rectify!

But guess what?

It can’t! The threat CBR poses to accuracy is one the BCCM conception of “rules of evidence” can’t possibly counteract!

As I explained in part 1, BCCM consists of three basic elements:

  1. Rule 401, understood as a presumption that evidence with LR ≠ 1 is admissible (Lempert 1977);

  2. a conception of “unfair prejudice” under Rule 403 that identifies it as the tendency of a piece of relevant evidence to induce a flesh-and-blood factfinder to assign incorrect LRs to it or other items of proof (Lempert 1977); and
       
  3. a strategy for Rule 403 weighing that directs the court to exclude “relevant” evidence when the tendency it has to induce the factfinder to assign the wrong LR to that or other pieces of evidence diminishes accurate assessment of the trial proof to a greater extent than constraining the factfinder to effectively treat the evidence in question as having no weight at all, or LR = 1 (Kahan 2010).
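The weighing in element 3 can be made concrete. A sketch under assumptions of my own (accuracy loss measured as distance in log-odds from the rational posterior; all the LR numbers are invented for illustration):

```python
import math

def posterior_odds(lrs, prior=1.0):
    # rational Bayesian aggregation: prior odds times each item's LR
    odds = prior
    for lr in lrs:
        odds *= lr
    return odds

def log_odds_error(perceived_lrs, true_lrs):
    # accuracy loss: distance in log-odds between the posterior a biased
    # factfinder reaches and the one a rational factfinder would reach
    return abs(math.log(posterior_odds(perceived_lrs)) -
               math.log(posterior_odds(true_lrs)))

true_lrs = [4.0, 0.5, 2.0]
admitted = [4.0, 0.75, 3.0]   # item 1 admitted; bias inflates later pLRs
excluded = [1.0, 0.5, 2.0]    # item 1 excluded: treated as LR = 1

err_admit   = log_odds_error(admitted, true_lrs)   # prejudice cost of admitting
err_exclude = log_odds_error(excluded, true_lrs)   # probative cost of excluding
# under the BCCM strategy, the judge excludes only if err_exclude < err_admit
# (here exclusion loses more accuracy, so the evidence comes in)
```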

The problem is that CBR injects this “marginal probative value vs. marginal prejudice” apparatus with a form of self-contradiction, both logical and practical.

There isn’t normally any such contradiction. 

Imagine, e.g., that a court was worried that evidence of a product redesign intended to avoid a harmful malfunction might trigger “hindsight bias,” which consists in the tendency to inflate the LRs associated with items of proof that bear on how readily one might have predicted the need for and utility of such a design ex ante (Kamin & Rachlinski 1995).  (Such evidence is in theory—but not in practice—“categorically excluded” under Rule 407 when the correction was made after the injury to the plaintiff; but in any case, Rule 407 wouldn’t apply—only Rule 403 would—if the change in product design were made after injuries to third parties but before the plaintiff herself was injured by the original product, even though the same “hindsight bias” risk would be presented.)

“All” the judge has to do in that case is compare [1] the marginal accuracy-diminishing impact of giving no weight at all to the evidence (LR = 1) on the "facts of consequence" it should otherwise have made "more probable" (e.g., the actual existence of alternative designs and their cost-effectiveness) with [2] the inflationary effect of admitting it on the LRs assigned to the evidence bearing on every other fact of consequence (e.g., what a reasonable manufacturer would have concluded about the level of risk and feasibility of alternative designs at the time the original product was designed).

The BCCM conception of Rule 403 "marginal probative value vs. marginal prejudice" balancing!

A thoughtful person might wonder about the capacity of a judge to make that determination accurately, particularly because weighing the “marginal accuracy-diminishing impact” associated with admission and with exclusion, respectively, actually requires the judge to gauge the relative strength of all the remaining evidence in the case. See Old Chief v. United States, 519 U.S. 172, 182-85 (1997).

But making such a determination is not, in theory at least, impossible.

What is impossible is doing this same kind of analysis when the source of the “prejudice” is CBR.  When a judge uses BCCM to manage the impact of hindsight bias (or any other dynamic inimical to rational information-processing), “marginal probative value” and “marginal prejudice”—the quantities she must balance—are independent.

But when the bias the judge is trying to contain is CBR, “marginal probative value” and “marginal prejudice” are interdependent—and indeed positively correlated.

What triggers the “likelihood ratio cascade” characteristic of CBR as a cognitive bias is the correct LR the factfinder assigned to whatever item of proof induced her to form the impression that one side’s position was stronger than the other’s. Indeed, the higher (or lower) the “true” LR of that item of proof, the more confident the factfinder will be in the position that evidence supports, and hence the more biased she will thereafter be in assessing the weight due other pieces of evidence (or, equivalently, the more indifferent she'll become to the risk of erring in the direction of that position (Scurich 2012)).

To put it plainly, CBR creates a war between the two foundational “rules of evidence”: the more relevant evidence is under Rule 401 the more unfairly prejudicial it becomes for purposes of Rule 403.  To stave off the effects of CBR on accurate factfinding, the court would have to exclude from the case the evidence most integral to reaching an accurate determination of the facts.

Maybe an illustration would be useful?

This is one case plucked from the sort of simulation that I ran yesterday:

It shows how, as a result of CBR, a case that was in fact a “dead heat” can transmute into one in which the factfinder forms a supremely confident judgment that the facts supporting one side’s case are “true.”

The source of the problem, of course, is that the very “first” item of proof had LR = 25, initiating a “likelihood ratio cascade” as reflected in the discrepancy between the "true" LRs—tLRs—and "biased" perceived LRs—pLRs—for each subsequent item of proof.

A judge applying the BCCM conception of Rule 403 would thus recognize that "item of proof No. 1" is injecting a huge degree of “prejudice” into the case. She should thus exclude proof item No. 1, but only if she concludes that doing so will diminish the accuracy of the outcome less than preventing the factfinder from giving this highly probative piece of evidence any effect whatsoever.

When the judge engages in this balancing, she will in fact observe that excluding that evidence distorts the accuracy of the outcome just as much as admitting it does--but in the opposite direction. In this simulated case, assigning item No. 1 an LR = 1—the formal effect of excluding it—induces the factfinder to conclude that the odds against that party’s position being true are 5.9x10^2:1, i.e., that there is effectively a 0% chance that that party’s case is well-founded.

That’s because the very next item of proof has LR = 0.04 (the inverse of LR = 25), and thus triggers a form of “rolling confirmation bias” that undervalues every subsequent item of proof.
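The self-contradiction can be reproduced with the same kind of toy model as before (again my own sketch, with an assumed multiplicative distortion, not the post's simulation code): excluding the high-LR first item simply hands the cascade to the other side.

```python
def cbr_odds(true_lrs, bias=0.5):
    """Aggregate LRs with each item's perceived LR warped toward the
    factfinder's current leaning (illustrative model only)."""
    odds = 1.0
    for lr in true_lrs:
        odds *= lr * odds ** bias
    return odds

proof = [25.0, 0.04, 5.0, 0.2]        # true LRs multiply to 1: a dead heat
with_item_1    = cbr_odds(proof)               # cascade favors party A
without_item_1 = cbr_odds([1.0] + proof[1:])   # exclusion: item 1 as LR = 1
# now item 2 (LR = 0.04) starts a mirror-image cascade favoring party B
```

Either ruling produces a lopsided verdict in a case whose true posterior odds are 1:1, which is exactly the judge's dilemma.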

So if the judge were to exclude item No. 1 b/c of its tendency to excite CBR, she’d find that the same issue confronts her again in ruling on a motion to exclude item No. 2.

And guess what? If she assesses the impact of excluding that super-probative piece of evidence (one that favors one party’s position 25x more than the other’s), she’ll again find that the “accuracy-diminishing impact” of doing so is as high as that of not excluding it: the remaining evidence in the case is configured so that the factfinder is impelled to a super-confident conclusion in favor of the first party once more!

And so forth and so on.

As this illustration should remind you, CBR also has the effect of making outcomes arbitrarily sensitive to the order of proof. 

Imagine item 1 and item 2 had been “encountered” in the opposite “order” (whether by virtue of the point at which they were introduced at trial, the relative salience of them to the factfinder as he or she reflected on the proof as a whole, or the role that post-trial deliberations had in determining the sequence with which particular items of proof were evaluated). 

The factfinder in that case would indeed have formed just as confident a judgment--but one in support of the opposite party:
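Order sensitivity falls out of the same toy model (once more a sketch of my own, not the post's simulation code): reversing the sequence of an otherwise identical body of proof flips which side the cascade favors.

```python
def cbr_odds(true_lrs, bias=0.5):
    # each item's perceived LR is warped toward the current running odds
    odds = 1.0
    for lr in true_lrs:
        odds *= lr * odds ** bias
    return odds

proof = [25.0, 0.04, 5.0, 0.2]     # a dead heat on the true LRs
forward  = cbr_odds(proof)                   # confident verdict for party A
backward = cbr_odds(list(reversed(proof)))   # same proof, opposite verdict
```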

Again, the judge will be confronted with the question whether the very “first” item of proof—what was item No. 2 in the last version of this illustration—should be excluded under Rule 403. When she works this out, moreover, she’ll end up discovering that the consequence of excluding it is the same as was the consequence of excluding item No. 1—the item with LR = 25—in our alternative-universe version of the case: a mirror-image degree of confidence on the factfinder's part about the strength of the opposing party’s case.  And so on and so forth.

See what’s going on?

The only way for the judge to assure that this case gets decided “accurately” is to exclude every single piece of evidence from the trial, remitting the jury to its priors—1:1—which, by sheer accident, just happened to reflect the posterior odds a “rational factfinder” would have ended up with after fairly assigning each piece of evidence its “true” LR.

Not much point having a trial at all under those circumstances!

Of course, the evidence, when properly considered, might have more decisively supported one side or the other.  But what a more dynamic simulation--one that samples from all the various distributions of case strength one cares to imagine--shows us is that there’s still no guarantee the factfinder would have formed an accurate impression of the strength of the evidence in that circumstance either.

To assure an accurate result in such a case, the judge, under the BCCM conception of the rules of evidence, would still have been obliged to try to deflect the accuracy-vitiating impact of CBR away from the factfinder’s appraisal of the evidence by Rule 403 balancing.

And the pieces of evidence that the judge would be required in such a case to exclude would be the ones most entitled to be given a high degree of weight by a rational factfinder!  The impact of doing so would be to skew consideration of the remainder of the evidence without offsetting exclusions of similarly highly relevant pieces of proof. . . . 

Again, no point in even having a trial if that’s how things are going to work. The judge should just enter judgment for the party she thinks “deserves” to win.

There is of course no reason to believe a judge could “cognitively fine-tune” a case with the precision that this illustration envisions.  But all that means is that the best a real judge can ever do will always generate an outcome that we have less reason to be confident is “right” than we would have had, had the judge just decided the stupid case herself on the basis of her own best judgment of the evidence.

Of course, why should we assume the judge herself could make an accurate assessment, or reasonably accurate one, of the trial proof?  Won’t she be influenced by CBR too—in a way that distorts her capacity to do the sort of “marginal probative value vs. marginal prejudice” weighing that the BCCM conception of Rule 403 imagines?

If you go down this route, then you again ought to conclude that “rules of evidence are impossible” even without contemplating the uniquely malicious propensities of CBR.  Because if this is how you see things (Schauer 2006), there will be just as much reason to think that the judge’s performance of such balancing will be affected by all the other forms of cognitive bias that she is trying to counteract by use of BCCM’s conception of Rule 403 balancing.

I think that anxiety is in fact extravagant—indeed silly.

There is plenty of evidence that judges, by virtue of professionalization, develop habits of mind that reasonably insulate them from one or another familiar form of cognitive bias when they are making in-domain decisions—i.e., engaging in the sort of reasoning they are supposed to as judges (Kahan, Hoffman, et al. in press; Guthrie, Rachlinski & Wistrich 2007).

That’s how professional judgment works generally!

But now that I’ve reminded you of this, maybe you can see what the “solution” is to the “impossibility” of the rules of evidence?

Even a jurist with exquisite professional judgment cannot conceivably perform the kind of “cognitive fine-tuning” envisioned by the “rules of evidence”--the whole enterprise is impossible.

But what makes such fine-tuning necessary in the first place is the law’s use of non-professional decisionmakers divorced from any of the kinds of insights and tools that professional legal truthseekers would actually use.

Jurors aren’t stupid.  They are equipped with all the forms of practical judgment that they need to be successful in their everyday lives.

What's stupid is to think that making reliable assessments of fact in the artificial environment of a courtroom adversarial proceeding is one of the things everyday life equips them to do.

Indeed, it's absurd to think that that environment is conducive to the accurate determination of facts by anyone.

A procedural mechanism suited for accurately determining the sorts of facts relevant to legal determinations would have to look different from anything we see in everyday life, b/c making those sorts of determinations isn't something that everyday life requires.

No more than having to practice medicine, repair foreign automobiles, or write publicly accessible accounts of relativity is (btw, happy birthday, Die Feldgleichungen der Gravitation).

Ordinary, sensible people rely on professionals -- those who dedicate themselves to acquiring expert knowledge and corresponding forms of reasoning proficiency -- to perform specialized tasks like these.

The “rules of evidence” are impossible because the mechanism we rely on to determine the “truth” in legal proceedings—an adversary system with lay factfinders—is intrinsically flawed. 

No amount of fine-tuning by “rules of evidence” will ever make that system capable of delivering the accurate determinations of their rights and obligations that citizens of an enlightened democratic state are entitled to.

We need to get rid of the current system of adjudication and replace it with a professionalized system that avails itself of everything we know about how the world works, including how human beings reason and how they can be trained to reason when doing specialized tasks.

And we need to replace, too, the system of legal scholarship that generates the form of expertise that consists in being able to tell soothing, tranquilizing, narcotizing just-so stories about how well suited the “adversary system” would be for truth-seeking with just a little bit more "cognitive fine-tuning" to be implemented through the rules of evidence.

That element of our legal culture is as antagonistic to the goal of truth-seeking as any of the myriad defects of the adversary system itself. . . .

The end!

References

Guthrie, C., Rachlinski, J.J. & Wistrich, A.J. Blinking on the bench: How judges decide cases. Cornell Law Rev 93, 1-43 (2007).

Kahan, D.M. The Economics—Conventional, Behavioral, and Political—of "Subsequent Remedial Measures" Evidence. Columbia Law Rev 110, 1616-1653 (2010).

Kahan, D.M., Hoffman, D.A., Evans, D., Devins, N., Lucci, E.A. & Cheng, K. 'Ideology' or 'Situation Sense'? An Experimental Investigation of Motivated Reasoning and Professional Judgment. U. Pa. L. Rev. 164 (in press).

Kahan, D.M. Laws of cognition and the cognition of law. Cognition 135, 56-60 (2015).

Kamin, K.A. & Rachlinski, J.J. Ex Post ≠ Ex Ante: Determining Liability in Hindsight. Law & Human Behavior 19, 89-104 (1995).

Lempert, R.O. Modeling Relevance. Mich. L. Rev. 75, 1021-57 (1977).

Pennington, N. & Hastie, R. A Cognitive Theory of Juror Decision Making: The Story Model. Cardozo L. Rev. 13, 519-557 (1991).

Schauer, F. On the Supposed Jury-Dependence of Evidence Law. U. Pa. L. Rev. 155, 165-202 (2006).


Scurich, N. The Dynamics of Reasonable Doubt. (Ph.D. dissertation, University of Southern California, 2012). 

Simon, D. A Third View of the Black Box: Cognitive Coherence in Legal Decision Making. U. Chi. L. Rev. 71, 511-586 (2004).


Simon, D., Pham, L.B., Le, Q.A. & Holyoak, K.J. The Emergence of Coherence over the Course of Decisionmaking. J. Experimental Psych. 27, 1250-1260 (2001).

Reader Comments (1)

Dan,
Really nice article and analysis. Thanks.
I need to digest and understand all the details. This will take a while.
I think that there are strong neural correlates to what you are saying. Again, we will see.
I really like the collapse of the likelihood ratio, which seems to have deep resonance with what I am doing.
In interesting cases, thousands of likelihood ratios, some not apparently related, would all collapse at the same time. Their collapse would be expected to destabilize other parts of the system unrelated to the facts of the initial likelihood collapse.

November 27, 2015 | Unregistered Commenter Eric Fairfield
