
Thursday, Sep 20, 2018

Help wanted--to identify the cognitive bias at work in people's preferences for where plastic should be extracted from the ocean

There’s an interesting puzzle being debated over on the blog of former Freud expert & current stats legend Andrew Gelman.  The question (posed by guest blogger Phil) is why people who are concerned about plastic deposits in the ocean seem to prefer removal schemes that operate far from the source, notwithstanding the greater efficiency of source-based removal.

Presumably one cognitive bias or another is at work—but what exactly is the nature of this mental miscue?

It struck me that the 13 billion readers of this blog would be well situated to help answer this question.

So have at it.

But note this one proviso: in addition to identifying the responsible bias and explaining how it works, suggest (in broad outline form) an empirical test one could perform to verify the posited account.

The problem of fish choking on plastic in the ocean is bad enough. We don’t need to make things worse by drowning reflective people in a sea of just-so stories.

Tuesday
Sep182018

Civic-epistemic virtues in the Risk Regulation Republic

From a recent lecture, this one at Texas Tech, in Lubbock, Texas (slides here):

My goal is to present evidence on the mental dispositions necessary for enlightened self-government in a risk-regulation republic.

By a “risk regulation republic,” I mean a regime that is charged with using the best scientific evidence at its disposal to protect its citizens from all manner of hazards—from environmental ones, like climate change; to public health ones, like infection by the Zika virus; to social ones, like crime victimization or financial poverty.

Because the risk-regulation republic is democratic, its success in attaining these ends will depend in part on its citizens’ capacity to recognize such evidence. What kinds of mental dispositions—call them the civic epistemic virtues—does that capacity require?

For over two decades, the answer has been assumed to be one or another form of civic science literacy. As a theoretical construct, “civic science literacy” consists in knowledge of certain foundational scientific findings (e.g., human beings evolved from other species of animals; the Earth revolves around the Sun rather than vice versa), along with a set of critical reasoning skills that enable citizens to enlarge their stock of scientific knowledge and to bring it to bear on risk-regulation and other policy issues.

This position, I’ll argue, is incomplete.  Indeed, it is dangerously incomplete: for unless civic science literacy is accompanied by another science-reasoning disposition, the widespread attainment of the knowledge and reasoning skills that civic science literacy comprises can actually impede public engagement with the best available evidence—and deepen predictable, baleful forms of cultural polarization over what science knows.

The additional disposition that's needed to orient civic science literacy is science curiosity.

The position that enlightened self-government requires science curiosity is definitely not new. Dewey saw science curiosity as an indispensable civic-epistemic virtue.  He was right, although not merely because curiosity motivates knowledge acquisition and activates information processing essential to its use—Dewey’s central points. 

What makes science curiosity a civic-epistemic virtue in the risk-regulation republic is the role this disposition can play in quieting the defensive, identity-protective forms of cognition that turn science comprehension into a barrier rather than an entryway to public recognition of the best available evidence on societal risks.

Monday, Sep 17, 2018

Some reflections/admonitions on graphic reporting of data

A recent instructional lecture delivered at the Annenberg Public Policy Center. Slides here.

Boo!:

[slide: example of a poorly designed data graphic]

Yay!:

[slide: example of a well designed data graphic]

ooooo! ahhhhhh!


Thursday, Sep 6, 2018

Return of the chick sexers . . .

A repeat, but one that warrants repeating at this time of year . . . .


Okay, here’s a set of reflections that seem topical as another school year begins.

The reflections can be structured with reference to a question:

What’s the difference between a lawyer and a chick sexer?

It’s not easy, at first, to figure out what they have in common.  But once one does, the risk that one won’t see what distinguishes them is much bigger, in actuarial and consequential terms.

I tell people about the link between them all the time—and they chuckle.  But in fact, I spend hours and hours and hours per semester eviscerating comprehension of the critical distinction between them in people who are filled with immense intelligence and ambition, and who are destined to occupy positions of authority in our society.

That fucking scares me.

Anyway, the chick sexer is the honey badger of cognitive psychology: relentlessly fascinating, and adorable. But because cognitive psychology doesn’t have nearly as big a presence on YouTube as do amusing voice-overs of National Geographic wildlife videos, the chick sexer is a lot less famous.

So likely you haven’t heard of him or her.

But in fact the chick sexer plays a vital role in the poultry industry. It’s his or her responsibility to separate the baby chicks, moments after birth, on the basis of gender.

The females are more valuable, at least from the point of view of the industry. They lay eggs.  They are also plumper and juicier, if one wants to eat them. Moreover, the stringy scrawny males, in addition to being not good for much, are ill-tempered & peck at the females, steal their food, & otherwise torment them.

So the poultry industry basically just gets rid of the males (or the vast majority of them; a few are kept on and lead a privileged existence) at the soonest opportunity—minutes after birth.

The little newborn hatchlings come flying (not literally; chickens can’t fly at any age) down a roomful of conveyor belts, 100’s per minute. Each belt is manned (personed) by a chick sexer, who deftly plucks (as in grabs; no feathers at this point) each chick off the belt, quickly turns him/her over, and in a split second determines the creature’s gender, tossing the males over his or her shoulder into a “disposal bin” and gently setting the females back down to proceed on their way.

They do this unerringly—or almost unerringly (99.99% accuracy or whatever).

Which is astonishing. Because there’s no discernible difference, or at least none that anyone can confidently articulate, in the relevant anatomical portions of the minutes-old chicks.

You can ask the chick sexer how he or she can tell the difference.  Many will tell you some story about how a bead of sweat forms involuntarily on the male chick’s beak, or how he tries to distract you by asking for the time of day or for a cigarette, or how the female will hold one’s gaze for a moment longer or whatever.

This is all bull/chickenshit. Or technically speaking, “confabulation.”

Indeed, the more self-aware and honest members of the profession just shrug their shoulders when asked what it is that they are looking for when they turn the newborn chicks upside down & splay their little legs.

But while we don’t know what exactly chick sexers are seeing, we do know how they come to possess their proficiency in distinguishing male from female chicks: by being trained by a chick-sexing grandmaster.

For hours a day, for weeks on end, the grandmaster drills the aspiring chick sexers with slides—“male,” “female,” “male,” “male,” “female,” “male,” “female,” “female”—until they finally acquire the same power of discernment as the grandmaster, who likewise is unable to give a genuine account of what that skill consists in.

This is a true story (essentially).

But the perceptive feat that the chick sexer is performing isn’t particularly exotic.  In fact, it is ubiquitous.

What the chick sexer does to discern the gender of chicks is an instance of pattern recognition.

Pattern recognition is a cognitive operation in which we classify a phenomenon by rapidly appraising it in comparison to a large stock of prototypes acquired by experience.

The classification isn’t made via conscious deduction from a set of necessary and sufficient conditions but rather tacitly, via a form of perception that is calibrated to detect whether the object possesses a sufficient number of the prototypical attributes—as determined by a gestalt, “critical mass” intuition—to count as an instance of it.
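
A toy illustration may help fix the idea (a minimal sketch; the attribute names are invented for the example): the “classifier” simply tallies overlap with stored prototypes instead of applying explicit necessary-and-sufficient rules.

```python
# Toy "pattern recognizer": classify an instance by its resemblance to stored
# prototypes (a gestalt, "critical mass" tally) rather than by explicit rules.
def classify(instance, prototypes):
    """instance: dict of attribute -> value; prototypes: label -> attribute dict."""
    def overlap(proto):
        return sum(instance.get(k) == v for k, v in proto.items())
    return max(prototypes, key=lambda label: overlap(prototypes[label]))

prototypes = {
    "male":   {"plumage": "scrawny", "temperament": "ill"},
    "female": {"plumage": "plump",   "temperament": "calm"},
}

print(classify({"plumage": "plump", "temperament": "calm"}, prototypes))  # -> female
```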

All manner of social competence—from recognizing faces to reading others’ emotions—depends on pattern recognition.

But so do many specialized ones. What distinguishes a chess grandmaster from a modestly skilled amateur player isn’t her capacity to conjure and evaluate a longer sequence of potential moves but rather her ability to recognize favorable board positions based on their affinity to a large stock of ones she has determined by experience to be advantageous.

Professional judgment, too, depends on pattern recognition.

For sure, being a good physician requires the capacity and willingness to engage in conscious and unbiased weighing of evidence diagnostic of medical conditions. But that’s not sufficient; unless the doctor includes only genuinely plausible illnesses in her set of maladies worthy of such investigation, the likelihood that she will either fail to test for the correct one or fail to identify it soon enough to intervene effectively will be too high.

Expert forensic auditors must master more than the technical details of accounting; they must acquire a properly calibrated capacity to recognize the pattern of financial irregularity that helps them to extract evidence of the same from mountains of business records.

The sort of professional judgment one needs to be a competent lawyer depends on a properly calibrated capacity for pattern recognition, too.

Indeed, this was the key insight of Karl Llewellyn.  The most brilliant member of the Legal Realist school, Llewellyn observed that legal reasoning couldn’t plausibly be reduced to deductive application of legal doctrines. Only rarely were outcomes uniquely determined by the relevant set of formal legal materials (statutes, precedents, legal maxims, and the like).

Nevertheless, judges and lawyers, he noted, rarely disagree on how particular cases should be resolved. How this could be so fascinated him!

The solution he proposed was professional “situation sense”: a perceptive faculty, acquired by education and experience, that enabled lawyers to reliably appraise specific cases with reference to a stock of prototypical “situation types,” the proper resolution of which was governed by shared apprehensions of “correctness” instilled by the same means.

This feature of Llewellyn’s thought—the central feature of it—is weirdly overlooked by many scholars who characterize themselves as “realists” or “New Realists,” and who think that Llewellyn’s point was that because there’s no “determinacy” in “law,” judges must be deciding on the basis of “political” sensibilities of the conventional “left-right” sort, generating differences in outcome across judges of varying ideologies.

It’s really hard to get Llewellyn more wrong than that!

Again, his project was to identify how there could be pervasive agreement among lawyers and judges on what the law is despite its logical indeterminacy. His answer was that members of the legal profession, despite heterogeneity in their “ideologies” politically understood, shared a form of professionalized perception—“situation sense”—that by and large generated convergence on appropriate outcomes the coherence of which would befuddle non-lawyers.

Llewellyn denied, too, that the content of situation sense admitted of full specification or articulation. The arguments that lawyers made and the justifications that judges give for their decisions, he suggested, were post hoc rationalizations.  

Does that mean that for Llewellyn, legal argument is purely confabulatory? There are places where he seems to advance that claim.

But the much more intriguing and I think ultimately true explanation he gives for the practice of reason-giving in lawyerly argument (or just for lawyerly argument) is its power to summon and focus “situation sense”: when effective, argument evokes both apprehension of the governing “situation” and motivation to reach a situation-appropriate conclusion.

Okay. Now what is analogous between lawyering and chick-sexing should be readily apparent.

The capacity of the lawyer (including the one who is a judge) to discern “correct” outcomes as she grasps and manipulates indeterminate legal materials is the professional equivalent of—and involves the exercise of the same cognitive operation as—the chick sexer’s power to apprehend the gender of the day-old chick from inspection of its fuzzy, formless genitalia.

In addition, the lawyer acquires her distinctive pattern-recognition capacity in the same way the chick sexer acquires his: through professional acculturation.

What I do as a trainer of lawyers is analogous to what the chick-sexing grandmaster does.  “Proximate causation,” “unlawful restraint of trade,” “character propensity proof/permissible purpose,” “collateral (not penal!) law”—“male,” “male,” “female,” “male”: I bombard my students with a succession of slides that feature the situation types that stock the lawyer’s inventory, and inculcate in students the motivation to conform the results in particular cases to what those who practice law recognize—see, feel—to be the correct outcome.

It works. I see it happen all the time. 

It’s quite amusing. We admit students to law school in large part because of their demonstrated proficiency in solving the sorts of logic puzzles featured on the LSAT. Then we torment them, Alice-in-Wonderland fashion, by presenting to them as “paradigmatic” instances of legal reasoning outcomes that clearly can’t be accounted for by the contorted simulacra of syllogistic reasoning that judges offer to explain them. 

They stare uncomprehendingly at written opinions in which a structural ambiguity is resolved one way in one statute and the opposite way in another--by judges who purport to be following the “plain meaning” rule.

They throw their hands up in frustration when judges insist that their conclusions are logically dictated by patently question-begging standards  (“when the result was a reasonably foreseeable consequence of the defendant’s action. . .  “) that can be applied only on the basis of some unspecified, and apparently not even consciously discerned, extra-doctrinal determination of the appropriate level of generality at which to describe the relevant facts.

But the students do learn—that the life of the law is not “logic” (to paraphrase Holmes, a proto-realist) but “experience,” or better, perception founded on the “experience” of becoming a lawyer, replete with all the sensibilities that being that sort of professional entails.

The learning is akin to the socialization process that the students all experienced as they negotiated the path from morally and emotionally incompetent child to competent adult. Those of us who are already socially competent model the right reactions for them in our own reactions to the materials—and in our reactions to the halting and imperfect attempts of the students to reproduce them on their own.

“What,” I ask in mocking surprise, “you don’t get why these two cases reached different results in applying the ‘reasonable foreseeability’ standard of proximate causation?” 

Seriously, you don’t see why, for an arsonist to be held liable for causing the death of firefighters, it's enough to show that he could ‘reasonably foresee’ 'death by fire,' whether or not he could foresee  ‘death by being trapped by fires travelling the particular one of 5x10^9 different paths the flames might have spread through a burning building'?! But why ‘death by explosion triggered by a spark emitted from a liquid nitrate stamping machine when knocked off its housing by a worker who passed out from an insulin shock’—and not simply 'death by explosion'—is what must be "foreseeable" to a manufacturer (one warned of explosion risk by a safety inspector) to be convicted for causing the death of employees killed when the manufacturer’s plant blew up? 

"Anybody care to tell Ms. Smith what the difference is,” I ask in exasperation.

Or “Really,” I ask in a calculated (or worse, in a wholly spontaneous, natural) display of astonishment,

you don’t see why someone's ignorance of what's on the ‘controlled substance’ list doesn’t furnish a "mistake of law" defense (in this case, to a prostitute who hid her amphetamines in tin foil wrap tucked in her underwear--is that where you keep your cold medicine or ibuprofen! Ha ha ha ha ha!!), but why someone's ignorance of the types of "mortgage portfolio swaps" that count as loss-generating "realization events" under IRS regs (the sort of tax-avoidance contrivance many of you will be paid handsomely by corporate law firm clients to do) does furnish one? Or why ignorance of the criminal prohibition on "financial structuring" (the sort of stratagem a normal person might resort to to hide assets from his spouse during a divorce proceeding) furnishes a defense as well?!

Here Mr. Jones: take my cellphone & call your mother to tell her there’s serious doubt about your becoming a lawyer. . . .

This is what I see, experience, do.  I see my students not so much “learning to think” like lawyers but just becoming them, and thus naturally seeing what lawyers see.

But of course I know (not as a lawyer, but as a thinking person) that I should trust how things look and feel to me only if corroborated by the sort of disciplined observation, reliable measurement, and valid causal inference distinctive of empirical investigation.

So, working with collaborators, I design a study to show that lawyers and judges are legal realists—not in the comic-book “politicians in robes” sense that some contemporary commentators have in mind but in the subtle, psychological one that Llewellyn actually espoused.

Examining a pair of genuinely ambiguous statutes, members of the public predictably conform their interpretation of them to outcomes that gratify their partisan cultural or political outlooks, polarizing in patterns the nature of which is dutifully obedient to experimental manipulation of factors extraneous to law but very relevant indeed to how people with those outlooks think about virtue and vice.

But not lawyers and judges: they converge on interpretations of these statutes, regardless of their own cultural outlooks and regardless of experimental manipulations that vary which outcome gratifies those outlooks.

They do that not because they, unlike members of the public, have acquired some hyper-rational information-processing capacity that blocks out the impact of “motivated reasoning”: the lawyers and judges are just as divided as members of the public, on the basis of the same sort of selective crediting and discrediting of evidence, on issues like climate change and the legalization of marijuana and prostitution.

Rather the lawyers and judges converge because they have something else that members of the public don’t: Llewellyn’s situation sense—a professionalized form of perception, acquired through training and experience, that reliably fixes their attention on the features of the “situation” pertinent to its proper legal resolution and blocks out the distracting allure of features that would matter to a non-lawyer—i.e., a normal person, with one or another kind of “sense” reliably tuned to enabling him or her to be a good member of a cultural group on which his or her status depends.

So, that’s what lawyers and chick sexers have in common: pattern recognition, situation sense, appropriately calibrated to doing what they do—or, in a word, professional judgment.

But now, can you see what the chick sexer and the lawyer don’t have in common?

Perhaps you don’t; because even in the course of this account, I feel myself having become an agent of the intoxicating, reason-bypassing process that imparting “situation sense” entails.

But you might well see it—b/c here all I’ve done is give you an account of what I do as opposed to actually doing it to you.

We know something important about the chick sexer’s judgment in addition to knowing that it is an instance of pattern recognition: namely, that it works.

The chick sexer has a mission in relation to a process aimed at achieving a particular end.  That end supplies a normative standard of correctness that we can use not only to test whether chick sexers, individually and collectively, agree in their classifications but also on whether they are classifying correctly.

Obviously, we’ll have to wait a bit, but if we collect rather than throw half of them away, we can simply observe what gender the baby chicks classified by the sexer as “male” and “female” grow up to be.

If we do that test, we’ll find out that the chick sexers are indeed doing a good job.

We don’t have that with lawyers’ or judges’ situation sense.  We just don’t.

We know they see the same thing; that they are, in the astonishing way that fascinated Llewellyn, converging in their apprehension of appropriate outcomes across cases that “lay persons” lack the power to classify correctly.

But we aren’t in a position to test whether they are seeing the right thing.

What is the goal of the process the lawyers and judges are involved in?  Do we even agree on that?

I think we do: assuring the just and fair application of law.

That’s a much more general standard, though, than “classifying the gender of chicks.”  There are alternative understandings of “just” and “fair” here.

Actually, though, this is still not the point at which I’m troubled.  Although for sure I think there is heterogeneity in our conceptions of the “goals” that the law aims at, I think they are all conceptions of a liberal political concept of “just” and “fair,” one that insists that the state assume a stance of neutrality with respect to the diverse understandings of the good life that freely reasoning individuals (or more accurately groups of individuals) will inevitably form.

But assuming that this concept, despite its plurality of conceptions, has normative purchase with respect to laws and applications of the same (I believe that; you might not, and that’s reasonable), we certainly don’t have a process akin to the one we use for chick sexers to determine whether lawyers and judges’ situation sense is genuinely calibrated to achieving it.

Or if anyone does have such a process, we certainly aren’t using it in the production of legal professionals.

To put it in terms used to appraise scientific methods, we know the professional judgment of the chick sexer is not only reliable—consistently attuned to whatever it is that appropriately trained members of their craft are unconsciously discerning—but also valid: that is, we know that the thing the chick sexers are seeing (or measuring, if we want to think of them as measuring instruments of a special kind) is the thing we want to ascertain (or measure), viz., the gender of the chicks.

In the production of lawyers, we have reliability only, without validity—or at least without validation.  We do successfully (remarkably!) train lawyers to make out the same patterns when they focus their gaze at the “mystifying cloud of words” that Cardozo identified the law as comprising. But we do nothing to assure that what they are discerning is the form of justice that the law is held forth as embodying.

Observers fret—and scholars using empirical methods of questionable reliability and validity purport to demonstrate—that judges are mere “politicians in robes,” whose decisions reflect the happenstance of their partisan predilections.

That anxiety that judges will disagree based on their “ideologies” bothers me not a bit.

What does bother me—more than just a bit—is the prospect that the men and women I’m training to be lawyers and judges will, despite the diversity of their political and moral sensibilities, converge on outcomes that defy the basic liberal principles that we expect to animate our institutions.

The only thing that I can hope will stop that from happening is for me to tell them that this is how it works.  Because if it troubles me, I have every reason to think that they, as reflective decent people committed to respecting the freedom & reason of others, will find some of this troubling too.

Not so troubling that they can’t become good lawyers. 

But maybe troubling enough that they won't stop being reflective moral people in their careers as lawyers; troubling enough so that if they find themselves in a position to do so, they will enrich the stock of virtuous-lawyer prototypes that populate our situation sense  by doing something  that they, as reflective, moral people—“conservative” or “liberal”—recognize is essential to reconciling being a “good lawyer” with being a member of a profession essential to the good of a liberal democratic regime.

That can happen, too.

Thursday, Jun 28, 2018

Is the perverse effect of AOT on political polarization confounded by a missing variable? Nah.

Interesting paper on "actively open-minded thinking" (AOT) and polarization of climate change beliefs:

Stenhouse, N., Myers, T.A., Vraga, E.K., Kotcher, J.E., Beall, L. & Maibach, E.W. The potential role of actively open-minded thinking in preventing motivated reasoning about controversial science. Journal of Environmental Psychology 57, 17-24 (2018).

Consistent with what Jon Corbin & I found (A note on the perverse effects of actively open-minded thinking on climate-change polarization, Research & Politics 3 (2016), available at https://doi.org/10.1177/205316801667670), and notwithstanding a representation in the research "highlights," the study finds no evidence that AOT reduces political polarization over human-caused climate change. Also consistent with our findings, the study (according to the lead author in correspondence; the paper is ambiguous on this point) found that AOT interacts with ideology, a relationship that generates the "perverse effect" that Jon & I reported.

Nevertheless, the authors of this paper purport to identify “significant problems with” Jon & my paper:

Specifically, we focus on the lack of a measure of scientific knowledge, or the interaction of scientific knowledge with political ideology, in their regression model. This is a problem because Kahan's own research (Kahan et al., 2012) has suggested that the interaction between scientific knowledge and ideology is an important influence on views on climate change, with higher scientific knowledge being associated with greater perceived risk for liberals, but lower perceived risk for conservatives.

“Controlling” for scientific literacy, the authors contend, vitiates the interaction between AOT and political outlooks.

Well, I decided to redo the analysis from Jon & my paper after plugging in the predictor for the Ordinary Science Intelligence scale (“scicomp_i”) and a cross-product interaction for AOT and OSI.  Nothing changed in relation to our finding that AOT interacts with ideology (“crxsc”), generating the “perverse effect” of increased polarization as AOT scores go up (the data set for the study is posted here and makes checking this out very easy).  So that “significant problem” with our analysis turns out not to be one.
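
For concreteness, the re-analysis amounts to something like this (a sketch only: the file and outcome names are placeholders; “scicomp_i” is the dataset’s OSI predictor, and “aot:conserv” stands in for the paper’s “crxsc” cross-product):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder file and variable names; "scicomp_i" is the OSI predictor named above.
df = pd.read_csv("kahan_corbin_replication.csv")

# Original specification: AOT x political conservatism cross-product.
m1 = smf.ols("climate_belief ~ aot * conserv", data=df).fit()

# Stenhouse et al.'s proposed "control": add OSI plus an AOT x OSI cross-product.
m2 = smf.ols("climate_belief ~ aot * conserv + aot * scicomp_i", data=df).fit()

# The AOT x conservatism coefficient survives the "control" in both models.
print(m1.params["aot:conserv"], m2.params["aot:conserv"])
```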

No idea why the observed interaction disappeared for Stenhouse et al.  We are in the process of examining each other’s datasets to try to figure out why.

Stay tuned.

Tuesday, Jun 26, 2018

Reflections on "System 2 bias"--part 2 of 2


Part 2 of 2 of reflections on Miller & Sanjurjo. Part 1 here.

So “yesterday”™ I presented some reflections on what I proposed calling “System 2 bias” (S2b).

As I explained, good System 2 reasoning in fact depends on intuitions calibrated to perceive a likely System 1 error and to summon the species of conscious, effortful information processing necessary to avoid such a mistake.

S2b occurs when one of those well trained  intuitions misfires.  Under its influence, a normally strong reasoner will too quickly identify and correct a judgment he or she mistakenly attributes to over-reliance on system 1, heuristic reasoning. 

As such, S2b will have two distinctive features.  One is that it will be made, paradoxically, much more readily by proficient reasoners, who possess a well stocked inventory of System 2-enabling intuitions, than by nonproficient ones, who don’t. 

The other is that reasoners who display this distinctive form of biased information processing will strongly resist the correction of it. The source of their mistake is a normally reliable intuition essential to seeing that a particular species of judgment is wrong or fallacious.  It is in the nature of all reasoning intuitions that they provoke a high degree of confidence that one’s perception of a problem and one’s solution to it are correct. It is the absence or presence of that feeling that tells a reasoner when to turn on his or her capacity for conscious, effortful information processing, and when to turn it off and move on.

I suggested that S2b was at the heart of the Miller-Sanjurjo affair.  Under the influence of S2b, GVT (Gilovich, Vallone & Tversky) and others too quickly endorsed—and too stubbornly continue to defend—an intuitively pleasing but flawed analytical method for remedying patterns of thought that they believe reflect the misidentification of independent events (successes in basketball shots) as interdependent ones.

But this account is a product of informed conjecture only.  We should try to test it, if we can, by experiments that attempt to lure strong reasoners into the signature errors of S2b.

This is where the “Margolis” problem (Margolis 1996, pp. 53f; identified, helpfully, by Josh Miller as an adaptation of “Bertrand’s box paradox”) comes in.

The right answers to “a,” “b,” and “c” are in fact “67%-67%-67%.” (If you are scratching your head on this, then realize that there are twice as many ways to get red if one selects the red-red chip than if one selects the blue-red one; accordingly, if one is picking from a vessel with red-red and red-blue, “red side up” will come up twice as often for the red-red chip as it will for the red-blue one…. Or realize that if you answered “67%” for “c,” then logically it must be 67% for “a” and “b” as well—for it surely doesn’t matter for purposes of “c” which color the selected chip displays….)
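
If you’d rather count than intuit, a brute-force enumeration of the Bertrand’s-box structure (a minimal sketch, since the problem itself isn’t reproduced in this post) confirms the 67% answer:

```python
from fractions import Fraction

# The two chips relevant to the "red side up" question: one red/red, one red/blue.
chips = [("red", "red"), ("red", "blue")]

shown_red = hidden_red = 0
for chip in chips:                       # each chip equally likely to be drawn
    for up, down in (chip, chip[::-1]):  # each face equally likely to land up
        if up == "red":
            shown_red += 1
            hidden_red += (down == "red")

print(Fraction(hidden_red, shown_red))   # 2/3: Pr(other side red | red showing)
```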

But “50%-50%-67%” is an extremely seductive “lure.”  We might predict, then, that as reasoning proficiency increases, study subjects will become progressively more likely to pick “67%-67%-67%” rather than “50%-50%-67%.”

But that’s not what we see!

In fact, the likelihood of “50%-50%-67%” increases steadily as one’s Cognitive Reflection Test score increases.  In other words, one has to be pretty smart even to take the bait in the “Margolis”/“Bertrand’s box paradox” problem.  Those who score low on CRT are in fact all over the map: “33%-33%-33%,” “50%-50%-50%,” etc. are all more common guesses for subjects with low CRT scores than is “67%-67%-67%.”

Hence, we have an experimental model here of how “System 2 bias” works, one that demonstrates that certain types of error are more likely, not less, as cognitive proficiency increases.  (For more of the same, see Peters et al. 2006, 2018.)

This is a finding, btw, that has important implications for using the Margolis/Bertrand question as part of a standardized cognitive-proficiency assessment.  In short, either one shouldn’t use the item, b/c it has a negative correlation with performance of the remaining assessment items, or one should use the “wrong answer” as the right one for measuring the target reasoning disposition, since in fact getting the wrong answer is a better indicator of that disposition than is getting the right one.

As I said, the other signature attribute of this bias is how stubbornly those who display System 2 bias cling to the wrong answers it begets.  There is anecdotal evidence for this in Margolis (1996, pp. 53-56), which corresponds nicely to my own experience in trying to help those high in cognitive proficiency to see the “right” answer to this problem. Also, consider how many smart people tried to dismiss M&S when Gelman first featured their result on his blog.

But it would be pretty cool to have an experimental proof of this aspect to the problem, too.  Any ideas anyone?

In any event, here you go: an example of an “S2b” problem where being smart correlates negatively with the right answer.

It’s not a knock down proof that S2b explains the opposition to the Miller-Sanjurjo proof.  But it’s at least a “brick’s worth” of evidence to that effect.

References

Margolis, H. Dealing with risk: why the public and the experts disagree on environmental issues (University of Chicago Press, Chicago, IL, 1996).

Miller, Joshua B. and Sanjurjo, Adam, Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers, Econometrica (2018). Available at SSRN: https://ssrn.com/abstract=2627354 or http://dx.doi.org/10.2139/ssrn.2627354

Peters et al., The loss‐bet paradox: Actuaries, accountants, and other numerate people rate numerically inferior gambles as superior. Journal of Behavioral Decision Making (2018), available at https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.2085.

Peters, E., et al. Numeracy and Decision Making. Psychol Sci 17, 407-413 (2006).

Friday, Jun 22, 2018

Fake news vs. "counterfeit social proof"--lecture summary & slides

Basic organization of talk I gave at the Lucerne conference (slides here).

I.  The public’s engagement with fake news is not credulous; it is motivated.

II.   “Fact checking” and like means of correcting false belief are unlikely to be effective and could in fact backfire.

III.  “Fake news” of the Macedonian variety is not particularly consequential: the identity-protective attitudes that motivate consumption of fake news will impel the same fact-distorting position-taking whether people are exposed to fake news or not.

IV.  What is potentially consequential are the forms of “counterfeit social proof” that the Russian government disseminated in the 2016 election.  These materials predictably trigger the identity-protective stance that makes citizens of diverse outlooks impervious to the truth.

V.  The form of information that is most likely to preempt or reverse identity-protective cognition features vivid and believable examples of diverse groups evincing belief in action-guiding facts and forms of information.

Wednesday, Jun 20, 2018

Reflections on "System 2 bias," part 1 of 2 (I think)

Some thoughts about Miller & Sanjurjo, Part 1 of 2:

Most of the controversy stirred up by M&S centers on whether they are right about the methodological defect they detected in Gilovich, Vallone, and Tversky (1985) (GVT) and other studies of the “hot hand fallacy.”

I’m fully persuaded by M&S’s proof. That is, I get (I think!) what the problem is with GVT’s specification of the null hypothesis in this setting.

Whether in fact GVT’s conclusions about basketball shooting hold up once one corrects this defect (i.e., substitutes the appropriate null)  is something I feel less certain of, mainly because I haven’t invested as much time in understanding that part of M&S’s critique.

But what interests me even more is what the response to M&S tells us about cognition.

The question, essentially, is how could so many extremely smart people (GVT & other empirical investigators; the legions of teachers who used GVT to instruct 1,000’s of students, et al.) have been so wrong for so long?! Why, too, does it remain so difficult to make those intelligent people get the problem M&S have identified?

The answer that makes the most sense to me is that GVT and others were, ironically, betrayed by intuitions they had formed for sniffing out the general public’s intuitive mistakes about randomness.

The argument goes something like this:

I. The quality of cognitive reflection depends on well calibrated non-conscious intuitions.

There is no System 2 ex nihilo. Anything that makes it onto the screen of conscious reflection (System 2) was moments earlier residing in the realm of unconscious thought (System 1).  Whatever yanked that thought out and projected it onto the screen, moreover, was, necessarily, an unconscious mental operation of some sort, too.

It follows that reasoners who are adept at System 2 (conscious, deliberate, analytical) thinking necessarily possess well behaved System 1 (unconscious, rapid, affect-laden) intuitions. These intuitions recognize when a decisionmaking task (say, the detection of covariance) merits the contribution that System 2 thinking can make, and activates the appropriate form of conscious, effortful information processing.

In anyone lucky enough to have reliable intuitions of this sort, what trained them was, most likely, the persistent exercise of reliable and valid System 2 information processing, as brought to bear over & over in the process of learning how to be a good thinker.

In sum, System 1 and System 2 are best thought of not as discrete and hierarchical modes of cognition but rather as integrated and reciprocal ones.

II.  Reflective thinkers possess intuitions calibrated to recognize and avoid the signature lapses in System 1 information processing.

The fallibility of intuition is at the core of all the cognitive miscues (the availability effect; hindsight bias; denominator neglect; the conjunction fallacy, etc.) cataloged by Kahneman and Tversky and their scholarly descendants (K&T et al.).  Indeed, good thinking, for K&T et al., consists in the use of conscious, effortful, System 2 reflection to “override” System 1 intuitions when reliance on the latter would generate mistaken inferences.

As discussed, however, System 2 thinking cannot plausibly be viewed as operating independently of its own stable of intuitions, ones finely calibrated to recognize System 1 mistakes and to activate the sort of conscious, effortful thinking necessary to override them.

III. But like all intuitions, the ones reflective people rely on will be subject to characteristic forms of failure—ones that cause them to overestimate instances of overreliance on error-prone heuristic reasoning.

It doesn’t follow, though, that good thinkers will never be misled by their intuitions.  Like all forms of pattern recognition, the intuitions that good thinkers use will be vulnerable to recurring illusions and blind spots.

The sorts of failures in information processing that proficient thinkers experience will be predictably different from the ones that poor and mediocre thinkers must endure.  Whereas the latter’s heuristic errors expose them to one or another form of overreliance on System 1 information processing, the former’s put them at risk of too readily perceiving that exactly that form of cognitive misadventure accounts for some pattern of public decisionmaking.

The occasions on which this form of “System 2 bias” will affect thinking are likely to be rare.  But when they occur, the intuitions that are their source will cling to individuals’ perceptions with the same dogged determination that the ones responsible for heuristic System 1 biases do.

Something like this, I believe, explains how the “ ‘hot hand fallacy’ fallacy” took such firm root. 

It’s a common, heuristic error to believe that independent events—like the outcome of two coin flips—are interdependent. Good reasoners are trained to detect this mistake and to fix it before making a judgment.

GVT spotted what they surmised was likely an instance of this mistake: the tendency of fans, players, and coaches to believe that positive performance, revealed by a short-term string of successful shots, indicated that a player was “hot.”

They tested for this mistake by comparing whether the conditional probability of a successful basketball shot following a string of successes differed significantly from a player’s unconditional probability of making a successful shot.

It didn’t. Case closed.

What didn’t occur to them, though, was that where one uses the sampling method they used—drawing from a finite series without replacement—Pr(basket | success, success, success) − Pr(basket) should be < 0. How much below zero it should be has to be determined analytically or (better) by computer simulation.

So if in fact Pr(basket | success, success, success) − Pr(basket) = 0, the player in question was on an improbable hot streak.

Sounds wrong, doesn’t it? Those are your finely tuned intuitions talking to you; yet they’re wrong. . . .
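
Don’t take the intuition’s word for it—simulate. Here is a minimal Monte Carlo sketch (my own illustration, not GVT’s or M&S’s code) of the within-sequence proportion on which GVT’s method relies:

```python
import random

def avg_prop_after_streak(n=100, p=0.5, k=3, trials=10_000, seed=1):
    """Average, across simulated sequences, of the within-sequence proportion
    of successes immediately following k consecutive successes -- the quantity
    GVT's method implicitly treats as an unbiased estimate of p."""
    rng = random.Random(seed)
    props = []
    for _ in range(trials):
        seq = [rng.random() < p for _ in range(n)]
        after = [seq[i] for i in range(k, n) if all(seq[i - k:i])]
        if after:                        # sequences with no k-streak drop out
            props.append(sum(after) / len(after))
    return sum(props) / len(props)

print(avg_prop_after_streak())           # ~0.46, not 0.5: the expectation is below p
```

For n = 100 and k = 3 the average comes out around 0.46, so a shooter whose observed difference is zero is in fact outperforming the appropriate (sub-zero) null.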

I’ll finish off this series “tomorrow.™”  In the meantime, read this problem & answer the three questions that pertain to it.

Reference

Gilovich, T., Vallone, R. & Tversky, A. The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology 17, 295-314 (1985).


Monday, Jun 18, 2018

Where am I? . . . Lucerne, Switzerland

Should be interesting.  Will send a postcard when I get a chance.

Thursday, Jun 7, 2018

Hey, everybody--come to this cool " 'Hot hand fallacy' fallacy" workshop!

If you're in or can make it to New Haven next Wed:

Paper here.

Wednesday, Jun 6, 2018

Shut up & update! . . . a snippet

Also something I've been working on . . . .

1. “Evidence” vs. “truth”—the law’s position. The distinction between “evidence for” a proposition and the “truth of” it is inscribed in the legal mind through professional training and experience.

Rule 401 of the Federal Rules of Evidence defines “relevance” as “any tendency” of an item of proof “to make a fact … of consequence” to the litigation either “more or less probable” in the estimation of the factfinder. In Bayesian terms, this position is equivalent to saying that an item of proof is “relevant” (and hence presumptively admissible; see Fed. R. Evid. 402) if, in relation to competing factual allegations, the likelihood ratio associated with that evidence is either less than or greater than 1 (Lempert 1977).  

Folksy idioms—e.g., “a brick is not a wall” (Rule 401, advisory committee notes)—are used to teach prospective lawyers that this “liberal” standard of admissibility does not depend on the power of a piece of evidence to establish a particular fact by the requisite standard of proof (“more probable than not” in civil cases; “beyond a reasonable doubt” in criminal cases).

Or in Bayesian terms, we would say that a properly trained legal reasoner does not determine “relevance” (and hence admissibility) by asking whether an item of proof will on its own generate a posterior estimate either for or against the “truth” of that fact. Again, because the process of proof is cumulative, the only thing that matters is that a particular piece of evidence have a likelihood ratio different from 1 in relation to competing litigation hypotheses.
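
A toy calculation (numbers invented for illustration) makes the point: a “brick” with a likelihood ratio of 3 is plainly relevant—it moves the posterior—yet leaves the fact far short of “more probable than not.”

```python
# Fed. R. Evid. 401 in Bayesian terms: an item of proof is "relevant" iff its
# likelihood ratio LR = P(E | H) / P(E | not-H) differs from 1 -- not iff it
# establishes H by the standard of proof.
def update(prior_prob, lr):
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * lr          # Bayes' rule, odds form
    return post_odds / (1 + post_odds)

p = update(prior_prob=0.10, lr=3.0)      # one "brick": LR = 3
print(round(p, 3))                       # 0.25 -- probative, yet far from a "wall"
```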

2. “I don’t believe it . . . .” This popular response, among both pre- and post-publication peer reviewers, doesn’t get the distinction between “evidence for” and the “truth of” an empirical claim.

In Bayesian terms, the reviewer who treats his or her “belief” in the study result as informative is unhelpfully substituting his or her posterior estimate for an assessment of the likelihood ratio associated with the data. Who cares what the reviewer “believes”? Disagreement about the relative strength of competing hypotheses is, after all, the occasion for data collection! If a judge or lawyer can “get” that a “brick is not a wall,” then surely a consumer of empirical research can, too: the latter should be asking whether an empirical study has “any tendency … to make a fact … of consequence” to empirical inquiry either “more or less probable” in the estimation of interested scholars (this is primarily a question of the validity of the methods used and the probative weight of the study finding).

That is, the reviewer should have his or her eyes glued to  the likelihood ratio, and not be distracted by any particular researcher’s posterior.

3.  “Extraordinary claims require extraordinary proof . . . .” No, they really don’t.

This maxim treats the strength with which a fact is held to be true as a basis for discounting the likelihood ratio associated with contrary evidence. The scholar who takes this position is saying, in effect, “Your result should see the light of day only if it is so strong that it flips scholars from a state of disbelief to one of belief, or vice versa.” 

But in empirical scholarship as in law, “A brick is not a wall.”  We can recognize the tendency of a (valid) study result to make some provisional apprehension of truth less probable than it would otherwise be while still believing—strongly, even—that the contrary hypothesis so supported is unlikely to be true.

* * *

Or to paraphrase a maxim Feynman is sometimes (mis)credited with saying, “Shut up & update!”

References

Federal Rules of Evidence (2018) & Advisory Committee Notes.

Lempert, R.O. Modeling relevance. Michigan Law Review, 75, 1021-1057 (1977).


Tuesday, Jun 5, 2018

Fortifying #scicomm craft norms with empirical inquiry-- a snippet

From something I'm working on . . . .

This proposal is about the merger of two sources of insight into public science communication. 

The first comprises the professional judgment of popular-science communicators who typically disseminate knowledge through documentaries and related media. The currency of decisionmaking for these communicators consists in experience-forged hunches about the interests and behavior of target audiences.

Like those of other professionals (Margolis 1987, 1993, 1996), these intuitive judgments are by no means devoid of purchasing power. Indeed, the characteristic problem with craft-based judgment is not that it yields too little practical guidance but that it at least sometimes yields too much: where professional disagreements persist over time, it is typical for both sides to appeal to shared experience and understandings to support plausible but opposing conjectures.

The second source of insight consists of empirical studies aimed at dissolving this constraint on professional judgment. The new “science of science communication” proposes that science’s own distinctive methods of disciplined observation and causal inference be made a part of the practice of professional science communication (Jamieson, Kahan & Scheufele 2017). Such methods can, in particular, be used to generate evidence for evaluating the conflicting positions that figure in persistent professional disagreements.

What is persistently holding this research program back, however, is its principal location: the social science lab. 

Lab studies (including both observational studies and experiments) aspire to silence the cacophony of real-world influences that confound inference on how particular psychological mechanisms fortify barriers to public science comprehension.

But precisely because they test such hypotheses in experimentally pristine conditions, lab studies don’t on their own tell professional science communicators what to do.  Additional empirical research is necessary—in the field—to adjudicate between competing conjectures about how results observed in the lab can be reproduced in the real world (Kahan and Carpenter 2017; Kahan 2014).

The need for practitioner-scholar collaborations in such a process was one of the central messages of the recent National Academies of Science (2017) report  on the science of science communication.  “Through partnerships entailing sustained interaction with members of the . . . practitioner community, researchers come to understand local needs and circumstances, while . . . practitioners gain a better understanding of the process of research and their role in it” (ibid. p. 42). The current proposal responds to the NAS’s important prescription.

References

 Kahan, D.M. Making Climate-Science Communication Evidence-Based—All the Way Down. in Culture, Politics and Climate Change (ed. M. Boykoff & D. Crow) 203-220 (Routledge Press, New York, 2014).

 Kahan, D.M. & Carpenter, K. Out of the lab and into the field. Nature Climate Change 7, 309-10 (2017).

Jamieson, K.H., Kahan, D.M. & Scheufele, D.A. The Oxford Handbook of the Science of Science Communication (Oxford University Press, 2017).

Margolis, H. Dealing with risk: why the public and the experts disagree on environmental issues (University of Chicago Press, Chicago, 1996).

Margolis, H. Paradigms and Barriers (University of Chicago Press, Chicago, 1993).

Margolis, H. Patterns, Thinking, and Cognition (University of Chicago Press, Chicago, 1987).

Monday, Jun 4, 2018

Still here . . .

 

Thursday, May 3, 2018

Guest post: early interest in science predicts long-term trust of scientists

Once again, we bring you the cutting edge of #scicomm science from someone who can actually do it! Our competitors can only watch in envy.

The Enduring Effects of Scientific Interest on Trust in Climate Scientists in the U.S.

Matt Motta (@matt_motta)

Americans’ attitudes toward scientists are generally positive. While trust in the scientific community has been on the decline in recent years on the ideological right, Americans are usually willing to defer to scientific expertise on a wide range of issues.

Americans’ attitudes toward climate scientists, however, are a notable exception. Climate scientists are amongst the least trusted scientific authorities in the U.S., in part due to low levels of support from Republicans and Independents.

A recent Pew study found that less than a third (32%) of Americans believe that climate scientists’ research is based on the “best available evidence” most of the time. Similar numbers believe that climate scientists are mostly influenced by their political leanings (27%) and the desire to advance their careers (36%).

Why do (some) Americans distrust climate scientists? This is an important question, because (as I have shown in previous research) negativity toward scientists is associated with the rejection of scientific consensus on issues like climate change. It is also associated with support for political candidates (like George Wallace and Donald Trump) who are skeptical of the role experts play in the policymaking process.

Figuring out why Americans distrust climate scientists may be useful for devising new strategies to rekindle that trust. Previous research has done an excellent job documenting the effects of political ideology on trust in climate scientists. Few, however, have considered the effect of Americans’ interest in science and knowledge of basic scientific principles – both of which have been linked to positivity toward science and scientists.

In a study recently published in Nature Climate Change, I demonstrate that interest in scientific topics at young ages (12-14) is associated with increased trust in climate scientists decades later in adulthood, across the ideological spectrum.

In contrast, I find little evidence that young adults’ levels of science comprehension (i.e., science knowledge and quantitative skills) increase trust later in life. To the extent that they do, the effects of science knowledge and quantitative ability tend to be strongly conditioned by ideology.

In addition to considering the effects of science interest and comprehension on trust in climate scientists, my work offers two additional points of departure from previous research. First, few have investigated these potential determinants of attitudes toward climate scientists in young adulthood. This is surprising, because previous research has found that this is a critical stage in the development of attitudes toward science.

Second, fewer still have studied how these factors might interact with political ideology to shape opinion toward climate scientists. As readers of this blog might expect, Americans who are highly interested in science should exhibit higher levels of trust across the ideological divide. This is consistent with research suggesting that science curiosity encourages open-minded engagement with scientific issues – thereby increasing acceptance of science and scientific consensus.

In contrast, science comprehension should polarize opinions about climate scientists along ideological lines. If science knowledge and quantitative skills increase trust in climate scientists, we might expect this effect to be greater for liberals – who tend to be more accepting of climate science than conservatives. Again familiar to readers of this blog, this point is consistent with research showing that people who “think like scientists” tend to use their skills to reinforce existing social, political, and cultural group allegiances.

Using panel data from the Longitudinal Study of American Youth (LSAY), I model American adults’ trust in climate scientists (in 2011) as a function of their science interest and comprehension measured at ages 12-14 (in 1987). I structure these models hierarchically because respondents were cluster sampled at the school level, and control for several potentially relevant demographic factors (e.g., race, sex). For a more technical discussion of how I do this, please consult the study’s methods section (just after the discussion).
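
For readers who want to see the shape of such a model, here is a hedged sketch (the file and column names are hypothetical—the post doesn’t give the LSAY variable names—and the paper’s methods section has the actual details):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; the LSAY variables are not named in the post.
df = pd.read_csv("lsay_panel.csv")

# Random intercepts for schools (the cluster-sampling units); 2011 trust
# regressed on science interest and comprehension measured at ages 12-14,
# their interactions with ideology, and demographic controls.
model = smf.mixedlm(
    "trust_2011 ~ interest_1987 * ideology + knowledge_1987 * ideology"
    " + quant_1987 * ideology + race + sex",
    data=df,
    groups=df["school_id"],
)
print(model.fit().summary())
```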

I measure Americans’ trust in scientists using self-reported measures of trust in information from four different groups: science professors, state environmental departments, NASA/NOAA, and the Intergovernmental Panel on Climate Change (IPCC). I also look at a combined index of all four.

I then measure science interest using respondents’ self-reported interest in “science issues.” I also operationalize science comprehension using respondents’ scores on standardized science knowledge and quantitative ability tests.

The results suggest that self-reported science interest at young ages is associated with trust in climate scientists about two decades later (see the figure below). On average, science interest in young adulthood is associated with about a 6% increase in trust in climate scientists. Young adults’ science knowledge and quantitative skills, on the other hand, bear little association with trust in climate scientists measured years later. 

The effects of science interest in young adulthood hold when levels of science interest measured in adulthood are factored into the model. I find that science interest measured in young adulthood explains more than a third (36%) of the variable’s cumulative effect on trust in climate scientists.

Critically, and perhaps of most interest to readers of this blog, I find that the effects of interest are not conditioned by political ideology. Interacting science interest with political ideology, I find that young adults who are highly interested in science are more trusting of climate scientists – irrespective of their ideological allegiances.

In contrast, the effect of science comprehension in young adulthood on trust in climate scientists is significantly stronger for ideological liberals. This was true in nearly every case, for both science knowledge and quantitative skills. The lone exception is that the interaction between quantitative skills and ideology fell just short of one-tailed significance in the NASA/NOAA model (p = 0.13), and two-tailed significance in the IPCC model (p = 0.06).

As I discuss in the paper, these results suggest an exciting path forward for rekindling public trust in climate scientists. Efforts to boost scientific interest in young adulthood may have lasting effects on trust, decades later.

What these efforts might look like, of course, is an open question. Board and video games aimed at engaging young audiences could potentially be effective. A key challenge, however, will be to figure out how to use these tools to engage young adult audiences that are not already highly interested in scientific topics. 

I also think that this research underscores the usefulness of longitudinal approaches to studying Americans’ attitudes toward science. Whether or not these dynamics hold for Millennials and Generation Z (who tend to be more accepting of scientific consensus on climate change than older generations) is an interesting question, and one future longitudinal research should attempt to answer.

 

Sunday, Apr 29, 2018

Weekend update: Précis for "Are smart people ruining democracy? What about curious ones?"