
Saturday
Dec 15, 2018

Weekend update: TEDx talk restored to YouTube

Apparently the original posting of this (staggeringly brief) talk suffered from imperfect audio (I never listened to it, so I can't say first-hand whether it was bad). So here is the new & improved version (which unfortunately does not share the URL of the old, widely circulated & downloaded one).

 

 

Tuesday
Dec 4, 2018

More *doing* science communication on science of science communication

 

Read the essay here.


Monday
Dec 3, 2018

TEDx in Vienna

Speaks for itself, I guess.

Thursday
Nov 29, 2018

It's that time again: Science of Science Communication course in Spring Term

This is what's on deck for spring semester:

PSYC 601b, The Science of Science Communication, Dan Kahan

The simple dissemination of valid scientific knowledge does not guarantee it will be recognized by non-experts to whom it is of consequence. The science of science communication is an emerging, multidisciplinary field that investigates the processes that enable ordinary citizens to form beliefs consistent with the best available scientific evidence, the conditions that impede the formation of such beliefs, and the strategies that can be employed to avoid or ameliorate such conditions. This seminar surveys, and makes a modest attempt to systematize, the growing body of work in this area. Special attention is paid to identifying the distinctive communication dynamics of the diverse contexts in which non-experts engage scientific information, including electoral politics, governmental policy making, and personal health decision making.

This is from the more in-depth description of the course that accompanies the course materials:

The most effective way to communicate the nature of this course is to identify its motivation. We live in a place and at a time in which we have ready access to information—scientific information—of unprecedented value to our individual and collective welfare. But the proportion of this information that is effectively used—by individuals and by society—is shockingly small. The evidence for this conclusion is reflected in the manifestly awful decisions people make, and the outcomes they suffer as a result, in their personal health and financial planning. It is reflected, too, not only in the failure of governmental institutions to utilize the best available scientific evidence that bears on the safety, security, and prosperity of their members, but also in the inability of citizens and their representatives even to agree on what that evidence is or what it signifies for the policy tradeoffs that acting on it necessarily entails.

This course is about remedying this state of affairs. Its premise is that the effective transmission of consequential scientific knowledge to deliberating individuals and groups is itself a matter that admits of, and indeed demands, scientific study. The use of empirical methods is necessary to generate an understanding of the social and psychological dynamics that govern how people (members of the public, but experts too) come to know what is known to science. Such methods are also necessary to comprehend the social and political dynamics that determine whether the best evidence we have on how to communicate science becomes integrated into how we do science and how we make decisions, individual and collective, that are or should be informed by science.

Likely you get this already: but this course is not simply about how scientists can avoid speaking in jargony language when addressing the public or how journalists can communicate technical matters in comprehensible ways without mangling the facts. Those are only two of many “science communication” problems, and as important as they are, they are likely not the ones in most urgent need of study (I myself think science journalists have their craft well in hand, but we’ll get to this in time). Indeed, in addition to dispelling (assaulting) the fallacy that science communication is not a matter that requires its own science, this course will self-consciously attack the notion that the sort of scientific insight necessary to guide science communication is unitary, or uniform across contexts—as if the same techniques that might help a modestly numerate individual understand the probabilistic elements of a decision to undergo a risky medical procedure were exactly the same ones needed to dispel polarization over climate science! We will try to individuate the separate domains in which a science of science communication is needed, and take stock of what is known, and what isn’t but needs to be, in each.

The primary aim of the course comprises these matters; a secondary aim is to acquire a facility with the empirical methods on which the science of science communication depends. You will not have to do empirical analyses of any particular sort in this class. But you will have to make sense of many kinds. No matter what your primary area of study is—even if it is one that doesn’t involve empirical methods—you can do this. If you don’t yet understand that, then perhaps that is the most important thing you will learn in the course. Accordingly, while we will not approach study of empirical methods in a methodical way, we will always engage critically the sorts of methods that are being used in the studies we examine, and I from time to time will supplement readings with more general ones relating to methods. Mainly, though, I will try to enable you to see (by seeing yourself and others doing it) that apprehending the significance of empirical work depends on recognizing when and how inferences can be drawn from observation: if you know that, you can learn whatever more is necessary to appreciate how particular empirical methods contribute to insight; if you don’t know that, nothing you understand about methods will furnish you with reliable guidance (just watch how much foolishness empirical methods separated from reflective, grounded inference can involve).

If so moved, you can find materials from previous years' versions of this seminar here.

 

Wednesday
Nov 21, 2018

An adventure in science communication: frequentist vs. Bayes hypothesis testing

A smart person asked me to explain to her the basic difference between frequentist and Bayesian statistical methods for hypothesis testing.  Grabbing the nearest envelope, I jotted these two diagrams on the back of it:

 

Frequentist: reject the null, p < 0.05; Bayesian: H1 (2.0) is 5x more consistent with the observed effect (1.5) than is H0 (0.0).

On the left, a frequentist analysis assesses the probability of observing an effect as big as or bigger than the experimental one relative to a hypothesized “null effect.” The “null hypothesis” is represented by a simple point estimate of 0, and the observed effect by the mean of a normal (or other appropriate) distribution.

In contrast, a Bayesian analysis (on the right) tests the relative consistency of the observed effect with two or more hypotheses. Those hypotheses, not the observed effect, are conceptualized as ranges of values arrayed in relation to their probability in distributions that account for measurement error and any other sort of uncertainty a researcher might have. The relative probability of the observed effect under each hypothesis can then be determined by examining where that outcome would fall on the hypotheses’ respective probability distributions.
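To make the contrast concrete, here is a back-of-the-envelope calculation in Python using the numbers from the diagrams (observed effect 1.5, H0 at 0, H1 at 2.0). The standard error of 0.75 is an assumed value, and both hypotheses are treated as simple points for brevity, whereas the diagrams array them as full distributions:

```python
import math

# Numbers from the diagrams; the standard error (0.75) is an assumed value.
obs, h0, h1, se = 1.5, 0.0, 2.0, 0.75

def pdf(x, mu, sigma):
    """Normal probability density at x for mean mu and s.d. sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Frequentist: two-tailed p-value -- the probability of observing an effect at
# least this far from the null if the null hypothesis were true.
z = (obs - h0) / se
p = math.erfc(abs(z) / math.sqrt(2))  # equals 2 * P(Z > |z|) for standard normal Z

# Bayesian: likelihood ratio -- how much more probable the observed effect is
# under H1 than under H0 (a Bayes factor for two point hypotheses).
bf = pdf(obs, h1, se) / pdf(obs, h0, se)

print(f"p = {p:.3f}, likelihood ratio = {bf:.1f}")
```

Both computations start from the same observed effect; they differ in the question asked: how surprising is the effect under the null alone (frequentist), versus which hypothesis the effect fits better (Bayesian).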

I left out why I like the latter better. I was after as neutral & accessible an explanation as possible.

Did I succeed? Can you do better?

Wednesday
Nov 14, 2018

Science curiosity research program

From something I'm working on. More anon. . . .

The Science Curiosity Research Program

We propose a program for the study of science curiosity as a civic virtue in a polarized society.  

1. It has been assumed (very reasonably) for many years that enlightened self-government demands a science-literate citizenry. Perversely, however, recent research has shown that all manner of reasoning proficiency—from cognitive reflection to numeracy, from actively open-minded thinking to science literacy—magnifies political polarization on policy-relevant science.

2. The one science-comprehension-related disposition that defies this pattern is science curiosity. In our research, we define science curiosity as the motivation to seek out and consume scientific information for personal pleasure. The Cultural Cognition Project Science Curiosity Scale (“SCS”) enables the precise measurement of this disposition in members of the general public.

Developed originally to promote the study of public engagement with science documentaries, SCS has also been shown to mitigate politically motivated reasoning. Politically motivated reasoning consists in the disposition to credit or dismiss scientific evidence in patterns that reflect and reinforce individuals’ membership in identity-defining groups. It is the psychological mechanism that underwrites persistent political controversy over climate change, handgun ownership, the HPV vaccine, nuclear waste disposal, and a host of other controversial issues.

Individuals who score high on SCS, however, display a remarkable degree of resistance to this dynamic.  Not only are they less polarized than other citizens with comparable political predispositions. They also are demonstrably more willing to search out and consume scientific evidence that runs contrary to their political predispositions.

The reason why is relatively straightforward.  Politically motivated reasoning generates a dismissive, identity-protective state of mind when individuals are confronted with scientific evidence that appears to undermine beliefs associated with their group identities.  In contrast, when one is curious, one has an appetite to learn something surprising and unanticipated—a state of mind diametrically opposed to the identity-protective impulses that make up politically motivated reasoning.

These features make science curiosity a primary virtue of democratic citizenship. To the extent that it can be cultivated and deployed for science communication, science curiosity has the power to quiet the impulses that deform human reason and that divert dispositions of scientific reasoning generally from their normal function of helping democratic citizens to recognize the valid policy-relevant science.

3.  Perfecting the techniques for cultivating and deploying science curiosity is the central aim of our proposed research program.  Certain of the projects we envision aim to instill greater science curiosity in primary and secondary school students as well as adults.  But still others seek to harness and leverage the science curiosity that already exists in democratic citizens.  Specifically, we propose to use SCS to identify the sorts of communications that arouse curiosity not only in the individuals who already display the most of this important disposition but also in those who don’t—so that when they are furnished evidence that challenges their existing beliefs, they will react not with defensive resistance but with the open-minded desire to know what science knows.

Monday
Nov 12, 2018

Guest post: Some weird things in measuring belief in human-caused climate change

From an honest-to-god real expert--a guest post by Matt Motta, a postdoctoral fellow associated with the Cultural Cognition Project and the Annenberg Public Policy Center. Matt discusses his recent paper, An Experimental Examination of Measurement Disparities in Public Climate Change Beliefs.

 Do Americans Really Believe in Human-Caused Climate Change? 

Matt Motta (@matt_motta)

Do most Americans believe that climate change is caused by human activities? And what should we make of recent reports (e.g., Van Boven & Sherman 2018) suggesting that self-identified Republicans largely believe in climate change?

Surprisingly, given the impressive amount of public opinion research focused on assessing public attitudes about climate change (see Capstick et al., 2014 for an excellent review), the number of Americans (and especially Republicans) who believe that climate change is human caused is actually a source of popular and academic disagreement.

For example, scholars at the Pew Research Center have found that less than half of all Americans, and less than a quarter of Republicans, believe that climate change is caused by human activity (Funk & Kennedy 2016). In contrast, a team of academic researchers recently penned an op-ed in the New York Times (Van Boven & Sherman 2018; based on Van Boven, Ehret, & Sherman 2018) suggesting that most Americans, and even most Republicans, believe in climate change – including the possibility that it is human caused.

In a working paper, my coauthors (Daniel Chapman, Dominik Stecula, Kathryn Haglin and Dan Kahan) and I offer a novel framework for making sense of why researchers disagree about the number of Americans (and especially Republicans) who believe in human caused climate change. We argue that commonplace and seemingly minor decisions scholars make when asking the public questions about anthropogenic climate change can have a major impact on the proportion of the public who appears to believe in it.

Specifically, we focus on three common methodological choices researchers must make when asking these questions. First, scholars must decide whether they want to offer “discrete choice” or Likert style response options. Discrete choice responses force respondents to choose between alternative stances; e.g., whether climate change is human caused, or caused by natural factors. Likert-style response formats instead ask respondents to assess their levels of agreement or disagreement with a particular argument; e.g., whether one agrees or disagrees that climate change is human caused.

Likert-style response can be subject to “acquiescence bias,” which occurs when respondents simply agree with statements, potentially to avoid thinking carefully about the question being asked. Discrete choice response formats can reduce acquiescence bias, but allow for less granularity in expressing opinions about an issue. Whereas the Pew Study mentioned earlier made use of discrete style response options, the aforementioned op-ed made use of Likert style responses (and found comparatively higher levels of belief in anthropogenic climate change).

Second, researchers must choose whether or not to offer a hard or soft “don’t know” (DK) response option. Hard DK options expressly give respondents the opportunity to report that they do not know how they feel about a certain question. Soft DK responses, on the other hand, allow respondents to skip a question, but do not expressly advertise their ability to not answer it.

Hard DKs have the benefit of giving those who truly have no opinion about a particular prompt the opportunity to say so, rather than either guessing randomly or – especially with Likert-style questions – simply agreeing with the prompt. However, expressly offering a DK option risks that respondents will simply indicate that they “don’t know” rather than engage more effortfully with the survey. Again drawing on the two examples described earlier, the comparatively pessimistic Pew study offered respondents a hard DK, whereas the work summarized in the New York Times op-ed did not.

Third, researchers have the ability to offer text that provides basic background information about complex concepts, including (potentially) anthropogenic climate change. This approach has the benefit of making sure that respondents have a common level of understanding about an issue before answering questions about it. However, scholars must choose the words provided in these short “explainers” very carefully – as information presented there may influence how respondents interpret the question.

For example, the research summarized in the New York Times op-ed described climate change as being caused by “increasing concentrations of greenhouse gasses.” Although this text does not attribute greenhouse gas emissions to any particular human source, it is important to keep in mind that skeptics may see climate change as the result of factors having nothing to do with gas emissions (e.g., that the sun itself is responsible for increased temperatures). Consequently, this text could lead respondents toward providing an answer that better matches scientific consensus on anthropogenic climate change.

We test the impact of these three decisions on the measurement of anthropogenic climate change attitudes in a large, demographically diverse online survey of American adults (N = 7,019). Respondents were randomly assigned to answer one of eight questions about their belief in anthropogenic climate change, each varying one of the methodological decisions described above while holding all other factors constant.
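Because each of the three design choices is binary, fully crossing them yields the eight conditions. Here is a minimal sketch of the design; the mapping of condition numbers to cells is an assumption for illustration, beyond the fact that conditions 1–4 are discrete-choice and 5–8 are Likert:

```python
from itertools import product

# The three binary measurement decisions discussed above; fully crossing them
# yields 2 x 2 x 2 = 8 experimental conditions.
response_format = ["discrete choice", "Likert"]
dk_option = ["hard DK", "soft DK"]
explainer_text = ["no explainer", "explainer"]

conditions = list(product(response_format, dk_option, explainer_text))
for i, cells in enumerate(conditions, start=1):
    print(f"Condition {i}: " + ", ".join(cells))
```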

The results are summarized in the figure below. Hollow circles show the proportion of respondents in each condition who purport to believe in human-caused climate change, with 95% confidence intervals extending outward from each one. The left-hand pane plots these quantities for the full sample, and the right-hand pane does the same for just self-identified Republicans. The elements varied in each experimental condition are listed in the text just below the figure.

Generally, the results suggest that minor differences in how we ask questions about anthropogenic climate change can increase the number of Americans (especially Republicans) who appear to believe in it. For example, Likert-style response options (conditions 5–8) always produce higher estimates of the number of believers, among Americans generally and Republicans specifically, than discrete-choice questions (conditions 1–4).

At times, these differences are quite dramatic. For example, Condition 1 mimics the way Pew (i.e., Funk & Kennedy 2016) asks questions about anthropogenic climate change, using discrete-choice questions that offer a hard DK option with no “explainer” text. This method suggests that 50% of Americans, and just 29% of Republicans, believe that climate change is caused by human activities.

Condition 8, on the other hand, mimics the method used in the research reported in the aforementioned op-ed, featuring Likert-style response options, text explaining that climate change is caused by the greenhouse effect, and no explicit DK option. In sharp contrast, this method finds that 71% of Americans and 61% of Republicans believe that climate change is human caused. The methods used in Condition 8 thus more than double the number of Republicans who appear to believe in human-caused climate change.

We think that these results offer readers a useful framework for making sense of public opinion about anthropogenic climate change. Our research urges readers to pay careful attention to the way in which public opinion researchers ask questions about anthropogenic climate change, and to consider how those decisions might increase (or decrease) the number of Americans who appear to believe in it. Of course, we do not propose a single measurement strategy as a “gold standard” for assessing opinion about anthropogenic climate change. Instead, we hope that these results can help readers be better consumers of public opinion research on climate change.

References

Capstick, S., Whitmarsh, L., Poortinga, W., Pidgeon, N. & Upham, P. International trends in public perceptions of climate change over the past quarter century. Wiley Interdisciplinary Reviews: Climate Change 6(1), 35-61 (2015).

Ehret, P.J., Van Boven, L. & Sherman, D.K. Partisan barriers to bipartisanship: Understanding climate policy polarization. Social Psychological and Personality Science, 1948550618758709 (2018).

Funk, C. & Kennedy, B. The politics of climate. Pew Research Center, http://www.pewinternet.org/2016/10/04/the-politics-of-climate/ (2016, Oct 4).

Van Boven, L. & Sherman, D. Actually, Republicans do believe in climate change. New York Times (2018, July 28).

Van Boven, L., Ehret, P.J. & Sherman, D.K. Psychological barriers to bipartisan public support for climate policy. Perspectives on Psychological Science 13(4), 492-507 (2018).

 

Thursday
Nov 8, 2018

Science literacy, science curiosity, and education

A science-curious commenter asked me what the relationship was between educational attainment and scores on the Ordinary Science Intelligence assessment (OSI) and on the Science Curiosity Scale (SCS), respectively.

I tried to entice him or her to make a prediction, so that we could have a proper WSMD? JA!, but he or she then fell silent.  I had the data ready to report, though, and figured they were interesting enough to share with the site's 12.3 billion readers (yes, we’re down 1.7 billion; suspiciously, subscriptions to the Gelman blog have increased by that amount).

Matching the pattern observed in relation to other demographic characteristics, the science-curiosity gap between individuals of relatively low and relatively high education levels is quite modest in comparison to the gap between these respective groups' OSI scores. (Consider, too, how much more informative, in a practical sense, the overlapping PDDs are compared to the regression-line plots.)

More evidence, then, that the social and economic conditions that generate inequality in science comprehension pose a much smaller barrier to being the sort of person who is awed by the insights of scientific inquiry. 

I think that’s pretty cool.

Wednesday
Nov 7, 2018

Some (very compact) reflections on the science communication environment; on the pollution of it; and on the need for self-conscious, evidence-informed protection of it

My answer to two questions--what sorts of emerging technologies need science communication attention, and what form-- in preparation for an upcoming roundtable discussion.

            0.  The “science communication environment” (SCE) comprises the sum total of institutions, processes, and norms that connect public decisionmaking with the best available scientific evidence. Conditions that disrupt these connections can be viewed as forms of SCE pollution. One particularly toxic form of such pollution consists in social meanings that fuse positions on science-informed issues with citizens’ cultural identities. This dynamic is at the root of polarization over climate change, nuclear power, and other issues (Jamieson, Kahan & Scheufele 2017).

            1.  The science of science communication supplies methods for predicting which new forms of decision-relevant science are vulnerable to this pathology (Kahan 2015). Genome editing, geoengineering, and AI all merit investigation because of their affinity with existing technologies that generate polarization.

            2. The U.S. is hobbled by the absence of any agency charged with protecting SCE. The resulting void leaves the fate of new forms of decision-relevant science vulnerable to chance and strategic behavior. The consequences of such neglect are illustrated by the career of the HPV vaccine (Kahan 2013). Just as OMB now screens all administrative actions for costs and benefits, some agency could evaluate the SCE impact of such actions.

References

Jamieson, K.H., Kahan, D.M. & Scheufele, D.A. eds.  Oxford Handbook of the Science of Science Communication (Oxford Univ. Press, New York, 2017).

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm., 14, 1-12 (2015).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Tuesday
Nov 6, 2018

new working paper: How to inflate/deflate measurement of Republican belief in human-caused climate change

This paper was written more or less in response to Van Boven, Ehret & Sherman (2018), who (in an academic paper and in a companion New York Times op-ed) reported the finding that "most Republicans believe in climate change," including human-caused climate change. For me, the bottom line is that scholars should be careful not to mistake survey artifacts for shifts in public opinion.

Reference

Van Boven, L., Ehret, P.J. & Sherman, D.K. Psychological Barriers to Bipartisan Public Support for Climate Policy. Perspectives on Psychological Science 13, 492-507 (2018).

Sunday
Nov 4, 2018

Weekend update--Cultural cognition dictionary/glossary whatever

If you are interested in some sight-site-seeing, check out the CC dictionary/glossary or whatever. It now has over three dozen entries.

 

Thursday
Nov 1, 2018

Science literacy vs. Science Curiosity

The social, cultural, and economic influences that generate inequalities in science comprehension have considerably less impact on science curiosity. 

That's how I interpret these data:

Science curiosity is a robust, democratic sensibility.

(Science comprehension is measured here by the Ordinary Science Intelligence scale, and science curiosity by the Science Curiosity Scale.)

Thursday
Oct 25, 2018

Who "falls for" fake news? Apparently no one.

A few people have asked me what I think of Pennycook, G. & Rand, D.G, “Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning,” Cognition  (2018), https://doi.org/10.1016/j.cognition.2018.06.011.

In general, I like it.  The topic is important, and the claims and analyses interesting.

But here are a couple of problems.

First, the sample is not valid. 

The study was administered to MTurk workers, who for well-known reasons are not suitable for studies of the interaction of political identity and information processing (e.g., Krupnikov & Levine 2014).

But the real problem with the sample is something even more fundamental: the subjects in the study do not represent the individuals whose behavior the paper purports to be modeling.

Exposure to “fake news” is not something that occurred with equal probability to everyone in the general population, or even to everyone on Facebook or Twitter. Indeed, it was concentrated in a relatively small group of highly conservative individuals (Guess, Nyhan & Reifler 2018).

If one wants to draw inferences, then, about “who falls for fake news” in the real world, one needs to sample from that segment of the population. Its members necessarily share some distinctive disposition to consume an unusual form of political communication (or miscommunication).  It is conceivable that motivated reasoning figures in the propensity of this class’s members to “fall for” fake news even if it doesn’t in a convenience sample whose members have been recruited without regard for this distinctive disposition.

Second, the data do not support the key inference that P&R draw.

P&R conclude that people who score low on a variant of the Cognitive Reflection Test were more likely to “fall for” fake news.  But in fact, their own evidence shows that no one was falling for fake news:

 

As this Figure demonstrates, the difference between subjects scoring low on CRT ("intuitive") and those scoring high ("deliberative") related only to the reported intensity with which subjects of those types rated fake news as lacking accuracy.

This underscores the lesson that a “significant” correlation can be insufficient to justify an inference when the variance explained occurs over a range inconsistent with the study hypothesis (Dixon & Jones 2015).
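The inferential point can be illustrated with simulated data. The numbers below are hypothetical, not P&R's: on a 1-to-4 accuracy scale with a midpoint of 2.5, suppose low-CRT subjects rate fake headlines around 2.0 and high-CRT subjects around 1.5. The CRT-related gap is statistically reliable, yet both groups judge the headlines inaccurate:

```python
import random
import statistics

random.seed(1)

def ratings(mean, n=500):
    # Simulated accuracy ratings on a 1-4 scale, clipped to the endpoints.
    return [min(4.0, max(1.0, random.gauss(mean, 0.5))) for _ in range(n)]

low_crt = ratings(2.0)    # hypothetical "intuitive" (low-CRT) subjects
high_crt = ratings(1.5)   # hypothetical "deliberative" (high-CRT) subjects

gap = statistics.mean(low_crt) - statistics.mean(high_crt)
print(f"CRT-related gap in mean accuracy ratings: {gap:.2f}")

# The gap is real, but neither group's mean reaches the 2.5 scale midpoint,
# so neither group can be said to be "falling for" the fake news.
print(statistics.mean(low_crt) < 2.5 and statistics.mean(high_crt) < 2.5)
```

A difference this size would easily reach conventional significance at these sample sizes, yet it says nothing about whether anyone actually credits the headlines as accurate.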

Refs

Dixon, R.M. & Jones, J.A. Conspiracist Ideation as a Predictor of Climate-Science Rejection: An Alternative Analysis. Psychol Sci 26, 664-666 (2015).

Guess, A., Nyhan B. & Reifler J. Selective Exposure to Misinformation: Evidence from the consumption of fake news during the 2016 U.S. presidential campaign. Working paper (2018), at http://www.dartmouth.edu/~nyhan/fake-news-2016.pdf.

Krupnikov, Y. & Levine, A.S. Cross-Sample Comparisons and External Validity. Journal of Experimental Political Science 1, 59-80 (2014).

 

Wednesday
Oct 17, 2018

Where am I (or will be soon)? Vienna . . . 

Will be talking (for 15 mins; that's less than a full sentence for me) about the uplifting spectacle of motivated numeracy.

Tickets are sold out, but it should still be possible to watch a live-stream broadcast from the comfort of one's home.

Tuesday
Oct 9, 2018

Kathleen Hall Jamieson double feature! Plus Carl Zimmer & Sarah Smaga

Kathleen Hall Jamieson, the director of the Annenberg Public Policy Center at the University of Pennsylvania, will be doing two events here at Yale tomorrow. The first will be a public lecture on how Russian interference affected the result of the 2016 presidential election, the subject of her latest book; the second a panel discussion with Carl Zimmer, science writer for the New York Times, and Sarah Smaga, a Ph.D. candidate in Molecular Biophysics and Biochemistry and former president of the Yale Science Diplomats.

If you are in town or can get here from where you are, definitely attend!

Event # 1

 

Event # 2

 

Monday
Oct 8, 2018

Piling on: still more studies rejecting "gateway belief" model

 

Thursday
Sep 20, 2018

Help wanted--to identify the cognitive bias at work in people's preferences for where plastic should be extracted from the ocean

There’s an interesting puzzle being debated over on the blog of former Freud expert & current stats legend Andrew Gelman. The question (posed by guest blogger Phil) is why people who are concerned about plastic deposits in the ocean seem to prefer removal schemes that operate remote from the source, notwithstanding the greater efficiency of source-based removal.

Presumably one cognitive bias or another is at work—but what exactly is the nature of this mental miscue?

It struck me that the 13 billion readers of this blog would be well situated to help answer this question.

So have at it.

But note this one proviso: in addition to identifying the responsible bias and explaining how it works, suggest (in broad outline form) an empirical test one could perform to verify the posited account.

The problem of fish choking on plastic in the ocean is bad enough. We don’t need to make things worse by drowning reflective people in a sea of just-so stories.

Tuesday
Sep 18, 2018

Civic-epistemic virtues in the Risk Regulation Republic

From a recent lecture, this one at Texas Tech, in Lubbock Texas (Slides here):

My goal is to present evidence on the mental dispositions necessary for enlightened self-government in a risk-regulation republic.

By a “risk regulation republic,” I mean a regime that is charged with using the best scientific evidence at its disposal to protect its citizens from all manner of hazards—from environmental ones, like climate change; to public health ones, like infection by the Zika virus; to social ones, like crime victimization or financial poverty.

Because the risk-regulation republic is democratic, its success in attaining these ends will depend in part on its citizens’ capacity to recognize such evidence. What kinds of mental dispositions—call them the civic epistemic virtues—does that capacity require?

For over two decades, the answer has been assumed to be one or another form of civic science literacy. As a theoretical construct, “civic science literacy” consists in knowledge of certain foundational scientific findings (e.g., human beings evolved from other species of animals; the Earth revolves around the Sun rather than vice versa), along with a set of critical reasoning skills that enable citizens to enlarge their stock of scientific knowledge and to bring it to bear on risk-regulation and other policy issues.

This position, I’ll argue, is incomplete.  Indeed, it is dangerously incomplete: for unless civic science literacy is accompanied by another science-reasoning disposition, the widespread attainment of the knowledge and reasoning skills that civic science literacy comprises can actually impede public engagement with the best available evidence—and deepen predictable, baleful forms of cultural polarization over what science knows. 

The additional disposition that's needed to orient civic science literacy is science curiosity.

The position that enlightened self-government requires science curiosity is definitely not new. Dewey saw science curiosity as an indispensable civic-epistemic virtue.  He was right, although not merely because curiosity motivates knowledge acquisition and activates information processing essential to its use—Dewey’s central points. 

What makes science curiosity a civic-epistemic virtue in the risk regulation republic is the role this disposition can play in quieting the defensive, identity-protective forms of cognition that turn science comprehension into a barrier rather than an entryway to public recognition of the best available evidence on societal risks.

Monday
Sep172018

Some reflections/admonitions on graphic reporting of data

A recent instructional lecture delivered at the Annenberg Public Policy Center. Slides here.

Boo!:


Yay!:

 ooooo!

ahhhhhh!


Thursday
Sep062018

Return of the chick sexers . . .

A repeat, but one that warrants repeating at this time of year . . . .


Okay, here’s a set of reflections that seem topical as another school year begins.

The reflections can be structured with reference to a question:

What’s the difference between a lawyer and a chick sexer?

It’s not easy, at first, to figure out what they have in common.  But once one does, the risk that one won’t see what distinguishes them is much bigger, in actuarial and consequential terms.

I tell people about the link between them all the time—and they chuckle.  But in fact, I spend hours and hours and hours per semester eviscerating comprehension of the critical distinction between them in people who are filled with immense intelligence and ambition, and who are destined to occupy positions of authority in our society.

That fucking scares me.

Anyway, the chick sexer is the honey badger of cognitive psychology: relentlessly fascinating, and adorable. But because cognitive psychology doesn’t have nearly as big a presence on Youtube as do amusing voice-overs of National Geographic wildlife videos, the chick sexer is a lot less famous. 

So likely you haven’t heard of him or her.

But in fact the chick sexer plays a vital role in the poultry industry. It’s his or her responsibility to separate the baby chicks, moments after birth, on the basis of gender.

The females are more valuable, at least from the point of view of the industry. They lay eggs.  They are also plumper and juicier, if one wants to eat them. Moreover, the stringy scrawny males, in addition to being not good for much, are ill-tempered & peck at the females, steal their food, & otherwise torment them.

So the poultry industry basically just gets rid of the males (or the vast majority of them; a few are kept on and lead a privileged existence) at the soonest opportunity—minutes after birth.

The little newborn hatchlings come flying (not literally; chickens can’t fly at any age) down a roomful of conveyor belts, hundreds per minute. Each belt is manned (personed) by a chick sexer, who deftly plucks (as in grabs; no feathers at this point) each chick off the belt, quickly turns him/her over, and in a split second determines the creature’s gender, tossing the males over his or her shoulder into a “disposal bin” and gently setting the females back down to proceed on their way.

They do this unerringly—or almost unerringly (99.99% accuracy or whatever).

Which is astonishing. Because there’s no discernible difference, or at least one that anyone can confidently articulate, in the relevant anatomical portions of the minutes-old chicks.

You can ask the chick sexer how he or she can tell the difference.  Many will tell you some story about how a bead of sweat forms involuntarily on the male chick beak, or how he tries to distract you by asking for the time of day or for a cigarette, or how the female will hold one’s gaze for a moment longer or whatever. 

This is all bull/chickenshit. Or technically speaking, “confabulation.”

Indeed, the more self-aware and honest members of the profession just shrug their shoulders when asked what it is that they are looking for when they turn the newborn chicks upside down & splay their little legs.

But while we don’t know what exactly chicksexers are seeing, we do know how they come to possess their proficiency in distinguishing male from female chicks: by being trained by a chick-sexing grandmaster.

For hours a day, for weeks on end, the grandmaster drills the aspiring chick sexers with slides—“male,” “female,” “male,” “male,” “female,” “male,” “female,” “female”—until they finally acquire the same power of discernment as the grandmaster, who likewise is unable to give a genuine account of what that skill consists in.

This is a true story (essentially).

But the perceptive feat that the chick sexer is performing isn’t particularly exotic.  In fact, it is ubiquitous.

What the chick sexer does to discern the gender of chicks is an instance of pattern recognition.

Pattern recognition is a cognitive operation in which we classify a phenomenon by rapidly appraising it in comparison to a large stock of prototypes acquired by experience.

The classification isn’t made via conscious deduction from a set of necessary and sufficient conditions but rather tacitly, via a form of perception that is calibrated to detect whether the object possesses a sufficient number of the prototypical attributes—as determined by a gestalt, “critical mass” intuition—to count as an instance of it.
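The flavor of this kind of classification can be conveyed in a minimal sketch: an item is labeled by its resemblance to stored prototypes, with a “critical mass” threshold standing in for the gestalt intuition, rather than by deduction from necessary and sufficient conditions. (The feature names and threshold below are purely hypothetical illustrations, not anything drawn from the actual chick-sexing craft.)

```python
# Toy prototype-based classifier: label an item by how many
# prototypical attributes it shares with each stored prototype.
# Features and threshold are invented for illustration only.

def similarity(item, prototype):
    """Count how many prototypical attributes the item shares."""
    return sum(1 for k, v in prototype.items() if item.get(k) == v)

def classify(item, prototypes, threshold=2):
    """Return the best-matching label, provided the match clears a
    'critical mass' of shared attributes; otherwise 'uncertain'."""
    best_label, best_score = None, -1
    for label, proto in prototypes.items():
        score = similarity(item, proto)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else "uncertain"

# Hypothetical prototypes -- real sexers cannot articulate theirs.
prototypes = {
    "male":   {"vent_shape": "pointed", "down_shade": "dark",  "size": "small"},
    "female": {"vent_shape": "rounded", "down_shade": "light", "size": "large"},
}

chick = {"vent_shape": "rounded", "down_shade": "light", "size": "small"}
print(classify(chick, prototypes))  # → female
```

The point of the sketch is the shape of the operation, not the features: no single attribute is necessary, and no rule is consciously consulted; the label falls out of an aggregate resemblance judgment.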

All manner of social competence—from recognizing faces to reading others’ emotions—depends on pattern recognition.

But so do many specialized ones. What distinguishes a chess grandmaster from a modestly skilled amateur player isn’t her capacity to conjure and evaluate a longer sequence of potential moves but rather her ability to recognize favorable board positions based on their affinity to a large stock of ones she has determined by experience to be advantageous.

Professional judgment, too, depends on pattern recognition.

For sure, being a good physician requires the capacity and willingness to engage in conscious and unbiased weighing of evidence diagnostic of medical conditions. But that’s not sufficient; unless the doctor includes only genuinely plausible illnesses in her set of maladies worthy of such investigation, the likelihood that she will either fail to test for the correct one or fail to identify it soon enough to intervene effectively will be too high.

Expert forensic auditors must master more than the technical details of accounting; they must acquire a properly calibrated capacity to recognize the pattern of financial irregularity that helps them to extract evidence of the same from mountains of business records.

The sort of professional judgment one needs to be a competent lawyer depends on a properly calibrated capacity for pattern recognition, too.

Indeed, this was the key insight of Karl Llewellyn.  The most brilliant member of the Legal Realist school, Llewellyn observed that legal reasoning couldn’t plausibly be reduced to deductive application of legal doctrines. Only rarely were outcomes uniquely determined by the relevant set of formal legal materials (statutes, precedents, legal maxims, and the like).

Nevertheless, judges and lawyers, he noted, rarely disagree on how particular cases should be resolved. How this could be fascinated him!

The solution he proposed was professional “situation sense”: a perceptive faculty, acquired by education and experience, that enabled lawyers to reliably appraise specific cases with reference to a stock of prototypical “situation types,” the proper resolution of which was governed by shared apprehensions of “correctness” instilled by the same means.

This feature of Llewellyn’s thought—the central feature of it—is weirdly overlooked by many scholars who characterize themselves as “realists” or “New Realists,” and who think that Llewellyn’s point was that because there’s no “determinacy” in “law,” judges must be deciding on the basis of “political” sensibilities of the conventional “left-right” sort, generating differences in outcome across judges of varying ideologies. 

It’s really hard to get Llewellyn more wrong than that!

Again, his project was to identify how there could be pervasive agreement among lawyers and judges on what the law is despite its logical indeterminacy. His answer was that members of the legal profession, despite heterogeneity in their “ideologies” politically understood, shared a form of professionalized perception—“situation sense”—that by and large generated convergence on appropriate outcomes the coherence of which would befuddle non-lawyers.

Llewellyn denied, too, that the content of situation sense admitted of full specification or articulation. The arguments that lawyers made and the justifications that judges give for their decisions, he suggested, were post hoc rationalizations.  

Does that mean that for Llewellyn, legal argument is purely confabulatory? There are places where he seems to advance that claim.

But the much more intriguing and I think ultimately true explanation he gives for the practice of reason-giving in lawyerly argument (or just for lawyerly argument) is its power to summon and focus “situation sense”: when effective, argument evokes both apprehension of the governing “situation” and motivation to reach a situation-appropriate conclusion.

Okay. Now what is analogous between lawyering and chick-sexing should be readily apparent.

The capacity of the lawyer (including the one who is a judge) to discern “correct” outcomes as she grasps and manipulates indeterminate legal materials is the professional equivalent of—and involves the exercise of the same cognitive operation as—the chicksexer’s power to apprehend the gender of the day-old chick from inspection of its fuzzy, formless genitalia.

In addition, the lawyer acquires her distinctive pattern-recognition capacity in the same way the chick sexer acquires his: through professional acculturation.

What I do as a trainer of lawyers is analogous to what the chicksexer grandmaster does.  “Proximate causation,” “unlawful restraint of trade,” “character propensity proof/permissible purpose,” “collateral (not penal!) law”—“male,” “male,” “female,” “male”: I bombard my students with a succession of slides that feature the situation types that stock the lawyer’s inventory, and inculcate in students the motivation to conform the results in particular cases to what those who practice law recognize—see, feel—to be the correct outcome.

It works. I see it happen all the time. 

It’s quite amusing. We admit students to law school in large part because of their demonstrated proficiency in solving the sorts of logic puzzles featured on the LSAT. Then we torment them, Alice-in-Wonderland fashion, by presenting to them as “paradigmatic” instances of legal reasoning outcomes that clearly can’t be accounted for by the contorted simulacra of syllogistic reasoning that judges offer to explain them. 

They stare uncomprehendingly at written opinions in which a structural ambiguity is resolved one way in one statute and the opposite way in another--by judges who purport to be following the “plain meaning” rule.

They throw their hands up in frustration when judges insist that their conclusions are logically dictated by patently question-begging standards  (“when the result was a reasonably foreseeable consequence of the defendant’s action. . .  “) that can be applied only on the basis of some unspecified, and apparently not even consciously discerned, extra-doctrinal determination of the appropriate level of generality at which to describe the relevant facts.

But the students do learn—that the life of the law is not “logic” (to paraphrase, Holmes, a proto-realist) but “experience,” or better, perception founded on the “experience” of becoming a lawyer, replete with all the sensibilities that being that sort of professional entails.

The learning is akin to the socialization process that the students all experienced as they negotiated the path from morally and emotionally incompetent child to competent adult. Those of us who are already socially competent model the right reactions for them in our own reactions to the materials—and in our reactions to the halting and imperfect attempts of the students to reproduce it on their own. 

“What,” I ask in mocking surprise, “you don’t get why these two cases reached different results in applying the ‘reasonable foreseeability’ standard of proximate causation?” 

Seriously, you don’t see why, for an arsonist to be held liable for causing the death of firefighters, it's enough to show that he could ‘reasonably foresee’ 'death by fire,' whether or not he could foresee  ‘death by being trapped by fires travelling the particular one of 5x10^9 different paths the flames might have spread through a burning building'?! But why ‘death by explosion triggered by a spark emitted from a liquid nitrate stamping machine when knocked off its housing by a worker who passed out from an insulin shock’—and not simply 'death by explosion'—is what must be "foreseeable" to a manufacturer (one warned of explosion risk by a safety inspector) to be convicted for causing the death of employees killed when the manufacturer’s plant blew up? 

"Anybody care to tell Ms. Smith what the difference is?" I ask in exasperation.

Or “Really,” I ask in a calculated (or worse, in a wholly spontaneous, natural) display of astonishment,

you don’t see why someone's ignorance of what's on the ‘controlled substance’ list doesn’t furnish a "mistake of law" defense (in this case, to a prostitute who hid her amphetamines in tin foil wrap tucked in her underwear--is that where you keep your cold medicine or ibuprofen?! Ha ha ha ha ha!!), but why someone's ignorance of the types of "mortgage portfolio swaps" that count as loss-generating "realization events" under IRS regs (the sort of tax-avoidance contrivance many of you will be paid handsomely by corporate law firm clients to execute) does furnish one? Or why ignorance of the criminal prohibition on "financial structuring" (the sort of stratagem a normal person might resort to to hide assets from his spouse during a divorce proceeding) furnishes a defense as well?!

Here Mr. Jones: take my cellphone & call your mother to tell her there’s serious doubt about your becoming a lawyer. . . .

This is what I see, experience, do.  I see my students not so much “learning to think” like lawyers but just becoming them, and thus naturally seeing what lawyers see.

But of course I know (not as a lawyer, but as a thinking person) that I should trust how things look and feel to me only if corroborated by the sort of disciplined observation, reliable measurement, and valid causal inference distinctive of empirical investigation.

So, working with collaborators, I design a study to show that lawyers and judges are legal realists—not in the comic-book “politicians in robes” sense that some contemporary commentators have in mind but in the subtle, psychological one that Llewellyn actually espoused.

Examining a pair of genuinely ambiguous statutes, members of the public predictably conform their interpretation of them to outcomes that gratify their partisan cultural or political outlooks, polarizing in patterns the nature of which are dutifully obedient to experimental manipulation of factors extraneous to law but very relevant indeed to how people with those outlooks think about virtue and vice.

But not lawyers and judges: they converge on interpretations of these statutes, regardless of their own cultural outlooks and regardless of experimental manipulations that vary which outcome gratifies those outlooks.

They do that not because, they, unlike members of the public, have acquired some hyper-rational information-processing capacity that blocks out the impact of “motivated reasoning”: the lawyers and judges are just as divided as members of the public, on the basis of the same sort of selective crediting and discrediting of evidence, on issues like climate change, and legalization of marijuana and prostitution.

Rather the lawyers and judges converge because they have something else that members of the public don’t: Llewellyn’s situation sense—a professionalized form of perception, acquired through training and experience, that reliably fixes their attention on the features of the “situation” pertinent to its proper legal resolution and blocks out the distracting allure of features of it that might be pertinent to how a non-lawyer—i.e., a normal person, with one or another kind of “sense” reliably tuned to enabling them to be a good member of a cultural group on which their status depends . . . .

So, that’s what lawyers and chick sexers have in common: pattern recognition, situation sense, appropriately calibrated to doing what they do—or in a word professional judgment.

But now, can you see what the chick sexer and the lawyer don’t have in common?

Perhaps you don’t; because even in the course of this account, I feel myself having become an agent of the intoxicating, reason-bypassing process that imparting “situation sense” entails.

But you might well see it—b/c here all I’ve done is give you an account of what I do as opposed to actually doing it to you.

We know something important about the chick sexer’s judgment in addition to knowing that it is an instance of pattern recognition: namely, that it works.

The chick sexer has a mission in relation to a process aimed at achieving a particular end.  That end supplies a normative standard of correctness that we can use not only to test whether chick sexers, individually and collectively, agree in their classifications but also on whether they are classifying correctly.

Obviously, we’ll have to wait a bit, but if we collect rather than throw half of them away, we can simply observe what gender the baby chicks classified by the sexer as “male” and “female” grow up to be.

If we do that test, we’ll find out that the chick sexers are indeed doing a good job.

We don’t have that with lawyers’ or judges’ situation sense.  We just don’t.

We know they see the same thing; that they are, in the astonishing way that fascinated Llewellyn, converging in their apprehension of appropriate outcomes across cases that “lay persons” lack the power to classify correctly.

But we aren’t in a position to test whether they are seeing the right thing.

What is the goal of the process the lawyers and judges are involved in?  Do we even agree on that?

I think we do: assuring the just and fair application of law.

That’s a much more general standard, though, than “classifying the gender of chicks.”  There are alternative understandings of “just” and “fair” here.

Actually, though, this is still not the point at which I’m troubled.  Although for sure I think there is heterogeneity in our conceptions of the “goals” that the law aims at, I think they are all conceptions of a liberal political concept of “just” and “fair,” one that insists that the state assume a stance of neutrality with respect to the diverse understandings of the good life that freely reasoning individuals (or more accurately groups of individuals) will inevitably form.

But assuming that this concept, despite its plurality of conceptions, has normative purchase with respect to laws and applications of the same (I believe that; you might not, and that’s reasonable), we certainly don’t have a process akin to the one we use for chick sexers to determine whether lawyers and judges’ situation sense is genuinely calibrated to achieving it.

Or if anyone does have such a process, we certainly aren’t using it in the production of legal professionals.

To put it in terms used to appraise scientific methods, we know the professional judgment of the chick sexer is not only reliable—consistently attuned to whatever it is that appropriately trained members of their craft are unconsciously discerning—but also valid: that is, we know that the thing the chick sexers are seeing (or measuring, if we want to think of them as measuring instruments of a special kind) is the thing we want to ascertain (or measure), viz., the gender of the chicks.

In the production of lawyers, we have reliability only, without validity—or at least without validation.  We do successfully (remarkably!) train lawyers to make out the same patterns when they focus their gaze at the “mystifying cloud of words” that Cardozo identified the law as comprising. But we do nothing to assure that what they are discerning is the form of justice that the law is held forth as embodying.
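The reliability/validity distinction can be made concrete with a small, purely hypothetical simulation: two panels of “raters” each agree tightly among themselves (both are reliable), but one panel’s shared judgment is systematically displaced from the quantity of interest (it is unvalidated). All numbers below are invented for illustration.

```python
# Hypothetical illustration: reliability (raters agree with one
# another) is independent of validity (raters track the truth).
import random

random.seed(0)
true_values = [random.gauss(0.0, 1.0) for _ in range(1000)]

# Panel A ("chick sexers"): reliable AND valid --
# three raters' readings cluster tightly around the true value.
valid_readings = [[v + random.gauss(0.0, 0.05) for _ in range(3)]
                  for v in true_values]

# Panel B (the worry about unvalidated "situation sense"): reliable
# but shifted -- raters agree, around a systematically biased target.
biased_readings = [[v + 1.0 + random.gauss(0.0, 0.05) for _ in range(3)]
                   for v in true_values]

def spread(readings):
    """Average within-panel range: small spread = high reliability."""
    return sum(max(r) - min(r) for r in readings) / len(readings)

def error(readings, truth):
    """Average distance of the panel mean from truth = (in)validity."""
    return sum(abs(sum(r) / len(r) - t)
               for r, t in zip(readings, truth)) / len(truth)

print(round(spread(valid_readings), 2), round(error(valid_readings, true_values), 2))
print(round(spread(biased_readings), 2), round(error(biased_readings, true_values), 2))
```

Both panels show near-identical (small) spread, yet only the first stays near the truth; consistency alone tells us nothing about what is being measured.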

Observers fret—and scholars using empirical methods of questionable reliability and validity purport to demonstrate—that judges are mere “politicians in robes,” whose decisions reflect the happenstance of their partisan predilections.

That anxiety that judges will disagree based on their “ideologies” bothers me not a bit.

What does bother me—more than just a bit—is the prospect that the men and women I’m training to be lawyers and judges will, despite the diversity of their political and moral sensibilities, converge on outcomes that defy the basic liberal principles that we expect to animate our institutions.

The only thing that I can hope will stop that from happening is for me to tell them that this is how it works.  Because if it troubles me, I have every reason to think that they, as reflective decent people committed to respecting the freedom & reason of others, will find some of this troubling too.

Not so troubling that they can’t become good lawyers. 

But maybe troubling enough that they won't stop being reflective moral people in their careers as lawyers; troubling enough so that if they find themselves in a position to do so, they will enrich the stock of virtuous-lawyer prototypes that populate our situation sense by doing something that they, as reflective, moral people—“conservative” or “liberal”—recognize is essential to reconciling being a “good lawyer” with being a member of a profession essential to the good of a liberal democratic regime.

That can happen, too.