Do experts use cultural cognition?

In our studies, we examine how ordinary persons -- that is, non-experts -- form perceptions of risk & related facts. But I get asked all the time whether I think the same dynamics affect how experts form their perceptions. I dunno -- we haven't studied that.

But of course I have conjectures.

BTW, "conjecture" is a great word when used in the manner Popper had in mind: to describe a position for which one doesn't have the sort of direct evidence one would like and could get from a properly designed study, but which one believes in provisionally on the basis of evidence that supports related matters & subject to even better proof of a direct or indirect kind. Of course, every belief should be provisional & subject to more & better proof. But it organizes one's own thoughts & attention to be able to separate the beliefs one feels really do need to be shored up from ones that seem sufficiently grounded that one needn't spend lots of time on them. Also, if people know which of their beliefs to regard as conjectures & habituate themselves to acknowledge the status of them in discussion with others who do the same, then they can all speak more freely and expansively, in ways that might help them (maybe by creating excitement or motivation) to obtain better evidence, & without worry that they will mislead or confuse one another.

So -- is expert decisionmaking subject to cultural cognition? 

Yes. And No.

Yes, because to start, experts use processes akin to cultural cognition to reason about the matters on which they are experts. Those processes reflect sensitivity to cues that individuals use to orient themselves within groups they depend on for access to reliable information; they are built into the capacity to figure out whom to trust about what.  

What is different about experts and lay people in this regard -- what makes the former experts  -- is only the domain-specificity of the sensibilities that the expert has acquired in his or her area of expertise, which allow the expert to form an even more reliable apprehension of the content of shared knowledge within his or her group of experts.

The basis of this conjecture is an account of how professionalization works -- as a process that endows practitioners with bridges of meaning across which they transmit shared prototypes to one another that help them to recognize what is true, appropriate & so forth. My favorite account of this is Margolis's in Patterns, Thinking, and Cognition. Llewellyn called this kind of professional insight as enjoyed by lawyers & judges "situation sense."

Maybe, then, we should think of this as a kind of professional cultural cognition. Obviously, when experts use it, they are not likely to make mistakes or to fall into conflict. On the contrary, it is by virtue of being able to use this professional cultural cognition -- professional habits of mind, in Margolis's words -- that they are able reliably to converge on expert understanding.

Now a bit of No: Experts, when they are making expert judgments in this way, are not using cultural cognition of the sort that nonexpert lay people are using in our studies. Cultural cognition in this sense is a recognition capacity -- made up of prototypes and bridges of meaning -- that ordinary people who share a way of life use to access and transmit common knowledge. One of the things they use it for is to apprehend the state of expert knowledge in one or another domain; lay people have to use their "cultural situation sense" for that precisely b/c they don't have the experts' professional cultural cognition.

Still, laypersons' cultural situation sense doesn't usually lead to error or conflict either. Ordinary people are experts at figuring out who the experts are and what it is that they know; if ordinary people weren't good at that, they would lead miserable lives, as would the experts.

When lay people do end up in persistent disagreement with experts, though, the reason might well be incommensurabilities in their respective systems of cultural cognition. In that case, the two of them -- experts and lay people -- both lack access to the common bridges of meaning that would allow what experts or professionals see w/ their prototypes to assume a form recognizable in the public's eye as a marker of expert insight. This is another Margolis-based conjecture, one I take from his classic Dealing with Risk: Why the Public and Experts Disagree on Environmental Issues.

Lay people can also fall into conflict as a result of cultural cognition. This happens when the diverse groups that are the sources of cultural cognition assign antagonistic meanings (or prototypes) to matters that admit of expert investigation. When that happens, the sensibilities that ordinarily enable lay people to know whom to trust about what become unreliable; the signals they pick up about who the experts are & what they know are masked and distorted by a sort of interference. This sort of problem is the main thing that I understand our studies of cultural cognition to be about.

More generally, the science of science communication, of which the study of cultural cognition is just one part, refers to the self-conscious development of the specialized habits of mind -- shared prototypes and bridges of meaning-- that will enable expert knowledge of  lay-person/expert misunderstandings & public conflicts over expert knowledge. The kind of professional cultural cognition we want here will allow those who acquire it not only to understand why these pathologies occur, but also to identify what steps should be taken to treat them, and better yet prevent them from happening in the first place. 

Now some more Yes -- yes scientists do use cultural cognition of the same sort as lay people.

They obviously use it in all the domains in which they aren't experts.  What else could they possibly do in those situations? They might not appreciate that they are figuring out what's true by tuning in to the beliefs of those who share their values. Not only is that invisible to most of us but it is especially likely to evade the notice of those who are intimately familiar with the contribution that their distinctive professional habits of mind make to their powers of understanding in their own domain.

We should thus expect experts -- scientists and other professionals -- to be subject to error and conflict in the same way, to the same extent that lay people are when they use cultural cognition to participate in knowledge (including scientific knowledge) about which they are not themselves experts.  

The work of Rachlinski, Wistrich & Guthrie, e.g., suggests this: they find that judges show admirable resistance to familiar cognitive errors, but only when they are doing tasks that are akin to judging, which is to say, only when they are using their domain-specific situation sense for what it is meant for.

But Rachlinski, Wistrich & Guthrie also have shown that judges can be expected systematically to err in judging tasks, too, when something in their decisionmaking environment distorts or turns off their professional habits of mind.

So on that basis, I would conjecture that experts -- scientific & professional ones -- will sometimes err, and likely fall into conflict, in making judgments in their own domains when some influence interferes with their professional cultural cognition, & they lapse, no doubt unconsciously, into reliance on their nonexpert cultural cognition.

In that situation, too, we might see experts divided on cultural lines & about matters in their own fields. This is how I would explain work by Slovic & some of his collaborators (discussed, e.g., here) & by Silva & some of hers (e.g., here & here), on the power of differing worldviews and related values to explain some forms of expert disagreement. But it is notable that they always find that culture explains much less conflict among experts on matters on which they are experts than they & others have found in cases in which there is persistent public disagreement about policy-relevant science.

So these are my conjectures. Am open to others'. And am especially interested in evidence.



The Ideological Symmetry of Motivated Reasoning

On the heels of  the John Bullock article & his amplification of it below,  the ideological neutrality of motivated reasoning came up again in an informative exchange with Howie Lavine during my recent presentation at the University of Minnesota. So I've found myself continuing to ponder the matter.

In our work, we test the hypothesis that cultural cognition -- a species of motivated reasoning that reflects the impact of group values on perceptions of fact -- is responsible for conflicts over scientific evidence on issues like climate change, the HPV vaccine, & gun control (and for conflicts over non-scientific evidence on many legal issues, too). The hypotheses assume that those on both sides of such debates are being affected by cultural cognition, and our data seem to reflect that.

But at least some social scientists have been advancing the claim that motivated reasoning in politics is more characteristic of (or maybe even unique to) conservative ideology. Essentially, these researchers are reviving the "authoritarian personality" position associated with Adorno. The most prominent of these neo-Adorno-ists is John Jost (see here, here & here, e.g.).

I tend to doubt that motivated reasoning is ideologically lopsided. What's more, I tend to believe that even if the effects are not perfectly uniform across the ideological continuum (or cultural continua; we use two dimensions of value in our work as opposed to the single "liberal-conservative" one that Jost and others use), the impact of motivated reasoning is more than large enough at both ends to be a concern for all.

But I acknowledge the issue of "motivated reasoning asymmetry" is an open one, and agree it is worth investigating.

Obviously, the investigation should consist in empirical testing. But there must also be attention to theory, which is necessary to tell us what sort of evidence is relevant, and hence how tests should be constructed and interpreted.

To that end, I offer some thoughts on a couple of the theories that might result in contrary predictions on the asymmetry thesis & what they suggest about empirical testing of that claim.

As I read Jost and others, the asymmetry position grounds motivated reasoning in a general propensity (a personality trait, essentially) toward dogmatism that tends toward a conservative (or "authoritarian") political orientation. On this account, we shouldn't expect to see motivated reasoning among liberals, whose ideology is itself a reflection of their propensity toward open-mindedness.

In contrast, the symmetry position (as reflected in cultural cognition and related theories) sees ideologically motivated reasoning as simply one species of identity-protective cognition. As developed by Sherman & Cohen, identity-protective cognition refers to the dismissive reaction that individuals form toward information that threatens the status of (or their connection to) a group that is important to their identity.  "Democrat" and "Republican" (along with hierarchy and egalitarianism, communitarianism and individualism, in cultural cognition) are both group affinities of that sort, and so both create vulnerability to motivated cognition.

Simple correlations of the extent of motivated reasoning with partisan identity or ideology (or cultural worldviews) furnish the most obvious way to test the asymmetry thesis but are unlikely to be conclusive because of their modest magnitudes and their variability across studies (such asymmetries in lab studies will also raise tougher-than-usual external validity questions). One nice thing about specifying the theories in this way is that we can expand the search for evidence that gives us more or less reason to accept or reject the asymmetry thesis.

E.g., if personal self-affirmation works to reduce resistance to ideologically noncongruent information among both liberals & conservatives, Republicans & Democrats--that, in my mind, counts as reason to be skeptical of asymmetry. The effect of self-affirmation is evidence that the source of the motivated reasoning at work is identity-protective cognition; there's no reason to expect self-affirmation to have any effect in mitigating motivated reasoning that arises from a generalized disposition toward dogmatism.  And, btw, we already know self-affirmation reduces the resistance of liberal Democrats as well as conservative Republicans to ideologically noncongruent information. See here & here, for example.

Also: If we see ideologically motivated reasoning operating through sensory perception, that's a reason to be skeptical of asymmetry too. The neo-Adorno-ist dogmatic personality theory addresses responses to arguments and evidence that bear argumentatively on political positions; it is about closed-mindedness, not sensory blindness. Identity-protective cognition doesn't make any claim that self-defensiveness will be limited only to assessments of arguments, and so can fit motivated reasoning effects in sight & other senses. Research using cultural cognition has shown that motivated reasoning can generate polarization among individuals of all values when they observe video of politically charged events (e.g., abortion-clinic vs. military-recruitment-center protests or high-speed police car chases).

Lastly, if we can parsimoniously assimilate motivated reasoning in politics to a larger theory of motivated reasoning, then we should prefer that account to one that posits a patchwork of local motivated reasoning dynamics of which ideologically motivated reasoning is one. Identity-protective cognition offers us that sort of parsimony: individuals are known to react defensively against information that challenges diverse group identities -- like being the fan of a particular sports team or a student of a particular university -- and not only against information that challenges partisan or ideological identities. The neo-Adorno-ist dogmatic personality theory doesn't explain that (although it does seem to me that Yankees fans are very closed-minded & authoritarian). Thus, more evidence, I think, for the symmetry position.

More but not conclusive evidence. For me, the question is, as I said, very much an open one.  Also, I don't mean to say that identity-protective cognition & the dogmatic-personality theories are the only ones to consider here.

The only point I am trying to make is that we are likely to get further in answering the question if we think about it in conjunction with theories of motivated cognition that offer competing predictions about symmetry and other things than if we just gather up studies & ponder correlations.

Or to put it more concisely, and on the basis of a (profound) truism from the philosophy of science: No theory, no meaningful observations.



Political psychology of misinformation at University of Minnesota

Did talk at this event, which was sponsored by the University of Minnesota political science department. Here are the slides (see below for a summary of what I was planning to & then did end up saying). My fellow panelists, Brendan Nyhan and Dhavan Shah, gave great talks, as did U of M's faculty commenter Paul Goren, who previewed some work he has been doing on the basic policy-choice competence of citizens who are low in political knowledge, as that concept is understood & measured in political science. It was clear that the political psychology program there, which consists of scholars from political science, communication & psychology, is radiating insight and passion.


Polluting the Science Communication Environment

I've been invited by the University of Minnesota political science department to make a presentation on the "political psychology of misinformation." Am mulling over what to say (have till 2:00 pm tomorrow, so no rush) & was thinking something along the lines of

  1. misinformation isn't really much of a problem unless antagonistic cultural meanings have become attached to an empirical claim about some fact that admits of scientific investigation;
  2. when such meanings have taken root, accurate information won't by itself do much good; and
  3. therefore the kind of misinformation to worry about is public advocacy that needlessly ties policy-relevant factual issues to antagonistic cultural meanings. 

Climate change is the obvious example of 3: hierarchical-individualist activists warn that concerns over it are a smoke screen to conceal a plot to overthrow capitalism, while egalitarian-communitarian ones proffer climate change as evidence of the destructiveness of capitalist greed that necessitates severe restrictions on technology & markets. The positions are reciprocal -- by supplying vivid examples of exactly the mindset the other fears, each one actually advances the other's cause at the same time that it advances its own.

But nanotechnology risk concern furnishes an even nicer example, I think. It is, of course, sensible to investigate whether nanotechnology is hazardous, but at this point at least there's no meaningful scientific evidence that it is. Yet that hasn't stopped some advocacy groups from noisily clanging the alarm bells. Indeed, one sponsored a contest for the "best nano-free zone" symbol, with the winner to be emblazoned on t-shirts, bumper stickers, etc. The contest drew some 482 entrants.

Eighty percent of the public hasn't even heard of nanotechnology yet. This is a great way to make sure that their first exposure connects nanotechnology up with politicized issues like climate change and nuclear power. This strategy for creating cultural polarization, CCP found in an experimental study, has an excellent chance to succeed. Good to think ahead, too, since eventually climate change, like nuclear, might lose its power to divide -- and then who would need the "public interest" groups dedicated to protecting us from the prospect that our cultural enemies will erect their worldview into a political orthodoxy?!

This might not be "misinformation" in the sense that the symposium sponsors have in mind -- but it is the sort of behavior that makes the public receptive to misinformation and impervious to sound science.  It is a toxin, really, in the communication environment that democracies depend on for reliable transmission of scientific knowledge to their citizens.


Democratic v. Republican Cognition

Had chance to look closely at the fascinating paper Elite Influence on Public Opinion in an Informed Electorate, American Political Science Review 105, 496-515 (2011) by my colleague John Bullock over in the Yale political science dep't.

The principal finding of the studies reported in the article is that members of the public who identify themselves as Democrats and Republicans (it is important to recognize that 30% or so do not; they are independents or others) are guided less by partisan cues (in the form of the positions of elites with recognizable partisan identities) than they are by policy substance when considering new policy proposals. This is contrary to the usual account of mass opinion found in political science.

But to me, at least, the most interesting finding was one relating to "need for cognition" (NFC), a measure of the individual disposition to engage in open-minded and effortful engagement with information. The idea that partisan cues guide opinion predicts that cues will be even more important for low NFC individuals, who tend to use heuristic reasoning (System 1 in Kahneman's terms), than they are for high NFC ones, who can be expected to use systematic reasoning (Kahneman's System 2). Bullock found this pattern in Democrats -- that is, the ones who were high in NFC paid even more attention to policy content and less to cues than Democrats who were low in NFC. But he found the opposite for Republicans: ones who were high in NFC paid more attention to cues and less to policy content. This was totally unexpected by Bullock, who, in line with his hypothesis that reliance on cues was overstated, expected NFC not to matter very much (and indeed it didn't at all -- but only if one ignored the interaction with party).

What sort of (admittedly post hoc) interpretation might we place on this finding? Some might see it as supporting the position that ideologically motivated reasoning is more characteristic of conservatives than liberals.  John Jost advances this argument in many papers, and   Chris Mooney apparently argues for it in his forthcoming book, which I'm eager to read.  Democrats, on this view, are thinking things through, Republicans reflexively adhering to ideological cues.

I don't find the "motivated reasoning asymmetry thesis" convincing. It seems to me that the balance of the evidence on politically motivated reasoning (including our own work on cultural cognition; see, e.g. "Saw a Protest") suggests that the tendency to fit perceptions of fact to one's ideological predispositions is pretty much uniform across the political spectrum (or in our work, cultural spectra). 

Bullock's finding -- as truly fascinating as it is -- is in fact ambiguous in this regard. It does seem that high NFC Democrats are paying more attention to information content than high NFC Republicans, who are focusing instead on cues. But it is question begging (or in the case of the asymmetry thesis, conclusion assuming) to think that Republicans are thus displaying motivated reasoning. Indeed, since the ones in question are high in NFC, why imagine that the Republican study subjects are processing information heuristically -- or unconsciously fitting their positions to cues or anything else -- when they go with the partisan elite's position? It is possible that the high NFC Democrats and the high NFC Republicans are both using systematic (conscious, high-effort) information processing -- but for different ends. Democrats might be interested in trying to figure out what information fits their values best, in which case those with high NFC would turn their attention to information content rather than being guided (consciously or unconsciously) by partisan cues. Republicans, in contrast, might place more value on taking the position that expresses their identity or advances their group's ends, in which case those high in NFC would consciously view the position of party elites as the more important piece of information.

It is true that Republicans would be "more partisan" on this account (one could also say Democrats are more "ideological" in some sense -- that is, more focused on advancing their values than on promoting the cause of their party). Maybe some would think that is an unattractive thing (I'm not sure; I think ideological zealotry can also be worrying in many contexts). 

But the point is that one could not, on this account, say Republicans are more prone to motivated reasoning.  We can't say because we don't know what they (or the Democrats) are trying to get out of the information here.

This point generalizes: it is impossible to say anything about the quality of cognition that individuals display unless one knows what they are trying to accomplish.  Too often in psychology, individuals who are using heuristic processing or even motivated systematic reasoning are viewed as irrational when in fact those forms of information processing are reliably advancing their interest in adopting stances that express their group identities. This is the main point of our paper on the "tragedy of the risk perceptions commons" and political conflict over climate change.

In any case, I hope Bullock is motivated (consciously or otherwise) to investigate further.



Talk at AGU 2011 Conference

Did my talk on “Cultural Cognition, Climate Change, and the Science Communication Problem”  at AGU annual meeting in SF today. Slides here.

The panel was lots of fun & the other panelists —    including USA Today’s excellent science reporter Dan Vergano, ocean scientist and marine sexologist Ellen Prager, and Molly Bentley of the Big Picture Science show — gave great talks & were really interesting to talk to. It was also an amazing honor to be involved in an AGU-sponsored event.


In Search of -- Forensic Science Literacy

I’ve been asked to be part of an NAS working group that will develop a proposal on how science should figure in the training of lawyers. I’m going to put together a memo that outlines my own initial views and distribute it shortly before the first meeting (in mid January). Below is a condensed account of the points and themes that my memo will stress. But my ideas are provisional & formative; indeed, I share them to invite your reactions, which I expect to stimulate and educate my own thinking.

I welcome feedback not only on the substance but also on what to include in an annotated bibliography, the germ of which appears after the narrative section. The bibliography is not meant as a syllabus for a course; some of the items would no doubt be assigned in the sort of “forensic science literacy” course I am describing, but mainly I am trying to compile sources that help make the spirit & philosophy of such an offering more vivid for memo readers.

Feel free to respond via email to me.

A. General Points

1. What the aim should be—and what it shouldn’t

The 2009 NAS Forensic Science Report did more than identify various forms of proof that lack scientific validity. It also demonstrated that the U.S. legal system is suffused with a basic incomprehension of the fundamentals of sound science. The prospect that this deficit would continue to make the law receptive to specious forms of scientific evidence and unreceptive to valid ones motivated the Report’s core recommendation that the Nation’s universities be made instruments for bringing the “culture of science to law.”

Spelling out what law schools should be expected to contribute to this project is, in my view, the proper focus of the working group’s attention. Lawyers don’t need to be trained to do science but they can and should be taught to recognize what constitutes sound forensic science and what doesn’t. A model course should instruct students in the general concepts and procedures that one must understand in order to perform this recognition task reliably, including principles of validity; elements of probability; and methods of inquiry (more on these below). The goal should be to create an intellectual foundation broad and stable enough to support understanding of any particular type of legally relevant scientific material.

The aim of the working group should not be to try to compile a list of important current or future types of forensic science (e.g., fingerprints or neuroscience) or specific areas of study relating to the forensic process (e.g., reliability of witness identification or the pervasiveness of cognitive biases). These are matters that one would certainly imagine as the focus of either a more comprehensive or more advanced course in law and science, and certainly the greater the number of offerings law schools provide on law and science, the better. But the most critical objective is to identify the core offering (or core curricular content) that every program must include.

By confining its focus to what is in fact essential, the proposal will underscore the theme that U.S. law schools must treat imparting forensic science literacy as an essential part of their curricula. Lawyers and judges who possess basic forensic science literacy can be expected to handle competently whatever particular forms of scientific proof they must deal with; ones who lack this capacity cannot be expected to handle any well. 

2. Principles of validity

Here I have in mind the concepts essential to systematic evaluation of the soundness of any general form of scientific inquiry or any particular application of it. These include validity proper: do the methods and design employed genuinely support the inferences that the researcher seeks to draw (internal validity), and from those can one draw reasonable inferences about the real-world phenomena that are being modeled by the study (external validity)? Are the measures employed reliable: do they generate consistent results, and do results agree across trials and researchers? The topic of causal inference is also usefully considered together with these issues, as is the concept of hypothesis testing.

The goal is to make students acquainted with the sorts of criteria that those who reliably distinguish sound from unsound science use for that purpose. I doubt that forensic science literacy as a reliable capacity to recognize sound and unsound forms of science as applied to law can be reduced to any sort of checklist of do’s & don’ts, rights & wrongs. But the elaborated development of a set of criteria for “valid” forensic science is likely a sensible way, pedagogically speaking, to conjure the sort of atmosphere in which such a capacity can be acquired and refined.
Such instruction can easily be illustrated with legal examples because these are exactly the sorts of considerations an incomprehension of which is reflected in the practice of forensic science that the 2009 Report criticizes.
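Reliability in particular lends itself to a concrete classroom sketch. The snippet below is a toy illustration only -- the examiner scores and the choice of a simple correlation coefficient as the agreement measure are hypothetical, invented to make the idea of "consistent results across trials and researchers" tangible:

```python
# Toy sketch of reliability as consistency: the correlation between
# two examiners independently scoring the same set of samples.
# (Scores are fabricated for illustration; any real reliability study
# would use a properly chosen agreement statistic.)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

examiner_a = [3, 5, 4, 7, 8, 6]   # hypothetical scores on six samples
examiner_b = [2, 5, 4, 8, 9, 6]   # second examiner, same six samples

print(round(pearson(examiner_a, examiner_b), 2))  # near 1.0: high agreement
```

A course exercise might then ask what a low coefficient would imply about the weight any individual examiner's conclusion deserves.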

3. Elements of probability

Concepts of probability animate the methods and testing strategies of science (and ultimately the philosophy of competing conceptions of scientific understanding, although that's a depth the forensic-science-literate lawyer needn't reach unless he or she is drawn there by curiosity). But, again, forensic-science-literate lawyers don't need to be trained to do sound science, only to recognize it. For this purpose, it is sufficient for them to attain, first, a conceptual grasp of the basic elements of probability (e.g., normal distributions and standard deviation; nonnormal distributions, such as "survival" curves; measurement error, sampling error, and estimation; p-values and confidence intervals; Bayes's Theorem and Bayesian inference) and, second, enough fluency with statistics to be able to read and comprehend the terms in which empirical results are ordinarily reported. They should also be made familiar with those characteristic shortcomings of unsound science that consist in an absence of genuine comprehension, as opposed to mechanical application, of statistical procedures. Once more, the law is filled with practical illustrations.
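The Bayesian-inference piece can be illustrated with a toy identification-evidence problem in the spirit of Finkelstein & Fairley (all figures below are invented for arithmetic convenience, not drawn from any actual case):

```python
# Toy Bayesian updating for identification evidence (hypothetical numbers).

def posterior(prior, p_match_given_source, p_match_given_not_source):
    """Update the probability that the defendant is the source of a
    sample, given that the forensic test reports a 'match'."""
    p_match = (prior * p_match_given_source
               + (1 - prior) * p_match_given_not_source)
    return prior * p_match_given_source / p_match

# Suppose the other evidence puts the prior at 1 in 1,000, the test
# always matches the true source, and the random-match probability
# (chance of matching an innocent person) is 1 in 10,000.
post = posterior(prior=0.001,
                 p_match_given_source=1.0,
                 p_match_given_not_source=0.0001)
print(round(post, 3))  # the 'match' raises 0.001 to roughly 0.909
```

The takeaway for the forensic-science-literate lawyer is that the "match" is one piece of evidence to be combined with the prior, not a freestanding probability of guilt; conflating the random-match probability with the probability of innocence is the familiar prosecutor's fallacy.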

4. Methods of inquiry

The idea here would be to make students familiar with the conventional sorts of methods that will inform the sorts of empirical work they are likely to encounter as lawyers. These include, at a high level of generality, observational vs. experimental approaches; but at a more particular level, it would be useful, too, to supply students with the materials necessary to enable informed and critical reflection on specific methods that bear on important, domain-specific matters of inquiry (e.g., clinical trials and "blinded" experimental methods; "laboratory" vs. "field" experimentation; multivariate regression vs. "matching" for observational studies). Such instruction can usefully be guided by the objective of making prospective lawyers familiar with the characteristic limitations of studies that employ one or another method -- limitations associated not just with the misapplication or inappropriate use of a method but also with the inherent imperfection of all testing strategies.
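The contrast between a naive observational comparison and a "matched" one can be sketched with a minimal fabricated dataset (the numbers below are invented purely for illustration; the true treatment effect is built in as +2, and the "stratum" variable plays the role of a confounder):

```python
# Toy illustration of confounding in observational data: a naive
# treated-vs-untreated comparison is biased because treatment is
# correlated with the confounder, while stratifying ("matching") on
# the confounder recovers the built-in effect of +2.

# Each record: (treated?, confounder stratum, outcome)
data = [
    (True,  "high", 12), (True,  "high", 12), (True,  "high", 12), (True, "low", 7),
    (False, "high", 10), (False, "low",  5),  (False, "low",  5),  (False, "low", 5),
]

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: treated units sit mostly in the "high" stratum, so
# the raw difference mixes the treatment effect with the confounder's.
naive = (mean([y for t, s, y in data if t])
         - mean([y for t, s, y in data if not t]))

# Matched comparison: take the difference within each stratum, then average.
matched = mean([
    mean([y for t, s, y in data if t and s == stratum])
    - mean([y for t, s, y in data if not t and s == stratum])
    for stratum in ("high", "low")
])

print(naive, matched)  # → 4.5 2.0
```

A lawyer who grasps why the naive figure of 4.5 overstates the built-in effect of 2 is well positioned to interrogate an expert about what an observational study did, or failed to do, about confounding.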

Of course lawyers should also be taught that precisely because all methods are imperfect, it is a mistake—a popular misconception that reflects science illiteracy— to equate scientific validity with the conclusive or final resolution of an issue, or even with proof that in itself satisfies any particular legal standard such as “beyond a reasonable doubt.” No more is or can be expected of forensic proof than that it supply a decisionmaker with more evidence for believing (or disbelieving) a proposition than she otherwise would have had (and of course forms that supply anything less than that should not be tolerated).

B. Annotated bibliography

Useful sources. Possible course materials but mainly sources that illustrate or reflect the points above

1. Principles of validity

sources, legal:

National Research Council (U.S.), Committee on Identifying the Needs of the Forensic Science Community; Committee on Science, Technology, and Law; and Committee on Applied and Theoretical Statistics. Strengthening Forensic Science in the United States: A Path Forward (National Academies Press, Washington, D.C., 2009) —relevant for all really

Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993) (suggesting that principles of validity should be normative for evaluation of admissibility of expert proof)

Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999) (just kidding!)

United States v. Llera Plaza, 179 F. Supp. 2d 492 (E.D. Pa., Jan. 7, 2002) (holding, on the basis of a brilliant application of the principles of validity, that fingerprint identification had not been validated and hence that fingerprint experts should not be permitted to give conclusions on “matching” prints)

United States v. Llera Plaza, 188 F. Supp. 2d 549, 576 (E.D. Pa., March 3, 2002) (oops, nevermind!)

sources, nonlegal:

Curious what people would recommend here. Is there something for understanding of basic concepts of scientific validity that is as accessible and compact as say Abelson’s Statistics as Principled Argument, below?

2. Elements of probability

sources, legal:

Finkelstein, M.O. and Fairley, W.B. A Bayesian Approach to Identification Evidence. Harvard Law Review 83, 489-517 (1970).

Finkelstein, M.O. Basic concepts of probability and statistics in the law, (Springer, New York, 2009).

Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309 (2011) (recognizing that significance testing for scientific studies is not a criterion of practical significance for causal inferences relating to law) 

sources, nonlegal:

Abelson, R.P. Statistics as principled argument, (L. Erlbaum Associates, Hillsdale, N.J., 1995).

Gigerenzer, G. Calculated Risks: How to Know When Numbers Deceive You (Simon and Schuster, New York, 2002).

Motulsky, H. Intuitive biostatistics : a nonmathematical guide to statistical thinking, (Oxford University Press, New York, 2010).

3. Methods of inquiry

sources, legal:

Fisher, F.M. Multiple Regression in Legal Proceedings. Colum. L. Rev. 80, 702-736 (1980).

sources, nonlegal:

Again, eager for suggestions here. There are lots of good “handbooks” for social science methods; but is there something that is more general, yet accessible and compact (again, compare Abelson)?


Do more educated people see more risk -- or less -- in climate change?

The answer is neither. Education level has a correlation pretty close to zero (r = -0.02, p = 0.11) with climate change risk perceptions.

I measured the association using the data from a nationally representative sample of approximately 1,500 Americans.
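For readers curious what that near-zero coefficient is: it’s a Pearson product-moment correlation. Here is a minimal sketch of how such a coefficient is computed, using made-up numbers rather than the actual survey data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up education levels (years) and risk scores (0-10) -- not the survey data.
educ = [12, 14, 16, 18]
risk = [5, 6, 6, 5]
print(round(pearson_r(educ, risk), 2))  # → 0.0
```

A coefficient near zero, as in this toy example, means education level tells you essentially nothing about where a respondent’s climate change risk perception will fall.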

The data were collected by the Cultural Cognition Project as part of an ongoing study of science literacy, numeracy, & risk perception. In results that we describe in a working paper, science literacy and numeracy also have very minimal impact on perceptions of climate change -- assessed independently of cultural worldviews. Once cultural worldviews are taken into account, the impact of science literacy & numeracy on climate change risk perceptions depends on people's cultural orientations: as they get more science literate & numerate, egalitarian communitarians see more risk, but hierarchical individualists see even less.

Or in other words, enhanced science literacy & numeracy are associated not with convergence on any particular view (supported by science or otherwise) but with greater cultural polarization.

Now education level, in contrast, is not associated with greater climate change polarization. If you want to fit your perceptions of risk to your values, you need to do more than go to college. You have to study really hard in math & science!

Actually, I'm sounding much more cynical here than I mean to. As we discuss in the paper, this pathology isn't intractable -- but if one doesn't even know that cultural polarization increases as science literacy does or why, then the problem is unlikely to go away. 


Market Consensus vs. Scientific Consensus

Some smart researcher should invent a market measure of belief in climate change

An index could be constructed that reflects things like investments in climate change adaptation, investments in new business opportunities created by a changing climate, and the offering of, and changes in the price of, insurance against adverse impacts.

As the price of the index rises (or falls!) it would be evidence of market consensus on climate change. Market pricing is relevant to trying to figure out lots of things, obviously (indeed, there is a cottage industry  in academia now to create prediction markets to compete with other types of predictive models). But the value of a market consensus measure here is cultural, too: for some citizens, market consensus will have a positive cultural resonance that scientific consensus (at least in this context) lacks. They could be expected, then, to give information from the market more engaged & open-minded attention.

People culturally disinclined to pay attention to markets as information might pay more attention to this one, too, and thus learn about the value of being more open-minded about information sources.

Last but not least, having a market measure of belief in climate change would be great for people trying to investigate dynamics of science communication -- for all the reasons I just gave.


Politics, Cognition & Pepper Spray

Does “pepper spray” really hurt? The answer probably depends on the relationship between the ideology of the person who was sprayed and the ideology of the person asking/answering the question.

There is an internet buzz emerging over the suggestion by Fox News commentators & their equivalents that "pepper" spray (it's orders of magnitude more irritating than habanero) isn't all that painful. The debate is politically polarized along predictable lines.

If the demonstrators who were sprayed had been protesting abortion rights outside an abortion clinic, would there be an ideological inversion of the perceptions of how much the spray stings?

The answer is that we are unlikely even to get to that point in the discussion before we are already tied in knots over other facts relating to the behavior of the protesters and the police.


My colleagues at the Cultural Cognition Project and I did a study in which we instructed subjects to view a videotape of a protest that (we said) was broken up by the police, and to determine whether the protestors had crossed the line between "speech" & intimidation. Our subjects said "yes" or "no" -- said they saw shoving and blocking, or only exhorting and persuading -- depending on the subjects' own values & what we told them the protest was about & where it was taking place: an anti-abortion demonstration outside an abortion clinic; or an anti-don't-ask-don't-tell protest outside a college recruitment center.

This is an example of “cultural cognition,” the tendency of people to conform their view of legally relevant facts to their group values. It’s a big problem for law — not just because these dynamics could affect juries & judges but also because they generate divisive conflict over the political neutrality of the law. I wrote a long law review article about this problem recently but I admit (as I did there) that I don’t think there is any easy solution to it.

But here is one thing concerned citizens might do to try to counteract this dynamic. When they see something unjust like the UC Davis incident, they can try to find out whether the same injustice has been perpetrated against others whose political views differ from their own -- & complain about both.

I looked for stories on abortion protesters being "pepper" sprayed. Found some, but not many. Either anti-abortion protesters don't get sprayed as often (in absolute terms) as Occupy Wall Street & anti-war protesters or the spraying doesn't get reported as often, perhaps because of the impact of cultural cognition in reporting of news (the facts that get reported are the ones we are predisposed to believe) . . . .

reposted from Balkinization


Two Communication Sciences for Plata's Republic

I gave a talk last night at Harvard Law School in connection with the Supreme Court Foreword. Below is an *outline* of the points I made. It is *not* the text of my talk; I spoke extemporaneously & merely used the outline as something to think about as I thought about what to say in the afternoon. (Maybe I'll try to remember what I said--it was not nearly so dense as this--& write it down, but I doubt it!) "Plata's Republic" is a play on the case Brown v. Plata, in which Scalia's dissent looks motivated reasoning in the eye & proclaims it the truth about the role of empirical claims in democratic policy deliberations (I think the most surprising thing I've ever seen in U.S. Reports).


            1.  My basic claim is that political conflict over the neutrality of the Supreme Court is generated by psychological dynamics unrelated to whether the Justices are genuinely partisan or whether genuine neutrality is possible. That is, such conflict can be fully explained even assuming that neutrality is meaningful and that the Court is an acceptably neutral decisionmaker. If such conflict is undesirable—as I submit it is—then we must perfect our understanding of the nature of these dynamics and of how to control them.

            2.  We can make sense of these dynamics by considering political conflict over policy-relevant science. Valid science does not publicly certify itself: because citizens are not in a position to reproduce scientific findings on their own, they must necessarily rely on social cues to certify for them what insights have been genuinely established through the use of valid scientific means. As a result of motivated reasoning, diverse groups of citizens will often construe those cues in opposing ways. When that happens, there will be political conflict over science notwithstanding its validity and notwithstanding the political impartiality and good faith of scientists. The existence of such conflict, moreover, will impede adoption of policies that effectively promote ends—including public health, national security, and economic prosperity—that diverse citizens agree are the appropriate objects of law.

             3.  The dynamics that generate political conflict over the Supreme Court’s constitutional decisionmaking are exactly the same ones that generate political conflict over policy-relevant science. Just as they cannot verify the validity of science on their own, so citizens cannot verify the neutrality of constitutional decisionmaking on their own; they must rely on social cues to certify the validity of such decisionmaking. In this context, too, motivated reasoning will often drive citizens of diverse values to diverge in their assessments of what those cues mean. Politically diverse citizens will disagree about the neutrality of constitutional decisionmaking in such circumstances despite the impartial application of valid doctrinal rules for enforcing the state’s obligation to be neutral in the manner that citizens of diverse values agree it should be. Such disagreement, moreover, will itself vitiate the value of the impartial application of those doctrines insofar as the benefit of neutrality consists largely in public confidence that the law is not imposing on them obligations incompatible with respect for the freedom of diverse citizens to pursue happiness on terms of their own choosing.

            4.  Both of these problems—political conflict over policy-relevant science and political conflict over constitutional law—reflect communication deficits. The impediment that political conflict poses to the adoption of informed policies is the price we pay for failing to recognize that doing valid science and communicating the validity of science are entirely different things. Likewise, some portion of the toll that political conflict over Supreme Court neutrality exacts from our experience of liberty—likely a very large portion of it—reflects our failure to recognize that doing neutral decisionmaking and communicating it are entirely different things too. How to shield public policy deliberations from the recurring influences—accidental and strategic—that trigger culturally motivated reasoning with respect to both policy-relevant science and constitutional neutrality are both matters that admit of and demand scientific investigation in their own rights.

            5.  Developing these sciences—fixing the communication failures of Plata’s Republic—is a mission that lawyers, and the institutions that train them, are ideally situated to address. It is a central part of the lawyer’s craft to match the content of information with the cultural cues (the social meanings) that enable its comprehension and that vouch for its credibility. Our experience with and sensitivity to this dimension of effective communication can thus help to remedy the sad and costly inattention to it reflected in public policy discourse. Moreover, because a training in law always has been and continues to be a form of preparation for the exercise of significant civic responsibility—we educate Presidents, after all, as well as Supreme Court Justices and Supreme Court advocates—it is perfectly natural that law schools should play a role in perfecting the science of science communication. It is all the more obvious that they are the natural location to address the judiciary’s own peculiar and ironic neglect of the fit between its professional conventions for doing neutral law and the cues that communicate constitutional neutrality. Not only are we ideally positioned to promote scientific inquiry into what effective neutrality communication demands; we are uniquely empowered and responsible for implementing what such investigation can teach us through the self-conscious and enlightened cultivation of our profession’s norms.


Cognitive Illiberalism & Neutral Principles of Constitutional Law

Harvard Foreword on motivated cognition & constitutional law is now published. The basic argument is that the same interplay of cognitive & political dynamics that polarizes Americans over climate change & other risk issues polarizes them over the neutrality of the Supreme Court. Judges need help from communication science just as much as scientists do (although at least some Justices bear more responsibility for the communication problem in law than any scientist I can think of does for the one in public deliberations over risk regulation). There are two very thoughtful replies, one by Mark Tushnet & the other by Suzanna Sherry. I'll have to think their arguments over & see whether & how my position changes.


Saw a Protest Slide Show

My slides from CELS 2011 presentation of Saw a Protest. Best ones are definitely ## 42-46!


Profiles of Risk Perception: Normality & Pathology

A friend asked me if I could supply him with graphic representations of data that illustrate the bimodal-- i.e., culturally polarized -- state of risk perceptions over climate change & contrast that distribution with a "normal" -- nonpolarized -- one on some other risk or issue. So I put together this:  

The bottom histogram is the bimodal cultural distribution for perceptions of climate change risks. The top histogram is the normal distribution for nanotechnology risk perceptions. I selected nanotechnology as the comparison case not only because perceptions of its risk are not polarized but also because there is nothing that guarantees that they will stay that way. Indeed, in our study Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009), we used nanotechnology risk perceptions to test the hypothesis that cultural predispositions can induce biased assimilation & polarization when people are exposed to information about a novel risk, one about which they had little if any prior knowledge and on which they were not polarized prior to information exposure:


In sum:

(1) the top histogram is picture of a (deliberatively) "healthy" distribution of risk perceptions;

(2) the bottom histogram is a picture of a "pathological" one; and

(3) among the goals of the science of science communication should be to learn to identify risk sources that are vulnerable to becoming infected with this pathology -- as nanotechnology evidently is -- and to perfect techniques for building up their resistance to it (techniques for treating pathologies are critical too -- but it is a lot harder, I think, to change polarizing meanings than it is to stifle their formation).
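The “healthy”/“pathological” contrast can be mimicked with simulated data: a single cluster of moderate responses versus a 50/50 mixture of two culturally opposed clusters. The parameters below are made up for illustration and have nothing to do with the actual survey measures:

```python
import random

random.seed(0)  # make the simulation reproducible

# "Healthy": one population clustered around a moderate risk score (0-10 scale).
healthy = [random.gauss(5.0, 1.5) for _ in range(1000)]

# "Pathological": a 50/50 mixture of two opposed clusters -- cultural polarization.
polarized = [random.gauss(2.0, 1.0) if random.random() < 0.5
             else random.gauss(8.0, 1.0) for _ in range(1000)]

def share_in_middle(sample, lo=4.0, hi=6.0):
    """Fraction of responses falling in the moderate middle of the scale."""
    return sum(lo <= x <= hi for x in sample) / len(sample)

# The polarized sample hollows out the middle, producing the bimodal histogram.
print(share_in_middle(healthy) > share_in_middle(polarized))  # → True
```

A histogram of the first sample is unimodal and centered; a histogram of the second shows the two humps and the hollowed-out middle that mark a culturally polarized risk issue.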




Science communication isn't soulcraft (or shouldn't be)

President Obama has recently been taking heat from environmentalists, most conspicuously Al Gore in a recent Rolling Stone essay, for not using his “bully pulpit” to force the public to attend to the threat posed by climate change. “By excising ‘climate change’ from his vocabulary,” said one critic, “the president has surrendered the power that only he has to explain challenging issues and advance complex solutions for our country.”

I definitely agree that President Obama should be taking the lead to improve public comprehension of climate change science. But I suspect I have a very different opinion on what the President should be trying to communicate, and also how and when. What the public needs, in my view, is not more information about climate change, but a new, more inclusive set of cultural idioms for discussing this issue.

My argument will be easier to understand if I start by describing a national public opinion study conducted by the Cultural Cognition Project, a research consortium of which I am a member. There were two principal findings.
  • First, public controversy is strongly associated with differences in cultural or group values. People who subscribe to an individualistic, pro-market worldview tend to see climate change risks as small, while people who subscribe to an egalitarian, wealth-redistributive worldview tend to see them as large.
  • Second, differences in science literacy (how knowledgeable people are about basic science) and numeracy (a measure of their facility with quantitative, technical reasoning) magnify cultural polarization. As egalitarians become more scientifically literate and numerate, their concerns grow even larger; as individualists become more scientifically literate and numerate, their concerns diminish all the more. (For this reason, levels of science literacy and numeracy have essentially no meaningful impact overall).

These data suggest that conflict over climate change, far from reflecting a deficit in public comprehension of scientific information, demonstrates how adept people are in forming beliefs that express their group commitments. Should that surprise anyone? Right or wrong, the risk perceptions of an ordinary individual won’t actually affect the climate: the contribution an individual makes to carbon emission levels by her personal behavior as a consumer, or to climate change policymaking by her personal behavior as a voter, is just too small to matter. If, however, an individual (whether a university professor in Massachusetts or an oil-rig worker in Oklahoma) forms a belief about climate change that is heretical within her community, she might well forfeit the friendship and respect of people she depends on most for support in her everyday life.

Because it’s in the rational interests of ordinary people to conform their beliefs to those that predominate in their cultural groups, it’s also not surprising that science literacy and numeracy magnify cultural polarization. People who know more about science and have a greater facility with technical reasoning can use those skills to find even more evidence that supports their culturally congenial beliefs.

Of course, if we all follow this strategy of belief formation simultaneously, the collective outcome could be a disaster. I’m not hurt when I adopt a belief that “fits” my values but that is wrong, as a matter of scientific fact; but I and many others might well suffer harm if society adopts policies that don’t reflect the best available science about consequential societal risks. Because we live in a democracy, moreover, the risk that society will fail to adopt scientifically enlightened policies goes up as individuals of diverse cultural affiliations form the impression that it is in their expressive interest to adopt beliefs that affirm their groups’ values over their rivals’.

So back to President Obama and his role in the climate change debate. I think it is one of his Administration’s responsibilities to foster a science communication environment that spares us from these sorts of tragic conflicts between individual expressive interests and collective welfare ones.

When our leaders talk about risk, they convey information not only about what the scientific facts are but also what it means, culturally, to take stances on those facts. They must therefore take the utmost care to avoid framing issues in a manner that creates the sort of toxic deliberative environment in which citizens perceive that the positions they adopt are tests of loyalty to one or another side in a contest for cultural dominance.

Where, as is true in the global warming debate, citizens find themselves choking in a climate already polluted with such resonances, then leaders and public spirited citizens must strive to clean things up—by creating an alternative set of cultural meanings that don’t variously affirm and threaten different groups’ identities.

 In that sort of environment, we can rely on the trust in science and scientists common to the overwhelming majority of cultural communities in our society to guide citizens toward acceptance of the best available science—much as it has on myriad other issues so numerous, so mundane (“take penicillin for strep throat”; “use a GPS system to keep from getting lost”) that they are essentially taken for granted.

In his Rolling Stone essay, Al Gore calls the debate over climate change a “struggle for the soul of America.” He’s right; but that’s exactly the problem. In “battles” over “souls,” citizens of a diverse, pluralistic society will naturally disagree—intensely. We’d all be better off if the issue had never come to bear connotations so fraught. Obama’s primary science communication task now is to lower the stakes.

 It won’t be easy. But any progress will depend indispensably on respecting the separation of science communication from soulcraft.

 President Obama, at least, seems to actually get that.


Prison Overcrowding, Recidivism & Crime

I was one of many, many experts contributing to briefs to the Supreme Court on this case.  In a 5-4 decision, the Court upheld a decision to require California to reduce the number of prisoners to a number that the state itself deemed safe for inmates.  Part of the Supreme Court's calculus involved weighing potential risks and benefits to public safety involved.  The majority cited expert testimony (based on numerous studies) that lowering prison populations may, on net, enhance public safety.   


More Immigrants, Less Crime? 

Like seemingly every other major cultural flashpoint (guns, the death penalty, and even abortion), both sides of the immigration debate have seized on anti-crime arguments. No one in the mainstream debate disputes that immigrants, on average, are less likely to commit crimes than native-born citizens, but I doubt that is very convincing to supporters of the new immigration law. There have also been several high-profile crimes committed by immigrants in Arizona, though I doubt those have swayed opponents of the new law. I suspect that, as with other debates about the sources of crime, the evidence is culturally loaded enough to make it hard for anyone who feels passionately about the issue to process contrary information. On the bright side -- and unlike gun control, capital punishment and abortion law -- nearly everyone agrees that immigration reform is needed. There also used to be a number of Republicans like McCain who campaigned on the issue. There's no predicting how the issue will play out this round, but I doubt that arguments about crime will be decisive. It does, however, provide a rich field for anyone interested in doing empirical research into the way cultural cognition shapes receptivity to arguments and information about immigration!


Cultural Cognition and the Supreme Court

The NYT has an interesting op ed by Charles M. Blow today.  What I find most interesting isn't the notion that opposition to abortion is waxing, but the way this appears to be tied to attitudes about the Supreme Court.  Here's a little clip from the side graph to the article.  

Basically public perception appears to have reversed course after Obama was elected, with more Americans thinking that the Court is more liberal now that Obama has been elected and Sotomayor appointed.  While there are some interesting theories about justices trending liberal over their tenures, I suspect that more obsessive SCOTUS watchers would, whether they are happy or upset by it, say that the Court has either maintained its ideological balance or trended conservative in recent years.  

Why does public perception data seem to trend the other way? It's a small change, to be sure, but I wonder if perceptions about the Court aren't the product of cultural cognition. If so, then it would make sense that people who see the country as a whole as becoming more liberal under Obama would see the Court, too, as becoming more liberal. As a cultural touchpoint, it would be disconcerting to people at both ends of the ideological spectrum to think that Obama has had no impact on the ideology of the Court or -- even more disconcerting for ideologues on both sides -- that the Court might trend conservative during his administration. Just a theory for now -- we'd need more data to test it.


NPR Story on Climate Change

Christopher Joyce has a nice story on how cultural cognition shapes perceptions of climate change:   

"Basically the reason that people react in a close-minded way to information is that the implications of it threaten their values," says Dan Kahan, a law professor at Yale University and a member of The Cultural Cognition Project. 

Have a listen!


NYT Sunday Magazine Calls for End to Cognitive Illiberalism

We were delighted to discover that the CCP's study of the Supreme Court's decision in Scott v. Harris made it into the New York Times Sunday Magazine's Ninth Annual Year in Ideas (standard for selection: "the most clever, important, silly and just plain weird innovations..."). It was especially fitting to share that honor with Ruppy, the glow-in-the-dark dog, public fears of whom are being investigated in CCP's synthetic biology risk perception project.