Gave a talk at this event, which was sponsored by the University of Minnesota political science department. Here are the slides (see below for a summary of what I was planning to & then did end up saying). My fellow panelists, Brendan Nyhan and Dhavan Shah, gave great talks, as did U of M's faculty commenter Paul Goren, who previewed some work he has been doing on the basic policy-choice competence of citizens who are low in political knowledge, as that concept is understood & measured in political science. It was clear that the political psychology program there, which consists of scholars from political science, communication & psychology, is radiating insight and passion.
I've been invited by the University of Minnesota political science department to make a presentation on the "political psychology of misinformation." Am mulling over what to say (have till 2:00 pm tomorrow, so no rush) & was thinking something along the lines of
- misinformation isn't really much of a problem unless antagonistic cultural meanings have become attached to an empirical claim about some fact that admits of scientific investigation;
- when such meanings have taken root, accurate information won't by itself do much good; and
- therefore the kind of misinformation to worry about is public advocacy that needlessly ties policy-relevant factual issues to antagonistic cultural meanings.
Climate change is the obvious example of the third point: hierarchical-individualist activists warn that concern over it is a smoke screen to conceal a plot to overthrow capitalism, while egalitarian-communitarian ones proffer climate change as evidence of the destructiveness of capitalist greed that necessitates severe restrictions on technology & markets. The positions are reciprocal -- by supplying vivid examples of exactly the mindset the other fears, each one actually advances the other's cause at the same time that it advances its own.
But nanotechnology risk concern furnishes an even nicer example, I think. It is, of course, sensible to investigate whether nanotechnology is hazardous, but at this point at least there's no meaningful scientific evidence that it is. Yet that hasn't stopped some advocacy groups from noisily clanging the alarm bells. Indeed, one sponsored a contest for the "best nano-free zone" symbol, with the winner to be emblazoned on t-shirts, bumper stickers, etc. The contest drew some 482 entrants.
Eighty percent of the public hasn't even heard of nanotechnology yet. This is a great way to make sure that their first exposure connects nanotechnology up with politicized issues like climate change and nuclear power. This strategy for creating cultural polarization, CCP found in an experimental study, has an excellent chance to succeed. Good to think ahead, too, since eventually climate change, like nuclear power, might lose its power to divide -- and then who would need the "public interest" groups dedicated to protecting us from the prospect that our cultural enemies will erect their worldview into a political orthodoxy?!
This might not be "misinformation" in the sense that the symposium sponsors have in mind -- but it is the sort of behavior that makes the public receptive to misinformation and impervious to sound science. It is a toxin, really, in the communication environment that democracies depend on for reliable transmission of scientific knowledge to their citizens.
Had a chance to look closely at the fascinating paper Elite Influence on Public Opinion in an Informed Electorate, American Political Science Review 105, 496-515 (2011), by my colleague John Bullock over in the Yale political science dep't.
The principal finding of the studies reported in the article is that members of the public who identify themselves as Democrats and Republicans (it is important to recognize that 30% or so do not; they are independents or others) are guided less by partisan cues (in the form of the positions of elites with recognizable partisan identities) than they are by policy substance when considering new policy proposals. This is contrary to the usual account of mass opinion found in political science.
But to me, at least, the most interesting finding was one relating to "need for cognition" (NFC), a measure of the individual disposition to engage in open-minded and effortful engagement with information. The idea that partisan cues guide opinion predicts that cues will be even more important for low NFC individuals, who tend to use heuristic reasoning (System 1 in Kahneman's terms), than they are for high NFC ones, who can be expected to use systematic reasoning (Kahneman's System 2). Bullock found this pattern in Democrats -- that is, the ones who were high in NFC paid even more attention to policy content and less to cues than Democrats who were low in NFC. But he found the opposite for Republicans: ones who were high in NFC paid more attention to cues and less to policy content. This was totally unexpected by Bullock, who, in line with his hypothesis that reliance on cues was overstated, expected NFC not to matter very much (and indeed it didn't overall, but only if one ignored the interaction with party).
What sort of (admittedly post hoc) interpretation might we place on this finding? Some might see it as supporting the position that ideologically motivated reasoning is more characteristic of conservatives than liberals. John Jost advances this argument in many papers, and Chris Mooney apparently argues for it in his forthcoming book, which I'm eager to read. Democrats, on this view, are thinking things through, Republicans reflexively adhering to ideological cues.
I don't find the "motivated reasoning asymmetry thesis" convincing. It seems to me that the balance of the evidence on politically motivated reasoning (including our own work on cultural cognition; see, e.g. "Saw a Protest") suggests that the tendency to fit perceptions of fact to one's ideological predispositions is pretty much uniform across the political spectrum (or in our work, cultural spectra).
Bullock's finding -- as truly fascinating as it is -- is in fact ambiguous in this regard. It does seem that high NFC Democrats are paying more attention to information content than high NFC Republicans, who are focusing instead on cues. But it is question begging (or in the case of the asymmetry thesis, conclusion assuming) to think that Republicans are thus displaying motivated reasoning. Indeed, since the ones in question are high in NFC, why imagine that the Republican study subjects are processing information heuristically -- or unconsciously fitting their positions to cues or anything else -- when they go with the partisan elite's position? It is possible that the high NFC Democrats and the high NFC Republicans are both using systematic reasoning (conscious, high-effort information processing) -- but for different ends. Democrats might be interested in trying to figure out what position fits their values best, in which case those with high NFC would turn their attention to information content rather than being guided (consciously or unconsciously) by partisan cues. Republicans, in contrast, might place more value on taking the position that expresses their identity or advances their group's ends, in which case those high in NFC would consciously view the position of party elites as the more important piece of information.
It is true that Republicans would be "more partisan" on this account (one could also say Democrats are more "ideological" in some sense -- that is, more focused on advancing their values than on promoting the cause of their party). Maybe some would think that is an unattractive thing (I'm not sure; I think ideological zealotry can also be worrying in many contexts).
But the point is that one could not, on this account, say Republicans are more prone to motivated reasoning. We can't say because we don't know what they (or the Democrats) are trying to get out of the information here.
This point generalizes: it is impossible to say anything about the quality of cognition that individuals display unless one knows what they are trying to accomplish. Too often in psychology, individuals who are using heuristic processing or even motivated systematic reasoning are viewed as irrational when in fact those forms of information processing are reliably advancing their interest in adopting stances that express their group identities. This is the main point of our paper on the "tragedy of the risk perceptions commons" and political conflict over climate change.
In any case, I hope Bullock is motivated (consciously or otherwise) to investigate further.
The panel was lots of fun & the other panelists — including USA Today’s excellent science reporter Dan Vergano, ocean scientist and marine sexologist Ellen Prager, and Molly Bentley of the Big Picture Science show — gave great talks & were really interesting to talk to. It was also an amazing honor to be involved in an AGU-sponsored event.
I’ve been asked to be part of an NAS working group that will develop a proposal on how science should figure in the training of lawyers. I’m going to put together a memo that outlines my own initial views and distribute it shortly before the first meeting (in mid January). Below is a condensed account of the points and themes that my memo will stress. But my ideas are provisional & formative; indeed, I share them to invite your reactions, which I expect to stimulate and educate my own thinking.
I welcome feedback not only on the substance but also on what to include in an annotated bibliography, the germ of which appears after the narrative section. The bibliography is not meant as a syllabus for a course; some of the items would no doubt be assigned in the sort of “forensic science literacy” course I am describing, but mainly I am trying to compile sources that help make the spirit & philosophy of such an offering more vivid for memo readers.
Feel free to respond via email to me (firstname.lastname@example.org).
A. General Points
1. What the aim should be—and what it shouldn’t
The 2009 NAS Forensic Science Report did more than identify various forms of proof that lack scientific validity. It also demonstrated that the U.S. legal system is suffused with a basic incomprehension of the fundamentals of sound science. The prospect that this deficit would continue to make the law receptive to specious forms of scientific evidence and unreceptive to valid ones motivated the Report’s core recommendation that the Nation’s universities be made instruments for bringing the “culture of science to law.”
Spelling out what law schools should be expected to contribute to this project is, in my view, the proper focus of the working group’s attention. Lawyers don’t need to be trained to do science but they can and should be taught to recognize what constitutes sound forensic science and what doesn’t. A model course should instruct students in the general concepts and procedures that one must understand in order to perform this recognition task reliably, including principles of validity; elements of probability; and methods of inquiry (more on these below). The goal should be to create an intellectual foundation broad and stable enough to support understanding of any particular type of legally relevant scientific material.
The aim of the working group should not be to try to compile a list of important current or future types of forensic science (e.g., fingerprints or neuroscience) or specific areas of study relating to the forensic process (e.g., reliability of witness identification or the pervasiveness of cognitive biases). These are matters that one would certainly imagine as the focus of either a more comprehensive or more advanced course in law and science, and certainly the greater the number of offerings law schools provide on law and science, the better. But the most critical objective is to identify the core offering (or core curricular content) that every program must include.
By confining its focus to what is in fact essential, the proposal will underscore the theme that U.S. law schools must treat imparting forensic science literacy as an essential part of their curricula. Lawyers and judges who possess basic forensic science literacy can be expected to handle competently whatever particular forms of scientific proof they must deal with; ones who lack this capacity cannot be expected to handle any well.
2. Principles of validity
Here I have in mind the concepts essential to systematic evaluation of the soundness of any general form of scientific inquiry or any particular application of it. These include validity proper: do the methods and design employed genuinely support the inferences that the researcher seeks to draw (internal validity), and from those can one draw reasonable inferences about the real-world phenomena that are being modeled by the study (external validity)? Are the measures employed reliable: do they generate consistent results, and do results agree across trials and researchers? The topic of causal inference is also usefully considered together with these issues, as is the concept of hypothesis testing.
The goal is to make students acquainted with the sorts of criteria that those who reliably distinguish sound from unsound science use for that purpose. I doubt that forensic science literacy as a reliable capacity to recognize sound and unsound forms of science as applied to law can be reduced to any sort of checklist of do’s & don’ts, rights & wrongs. But the elaborated development of a set of criteria for “valid” forensic science is likely a sensible way, pedagogically speaking, to conjure the sort of atmosphere in which such a capacity can be acquired and refined.
Such instruction can easily be illustrated with legal examples, because these are exactly the sorts of considerations whose incomprehension is reflected in the practices of forensic science that the 2009 Report criticizes.
3. Elements of probability
Concepts of probability animate the methods and testing strategies of science (and ultimately the philosophy of competing conceptions of scientific understanding, although that's a depth the forensic-science-literate lawyer needn't reach unless he or she is drawn there by curiosity). But, again, forensic-science-literate lawyers don't need to be trained to do sound science, only to recognize it. For this purpose, it is sufficient for them to attain, first, a conceptual grasp of the basic elements of probability (e.g., normal distributions and standard deviation; nonnormal distributions, such as "survival" curves; measurement error, sampling error, and estimation; p-values and confidence intervals; Bayes's Theorem and Bayesian inference) and, second, enough fluency with statistics to be able to read and comprehend the terms in which empirical results are ordinarily reported. They should also be made familiar with those characteristic shortcomings of unsound science that consist in an absence of genuine comprehension, as opposed to mechanical application, of statistical procedures. Once more, the law is filled with practical illustrations.
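To make the Bayes's Theorem point concrete, here is a minimal sketch -- with purely hypothetical numbers, and a function name of my own invention -- of the sort of exercise such a course might include: showing why a tiny random-match probability for a piece of forensic proof does not by itself imply a high probability that the defendant is the source (the so-called prosecutor's fallacy).

```python
# Illustrative only: hypothetical numbers, not any real forensic technique.
# Bayes's Theorem: P(source | match) =
#   P(match | source) * P(source) / P(match)

def posterior_source_probability(prior, match_prob_if_source, random_match_prob):
    """Posterior probability the defendant is the source, given a reported match."""
    p_match = (match_prob_if_source * prior
               + random_match_prob * (1 - prior))
    return match_prob_if_source * prior / p_match

# Suppose the technique always reports a match when the defendant really is
# the source, and falsely reports a match 1 time in 10,000 otherwise. If the
# pool of plausible sources is 1,000 people (prior = 1/1000):
posterior = posterior_source_probability(
    prior=0.001, match_prob_if_source=1.0, random_match_prob=0.0001)
print(round(posterior, 3))  # ~0.909 -- far from certainty, despite 1-in-10,000
```

The lesson a literate lawyer should take away is that the posterior depends as much on the prior (how many plausible alternative sources there are) as on the impressive-sounding random-match probability.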
4. Methods of inquiry
The idea here would be to make students familiar with the conventional sorts of methods that will inform the sorts of empirical work they are likely to encounter as lawyers. These include, at a high level of generality, observational vs. experimental approaches; but at a more particular level, it would be useful, too, to supply students with the materials necessary to enable informed and critical reflection on specific methods that bear on important, domain-specific matters of inquiry (e.g., clinical trials and "blinded" experimental methods, "laboratory" vs. "field" experimentation; multivariate regression vs. "matching" for observational studies). Such instruction can usefully be guided by the objective of making prospective lawyers familiar with the characteristic limitations of studies that employ one or another method -- ones associated not just with the misapplication or inappropriate use of one or another method but also with the inherent imperfection of all testing strategies.
Of course lawyers should also be taught that precisely because all methods are imperfect, it is a mistake—a popular misconception that reflects science illiteracy— to equate scientific validity with the conclusive or final resolution of an issue, or even with proof that in itself satisfies any particular legal standard such as “beyond a reasonable doubt.” No more is or can be expected of forensic proof than that it supply a decisionmaker with more evidence for believing (or disbelieving) a proposition than she otherwise would have had (and of course forms that supply anything less than that should not be tolerated).
B. Annotated bibliography
Useful sources: possible course materials, but mainly sources that illustrate or reflect the points above.
1. Principles of validity
National Research Council (U.S.), Committee on Identifying the Needs of the Forensic Science Community; Committee on Science, Technology, and Law, Policy and Global Affairs; and Committee on Applied and Theoretical Statistics. Strengthening Forensic Science in the United States: A Path Forward, (National Academies Press, Washington, D.C., 2009) -- relevant for all, really
Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993) (suggesting that principles of validity should be normative for evaluation of admissibility of expert proof)
Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999) (just kidding!)
United States v. Llera Plaza, 179 F. Supp. 2d 492 (E.D. Pa., Jan. 7, 2002) (holding, on the basis of a brilliant application of the principles of validity, that fingerprint identification has not been validated and hence that fingerprint experts should not be permitted to give conclusions on "matching" prints)
United States v. Llera Plaza, 188 F. Supp. 2d 549, 576 (E.D. Pa., March 3, 2002) (oops, nevermind!)
Curious what people would recommend here. Is there something for understanding of basic concepts of scientific validity that is as accessible and compact as say Abelson’s Statistics as Principled Argument, below?
2. Elements of probability
Finkelstein, M.O. and Fairley, W.B. A Bayesian Approach to Identification Evidence. Harvard Law Review 83, 489-517 (1970).
Finkelstein, M.O. Basic concepts of probability and statistics in the law, (Springer, New York, 2009).
Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309 (2011) (recognizing that significance testing for scientific studies is not a criterion of practical significance for causal inferences relating to law)
Abelson, R.P. Statistics as principled argument, (L. Erlbaum Associates, Hillsdale, N.J., 1995).
Gigerenzer, G. Calculated Risks: How to Know When Numbers Deceive You (Simon and Schuster, New York, 2002).
Motulsky, H. Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking, (Oxford University Press, New York, 2010).
3. Methods of inquiry
Fisher, F.M. Multiple Regression in Legal Proceedings. Colum. L. Rev. 80, 702-736 (1980).
Again, eager for suggestions here. There are lots of good "handbooks" for social science methods; but is there something that is more general, yet accessible and compact (again, compare Abelson)?
The answer is neither. Education level has a correlation pretty close to zero (r = -0.02, p = 0.11) with climate change risk perceptions.
I measured the association using the data from a nationally representative sample of approximately 1,500 Americans.
The data were collected by the Cultural Cognition Project as part of an ongoing study of science literacy, numeracy, & risk perception. In results that we describe in a working paper, science literacy and numeracy also have very minimal impact on perceptions of climate change -- assessed independently of cultural worldviews. Once cultural worldviews are taken into account, then the impact of science literacy & numeracy on climate change risk perceptions depends on people's cultural orientations: as they get more science literate & numerate, egalitarian communitarians see more risk, but hierarchical individualists see even less.
Or in other words, enhanced science literacy & numeracy are associated not with convergence on any particular view (supported by science or otherwise) but with greater cultural polarization.
Now education level, in contrast, is not associated with greater climate change polarization. If you want to fit your perceptions of risk to your values, you need to do more than go to college. You have to study really hard in math & science!
Actually, I'm sounding much more cynical here than I mean to. As we discuss in the paper, this pathology isn't intractable -- but if one doesn't even know that cultural polarization increases as science literacy does or why, then the problem is unlikely to go away.
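For readers who find the interaction pattern described above hard to visualize, here is a minimal simulation sketch. All the numbers (effect sizes, group coding, noise levels) are hypothetical inventions of mine, chosen only to reproduce the qualitative pattern: a pooled literacy-risk correlation near zero masking opposite-signed relationships within each cultural group.

```python
# Hypothetical simulation of the pattern: science literacy has ~zero overall
# correlation with climate change risk perception, but opposite-signed
# effects within the two cultural groups (i.e., polarization grows with it).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
literacy = rng.normal(size=n)                 # standardized science literacy/numeracy
worldview = rng.choice([-1.0, 1.0], size=n)   # -1 = hierarch./individ., +1 = egal./comm.

# Risk perception: a worldview main effect plus a literacy-by-worldview
# interaction, and essentially no literacy main effect.
risk = 0.8 * worldview + 0.5 * literacy * worldview + rng.normal(scale=0.5, size=n)

# Pooled correlation: close to zero, like the r = -0.02 reported above.
print("pooled r:", round(np.corrcoef(literacy, risk)[0, 1], 2))

# Within-group correlations: opposite signs -- more literacy, more polarization.
for w, label in [(-1.0, "hierarch/individ"), (1.0, "egal/comm")]:
    mask = worldview == w
    r = np.corrcoef(literacy[mask], risk[mask])[0, 1]
    print(label, "r:", round(r, 2))
```

The pooled correlation washes out because the two within-group slopes cancel; that is exactly why looking only at the overall effect of science literacy misses the story.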
Some smart researcher should invent a market measure of belief in climate change.
An index could be constructed that reflects things like investments in climate change adaptation, investments in new business opportunities created by a changing climate, and the offering of & changes in price for insurance against adverse impacts.
As the price of the index rises (or falls!) it would be evidence of market consensus on climate change. Market pricing is relevant to trying to figure out lots of things, obviously (indeed, there is a cottage industry in academia now to create prediction markets to compete with other types of predictive models). But the value of a market consensus measure here is cultural, too: for some citizens, market consensus will have a positive cultural resonance that scientific consensus (at least in this context) lacks. They could be expected, then, to give information from the market more engaged & open-minded attention.
People culturally disinclined to pay attention to markets as information might pay more attention to this one, too, and thus learn about the value of being more open-minded about information sources.
Last but not least, having a market measure of belief in climate change would be great for people trying to investigate dynamics of science communication -- for all the reasons I just gave.
Does “pepper spray” really hurt? The answer probably depends on the relationship between the ideology of the person who was sprayed and the ideology of the person asking/answering the question.
There is an internet buzz emerging over the suggestion by Fox News commentators & their equivalents that "pepper" spray (it's orders of magnitude more irritating than habanero) isn't all that painful. The debate is politically polarized along predictable lines.
If the demonstrators who were sprayed had been protesting abortion rights outside an abortion clinic, would there be an ideological inversion of the perceptions of how much the spray stings?
The answer is that we are unlikely even to get to that point in the discussion before we are already tied in knots over other facts relating to the behavior of the protesters and the police.
My colleagues at the Cultural Cognition Project and I did a study in which we instructed subjects to view a videotape of a protest that (we said) was broken up by the police to determine if the protestors had crossed the line between "speech" & intimidation. Our subjects said "yes" or "no" -- said they saw shoving, blocking or only exhorting, persuading -- depending on the subjects' own values & what we told them the protest was about & where it was taking place: an anti-abortion demonstration outside an abortion clinic; or an anti-don't-ask-don't-tell protest outside a college recruitment center.
This is an example of “cultural cognition,” the tendency of people to conform their view of legally relevant facts to their group values. It’s a big problem for law — not just because these dynamics could affect juries & judges but also because they generate divisive conflict over the political neutrality of the law. I wrote a long law review article about this problem recently but I admit (as I did there) that I don’t think there is any easy solution to it.
But here is one thing concerned citizens might do to try to counteract this dynamic. When they see something unjust like UC Davis incident, try to look & find out if the same injustice has been perpetrated against others whose political views are different from one's own -- & complain about both.
I looked for stories on abortion protesters being "pepper" sprayed. Found some, but not many. Either anti-abortion protesters don't get sprayed as often (in absolute terms) as Occupy Wall Street & anti-war protesters or the spraying doesn't get reported as often, perhaps because of the impact of cultural cognition in reporting of news (the facts that get reported are the ones we are predisposed to believe) . . . .
reposted from Balkinization
Gave a talk last night at Harvard Law School in connection with the Supreme Court Foreword. Below is an *outline* of points I made. It is *not* the text of my talk; I spoke extemporaneously & merely used the outline as something to think about as I thought about what to say in the afternoon. (Maybe I'll try to remember what I said -- it was not nearly so dense as this -- & write it down, but I doubt it!) "Plata's Republic" is a play on the case Brown v. Plata, in which Scalia's dissent looks motivated reasoning in the eye & proclaims it the truth about the role of empirical claims in democratic policy deliberations (I think the most surprising thing I've ever seen in U.S. Reports).
1. My basic claim is that political conflict over the neutrality of the Supreme Court is generated by psychological dynamics unrelated to whether the Justices are genuinely partisan or whether genuine neutrality is possible. That is, such conflict can be fully explained even assuming that neutrality is meaningful and that the Court is an acceptably neutral decisionmaker. If such conflict is undesirable—as I submit it is—then we must perfect our understanding of nature of these dynamics and of how to control them.
2. We can make sense of these dynamics by considering political conflict over policy-relevant science. Valid science does not publicly certify itself: because citizens are not in a position to reproduce scientific findings on their own, they must necessarily rely on social cues to certify for them what insights have been genuinely established through the use of valid scientific means. As a result of motivated reasoning, diverse groups of citizens will often construe those cues in opposing ways. When that happens, there will be political conflict over science notwithstanding its validity and notwithstanding the political impartiality and good faith of scientists. The existence of such conflict, moreover, will impede adoption of policies that effectively promote ends—including public health, national security, and economic prosperity—that diverse citizens agree are the appropriate objects of law.
3. The dynamics that generate political conflict over the Supreme Court’s constitutional decisionmaking are exactly the same ones that generate political conflict over policy-relevant science. Just as they cannot verify the validity of science on their own, so citizens cannot verify the neutrality of constitutional decisionmaking on their own; they must rely on social cues to certify the validity of such decisionmaking. In this context, too, motivated reasoning will often drive citizens of diverse values to diverge in their assessments of what those cues mean. Politically diverse citizens will disagree about the neutrality of constitutional decisionmaking in such circumstances despite the impartial application of valid doctrinal rules for enforcing the state’s obligation to be neutral in the manner that citizens of diverse values agree it should be. Such disagreement, moreover, will itself vitiate the value of the impartial application of those doctrines insofar as the benefit of neutrality consists largely in public confidence that the law is not imposing on them obligations incompatible with respect for the freedom of diverse citizens to pursue happiness on terms of their own choosing.
4. Both of these problems—political conflict over policy-relevant science and political conflict over constitutional law—reflect communication deficits. The impediment that political conflict poses to the adoption of informed policies is the price we pay for failing to recognize that doing valid science and communicating the validity of science are entirely different things. Likewise, some portion of the toll that political conflict over Supreme Court neutrality exacts from our experience of liberty—likely a very large portion of it—reflects our failure to recognize that doing neutral decisionmaking and communicating it are entirely different things too. How to shield public policy deliberations from the recurring influences—accidental and strategic—that trigger culturally motivated reasoning with respect to both policy-relevant science and constitutional neutrality are matters that admit of and demand scientific investigation in their own right.
5. Developing these sciences—fixing the communication failures of Plata’s Republic—is a mission that lawyers, and the institutions that train them, are ideally situated to address. It is a central part of the lawyer’s craft to match the content of information with the cultural cues (the social meanings) that enable its comprehension and that vouch for its credibility. Our experience with and sensitivity to this dimension of effective communication can thus help to remedy the sad and costly inattention to it reflected in public policy discourse. Moreover, because a training in law always has been and continues to be a form of preparation for the exercise of significant civic responsibility—we educate Presidents, after all, as well as Supreme Court Justices and Supreme Court advocates—it is perfectly natural that law schools should play a role in perfecting the science of science communication. It is all the more obvious that they are the natural location to address the judiciary’s own peculiar and ironic neglect of the fit between its professional conventions for doing neutral law and the cues that communicate constitutional neutrality. Not only are we ideally positioned to promote scientific inquiry into what effective neutrality communication demands; we are uniquely empowered and responsible for implementing what such investigation can teach us through the self-conscious and enlightened cultivation of our profession’s norms.
Harvard Foreword on motivated cognition & constitutional law is now published. The basic argument is that the same interplay of cognitive & political dynamics that polarize Americans over climate change & other risk issues polarize them over the neutrality of the Supreme Court. Judges need help from communication science just as much as scientists do (although at least some Justices bear more responsibility for the communication problem in law than any scientist I can think of does for the one in public deliberations over risk regulation). There are two very thoughtful replies, one by Mark Tushnet & the other by Suzanna Sherry. I'll have to think their arguments over & see whether & how my position changes.
A friend asked me if I could supply him with graphic representations of data that illustrate the bimodal-- i.e., culturally polarized -- state of risk perceptions over climate change & contrast that distribution with a "normal" -- nonpolarized -- one on some other risk or issue. So I put together this:
The bottom histogram is the bimodal cultural distribution for perceptions of climate change risks. The top histogram is the normal distribution for nanotechnology risk perceptions. I selected nanotechnology as the comparison case not only because perceptions of its risk are not polarized but also because there is nothing that guarantees that they will stay that way. Indeed, in our study Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009), we used nanotechnology risk perceptions to test the hypothesis that cultural predispositions can induce biased assimilation & polarization when people are exposed to information about a novel risk, one about which they had little if any prior knowledge and on which they were not polarized prior to information exposure:
(1) the top histogram is picture of a (deliberatively) "healthy" distribution of risk perceptions;
(2) the bottom histogram is a picture of a "pathological" one; and
(3) among the goals of the science of science communication should be to learn to identify risk sources that are vulnerable to becoming infected with this pathology -- as nanotechnology evidently is -- and to perfect techniques for building up their resistance to it (techniques for treating pathologies are critical too -- but it is a lot harder, I think, to change polarizing meanings than it is to stifle their formation).
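For anyone who wants to reproduce the two shapes for themselves, here is a minimal sketch using purely simulated data (these are not the CCP survey responses -- the means, spreads, and 0-10 scale are assumptions chosen just to illustrate the contrast between a unimodal and a bimodal distribution):

```python
# Hypothetical illustration with synthetic draws, NOT the CCP survey data.
import numpy as np

rng = np.random.default_rng(0)

# "Healthy" (nanotechnology-like) distribution: one mode near the middle
# of an assumed 0-10 risk-perception scale.
nano = np.clip(rng.normal(loc=5.0, scale=1.5, size=2000), 0, 10)

# "Pathological" (climate-change-like) distribution: a mixture of two
# cultural camps, one clustered low and one clustered high.
low = rng.normal(loc=2.0, scale=1.0, size=1000)
high = rng.normal(loc=8.0, scale=1.0, size=1000)
climate = np.clip(np.concatenate([low, high]), 0, 10)

def hist(x, bins=11):
    """Bin counts over the 0-10 scale; these stand in for the histograms."""
    counts, _ = np.histogram(x, bins=bins, range=(0, 10))
    return counts

print(hist(nano))     # a single central hump
print(hist(climate))  # two humps with a trough between them
```

Feeding either array to a plotting library would recover pictures like the two histograms above; the bin counts alone already show the central hump versus the two separated humps.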
I definitely agree that President Obama should be taking the lead to improve public comprehension of climate change science. But I suspect I have a very different opinion on what the President should be trying to communicate—also how and when. What the public needs, in my view, is not more information about climate change, but a new, more inclusive set of cultural idioms for discussing this issue.
- First, public controversy is strongly associated with differences in cultural or group values. People who subscribe to an individualistic, pro-market worldview tend to see climate change risks as small, while people who subscribe to an egalitarian, wealth-redistributive worldview tend to see them as large.
- Second, differences in science literacy (how knowledgeable people are about basic science) and numeracy (a measure of their facility with quantitative, technical reasoning) magnify cultural polarization. As egalitarians become more scientifically literate and numerate, their concerns grow even larger; as individualists become more scientifically literate and numerate, their concerns diminish all the more. (For this reason, levels of science literacy and numeracy have essentially no meaningful impact overall).
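The second finding -- polarization widening with literacy while the overall average barely moves -- can be sketched numerically. The toy model below is my own assumption for illustration (a 0-10 concern scale, equal-sized groups, opposite-signed literacy effects), not the study's actual data or regression:

```python
# Hypothetical toy model, not the study's data: if science literacy pushes
# the two cultural groups' risk perceptions in opposite directions, the gap
# between them grows while the population average stays flat.
import numpy as np

literacy = np.linspace(0, 1, 5)               # low -> high science literacy

egalitarian = 5 + 3 * literacy                # concern grows with literacy
individualist = 5 - 3 * literacy              # concern shrinks with literacy

gap = egalitarian - individualist             # polarization widens...
overall = (egalitarian + individualist) / 2   # ...but the average is flat

print(gap)      # increases from 0 to 6
print(overall)  # constant 5 at every literacy level
```

This is why literacy and numeracy can show "essentially no meaningful impact overall": the opposing group-level effects cancel in the aggregate.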
These data suggest that conflict over climate change, far from reflecting a deficit in public comprehension of scientific information, demonstrates how adept people are in forming beliefs that express their group commitments. Should that surprise anyone? Right or wrong, the risk perceptions of an ordinary individual won’t actually affect the climate: the contribution an individual makes to carbon emission levels by her personal behavior as a consumer, or to climate change policymaking by her personal behavior as a voter, is just too small to matter. If, however, an individual (whether a university professor in Massachusetts or an oil-rig worker in Oklahoma) forms a belief about climate change that is heretical within her community, she might well forfeit the friendship and respect of people she depends on most for support in her everyday life.
Because it’s in the rational interests of ordinary people to conform their beliefs to those that predominate in their cultural groups, it’s also not surprising that science literacy and numeracy magnify cultural polarization. People who know more about science and have a greater facility with technical reasoning can use those skills to find even more evidence that supports their culturally congenial beliefs.
Of course, if we all follow this strategy of belief formation simultaneously, the collective outcome could be a disaster. I’m not hurt when I adopt a belief that “fits” my values but that is wrong, as a matter of scientific fact; but I and many others might well suffer harm if society adopts policies that don’t reflect the best available science about consequential societal risks. Because we live in a democracy, moreover, the risk that society will fail to adopt scientifically enlightened policies goes up as individuals of diverse cultural affiliations form the impression that it is in their expressive interest to adopt beliefs that affirm their groups’ values over their rivals’.
So back to President Obama and his role in the climate change debate. I think it is one of his Administration’s responsibilities to foster a science communication environment that spares us from these sorts of tragic conflicts between individual expressive interests and collective welfare ones.
When our leaders talk about risk, they convey information not only about what the scientific facts are but also what it means, culturally, to take stances on those facts. They must therefore take the utmost care to avoid framing issues in a manner that creates the sort of toxic deliberative environment in which citizens perceive that the positions they adopt are tests of loyalty to one or another side in a contest for cultural dominance.
Where, as is true in the global warming debate, citizens find themselves choking in a climate already polluted with such resonances, then leaders and public spirited citizens must strive to clean things up—by creating an alternative set of cultural meanings that don’t variously affirm and threaten different groups’ identities.
In that sort of environment, we can rely on the trust in science and scientists common to the overwhelming majority of cultural communities in our society to guide citizens toward acceptance of the best available science—much as it has on myriad other issues so numerous, so mundane (“take penicillin for strep throat”; “use a GPS system to keep from getting lost”) that they are essentially taken for granted.
In his Rolling Stone essay, Al Gore calls the debate over climate change “a struggle for the soul of America.” He’s right; but that’s exactly the problem. In “battles” over “souls,” citizens of a diverse, pluralistic society will naturally disagree—intensely. We’d all be better off if the issue had never come to bear connotations so fraught. Obama’s primary science communication task now is to lower the stakes.
It won’t be easy. But any progress will depend indispensably on respecting the separation of science communication from soulcraft.
President Obama, at least, seems to actually get that.
I was one of many, many experts contributing to briefs to the Supreme Court on this case. In a 5-4 decision, the Court upheld a decision to require California to reduce the number of prisoners to a number that the state itself deemed safe for inmates. Part of the Supreme Court's calculus involved weighing the potential risks and benefits to public safety. The majority cited expert testimony (based on numerous studies) that lowering prison populations may, on net, enhance public safety.
Like seemingly every other major cultural flashpoint (guns, the death penalty, and even abortion), both sides of the immigration debate have seized on anti-crime arguments. No one in the mainstream debate disputes that immigrants, on average, are less likely to commit crimes than native-born citizens, but I doubt that is very convincing to supporters of the new immigration law. There have also been several high-profile crimes committed by immigrants in Arizona, though I doubt those have swayed opponents of the new law. I suspect that, as with other debates about the sources of crime, the evidence is culturally loaded enough to make it hard for anyone who feels passionately about the issue to process contrary information. On the bright side -- and unlike gun control, capital punishment and abortion law -- nearly everyone agrees that immigration reform is needed. There also used to be a number of Republicans like McCain who campaigned on the issue. There's no predicting how the issue will play out this round, but I suspect that arguments about crime are unlikely to be decisive. It does, however, provide a rich field for anyone interested in doing empirical research into the way cultural cognition shapes receptivity to arguments and information about immigration!
The NYT has an interesting op ed by Charles M. Blow today. What I find most interesting isn't the notion that opposition to abortion is waxing, but the way this appears to be tied to attitudes about the Supreme Court. Here's a little clip from the side graph to the article.
Basically public perception appears to have reversed course after Obama was elected, with more Americans thinking that the Court is more liberal now that Obama has been elected and Sotomayor appointed. While there are some interesting theories about justices trending liberal over their tenures, I suspect that more obsessive SCOTUS watchers would, whether they are happy or upset by it, say that the Court has either maintained its ideological balance or trended conservative in recent years.
Why does public perception data seem to trend the other way? It's a small change, to be sure, but I wonder if perceptions about the Court aren't the product of cultural cognition. If so, it would make sense that people who see the country as a whole becoming more liberal under Obama would see the Court, too, as becoming more liberal. As a cultural touchpoint, it would be disconcerting to people at both ends of the ideological spectrum to think that Obama has had no impact on the ideology of the Court or -- even more disconcerting for ideologues on both sides -- that the Court may trend conservative in his administration. Just a theory for now -- we'd need more data to test it.
Christopher Joyce has a nice story on how cultural cognition shapes perceptions of climate change:
"Basically the reason that people react in a close-minded way to information is that the implications of it threaten their values," says Dan Kahan, a law professor at Yale University and a member of The Cultural Cognition Project.
Have a listen!
We were delighted to discover that the CCP's study of the Supreme Court's decision in Scott v. Harris made it into the New York Times Sunday Magazine's Ninth Annual Year in Ideas (standard for selection: "the most clever, important, silly and just plain weird innovations..."). It was especially fitting to share that honor with Ruppy, the glow-in-the-dark dog, public fears of whom are being investigated in CCP's synthetic biology risk perception project.
The New York Times is reporting on a Supreme Court case about a cross erected in the Mojave National Preserve in 1943. While most of the Court seemed focused on whether the attempt to transfer the land to a private party (and thus avoid establishment issues) was proper, Justice Scalia went right for the establishment question:
The question of the meaning of a cross in the context of a war memorial did give rise to one heated exchange, between Justice Scalia and Peter J. Eliasberg, a lawyer for Mr. Buono with the American Civil Liberties Union Foundation of Southern California.
Mr. Eliasberg said many Jewish war veterans would not wish to be honored by “the predominant symbol of Christianity,” one that “signifies that Jesus is the son of God and died to redeem mankind for our sins.”
Justice Scalia disagreed, saying, “The cross is the most common symbol of the resting place of the dead.”
“What would you have them erect?” Justice Scalia asked. “Some conglomerate of a cross, a Star of David and, you know, a Muslim half moon and star?”
Mr. Eliasberg said he had visited Jewish cemeteries. “There is never a cross on the tombstone of a Jew,” he said, to laughter in the courtroom.
Justice Scalia grew visibly angry. “I don’t think you can leap from that to the conclusion that the only war dead that that cross honors are the Christian war dead,” he said. “I think that’s an outrageous conclusion.”
Stephen Burbank (in an email) points out that this has all the markers of cognitive illiberalism as described by our article in the Harvard Law Review on the Supreme Court's decision in Scott v. Harris:
Because they are not generally aware of their own disposition to form factual beliefs that cohere with their cultural commitments [judges] manifest little uncertainty about their answers to [policy questions turning on issues of disputed fact]. But much worse, because they can see full well the influence that cultural predispositions have on those who disagree with them, participants in policy debates often adopt a dismissive and even contemptuous posture towards their opponents' beliefs....
It may be cognitively difficult for someone with the cultural commitments of Justice Scalia to understand the cross as anything other than a universal symbol of profound respect, and to struggle with evidence to the contrary. But struggling with cultural blindspots is something we expect judges to do, particularly in cases involving questions about the establishment clause.
The Chicago Sun-Times and just about every other news source in the country is reporting the Supreme Court decision to hear a challenge to the city of Chicago's ordinance barring handgun ownership (McDonald v. Chicago, No. 08-1521). The debate over the ordinance and the case is ostensibly one about rights, but those rights are, as the majority opinion in Heller indicated, to be balanced with concerns about public safety. Just what public safety requires, though, is precisely what cultural cognition predicts people will disagree over. And, sure enough, as the headline in the Chicago Sun-Times (surely intended to generate outrage and rejoicing in different communities) states: "Gun advocates predict drop in crime if gun ban is lifted." McDonald, Heller, and their progeny may strike a compromise that appeals to a broad spectrum of the American public, or they may inflame cultural passions further. Only time will tell. But in the meantime, you can read up on the debate and the role cultural cognition plays in it here:
- Overcoming the Fear of Cultural Politics: Constructing a Better Gun Debate
- More Statistics, Less Persuasion: A Cultural Theory of Gun-Risk Perceptions
- Beyond the Gun Fight: The Aftermath of the Virginia Tech Massacre
- Modeling Facts, Culture and Cognition in the Gun Debate
- Gun Litigation: A Cultural Critique