Harvard Foreword on motivated cognition & constitutional law is now published. The basic argument is that the same interplay of cognitive & political dynamics that polarizes Americans over climate change & other risk issues polarizes them over the neutrality of the Supreme Court. Judges need help from communication science just as much as scientists do (although at least some Justices bear more responsibility for the communication problem in law than any scientist I can think of does for the one in public deliberations over risk regulation). There are two very thoughtful replies, one by Mark Tushnet & the other by Suzanna Sherry. I'll have to think their arguments over & see whether & how my position changes.
A friend asked me if I could supply him with graphic representations of data that illustrate the bimodal-- i.e., culturally polarized -- state of risk perceptions over climate change & contrast that distribution with a "normal" -- nonpolarized -- one on some other risk or issue. So I put together this:
The bottom histogram is the bimodal cultural distribution for perceptions of climate change risks. The top histogram is the normal distribution for nanotechnology risk perceptions. I selected nanotechnology as the comparison case not only because perceptions of its risk are not polarized but also because there is nothing that guarantees they will stay that way. Indeed, in our study Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009), we used nanotechnology risk perceptions to test the hypothesis that cultural predispositions can induce biased assimilation & polarization when people are exposed to information about a novel risk, one about which they had little if any prior knowledge and on which they were not polarized prior to information exposure:
(1) the top histogram is a picture of a (deliberatively) "healthy" distribution of risk perceptions;
(2) the bottom histogram is a picture of a "pathological" one; and
(3) among the goals of the science of science communication should be to learn to identify risk sources that are vulnerable to becoming infected with this pathology -- as nanotechnology evidently is -- and to perfect techniques for building up their resistance to it (techniques for treating pathologies are critical too -- but it is a lot harder, I think, to change polarizing meanings than it is to stifle their formation).
I definitely agree that President Obama should be taking the lead to improve public comprehension of climate change science. But I suspect I have a very different opinion on what the President should be trying to communicate—also how and when. What the public needs, in my view, is not more information about climate change, but a new, more inclusive set of cultural idioms for discussing this issue.
- First, public controversy is strongly associated with differences in cultural or group values. People who subscribe to an individualistic, pro-market worldview tend to see climate change risks as small, while people who subscribe to an egalitarian, wealth-redistributive worldview tend to see them as large.
- Second, differences in science literacy (how knowledgeable people are about basic science) and numeracy (a measure of facility with quantitative, technical reasoning) magnify cultural polarization. As egalitarians become more scientifically literate and numerate, their concerns grow even larger; as individualists become more scientifically literate and numerate, their concerns diminish all the more. (For this reason, science literacy and numeracy have essentially no meaningful impact on climate change risk perceptions overall.)
These data suggest that conflict over climate change, far from reflecting a deficit in public comprehension of scientific information, demonstrates how adept people are in forming beliefs that express their group commitments. Should that surprise anyone? Right or wrong, the risk perceptions of an ordinary individual won’t actually affect the climate: the contribution an individual makes to carbon emission levels by her personal behavior as a consumer, or to climate change policymaking by her personal behavior as a voter, is just too small to matter. If, however, an individual (whether a university professor in Massachusetts or an oil-rig worker in Oklahoma) forms a belief about climate change that is heretical within her community, she might well forfeit the friendship and respect of people she depends on most for support in her everyday life.
Because it’s in the rational interests of ordinary people to conform their beliefs to those that predominate in their cultural groups, it’s also not surprising that science literacy and numeracy magnify cultural polarization. People who know more about science and have a greater facility with technical reasoning can use those skills to find even more evidence that supports their culturally congenial beliefs.
Of course, if we all follow this strategy of belief formation simultaneously, the collective outcome could be a disaster. I’m not hurt when I adopt a belief that “fits” my values but that is wrong, as a matter of scientific fact; but I and many others might well suffer harm if society adopts policies that don’t reflect the best available science about consequential societal risks. Because we live in a democracy, moreover, the risk that society will fail to adopt scientifically enlightened policies goes up as individuals of diverse cultural affiliations form the impression that it is in their expressive interest to adopt beliefs that affirm their groups’ values over their rivals’.
So back to President Obama and his role in the climate change debate. I think it is one of his Administration's responsibilities to foster a science communication environment that spares us from these sorts of tragic conflicts between individual expressive interests and collective welfare ones.
When our leaders talk about risk, they convey information not only about what the scientific facts are but also what it means, culturally, to take stances on those facts. They must therefore take the utmost care to avoid framing issues in a manner that creates the sort of toxic deliberative environment in which citizens perceive that the positions they adopt are tests of loyalty to one or another side in a contest for cultural dominance.
Where, as is true in the global warming debate, citizens find themselves choking in a climate already polluted with such resonances, then leaders and public spirited citizens must strive to clean things up—by creating an alternative set of cultural meanings that don’t variously affirm and threaten different groups’ identities.
In that sort of environment, we can rely on the trust in science and scientists common to the overwhelming majority of cultural communities in our society to guide citizens toward acceptance of the best available science—much as it has on myriad other issues so numerous, so mundane (“take penicillin for strep throat”; “use a GPS system to keep from getting lost”) that they are essentially taken for granted.
In his Rolling Stone essay, Al Gore calls the debate over climate change "a struggle for the soul of America." He's right; but that's exactly the problem. In "battles" over "souls," citizens of a diverse, pluralistic society will naturally disagree—intensely. We'd all be better off if the issue had never come to bear connotations so fraught. Obama's primary science communication task now is to lower the stakes.
It won’t be easy. But any progress will depend indispensably on respecting the separation of science communication from soulcraft.
President Obama, at least, seems to actually get that.
I was one of many, many experts contributing to briefs to the Supreme Court in this case. In a 5-4 decision, the Court upheld an order requiring California to reduce its prison population to a level the state itself deemed safe for inmates. Part of the Supreme Court's calculus involved weighing the potential risks and benefits to public safety. The majority cited expert testimony (based on numerous studies) that lowering prison populations may, on net, enhance public safety.
As with seemingly every other major cultural flashpoint (guns, the death penalty, and even abortion), both sides of the immigration debate have seized on anti-crime arguments. No one in the mainstream debate disputes that immigrants, on average, are less likely to commit crimes than native-born citizens, but I doubt that is very convincing to supporters of the new immigration law. There have also been several high-profile crimes committed by immigrants in Arizona, though I doubt those have swayed opponents of the new law. I suspect that, as with other debates about the sources of crime, the evidence is culturally loaded enough to make it hard for anyone who feels passionately about the issue to process contrary information. On the bright side -- and unlike gun control, capital punishment and abortion law -- nearly everyone agrees that immigration reform is needed. There also used to be a number of Republicans like McCain who campaigned on the issue. There's no predicting how the issue will play out this round, but I suspect that arguments about crime are unlikely to be decisive. It does, however, provide a rich field for anyone interested in doing empirical research into the way cultural cognition shapes receptivity to arguments and information about immigration!
The NYT has an interesting op ed by Charles M. Blow today. What I find most interesting isn't the notion that opposition to abortion is waxing, but the way this appears to be tied to attitudes about the Supreme Court. Here's a little clip from the side graph to the article.
Basically public perception appears to have reversed course after Obama was elected, with more Americans thinking that the Court is more liberal now that Obama has been elected and Sotomayor appointed. While there are some interesting theories about justices trending liberal over their tenures, I suspect that more obsessive SCOTUS watchers would, whether they are happy or upset by it, say that the Court has either maintained its ideological balance or trended conservative in recent years.
Why does public perception data seem to trend the other way? It's a small change, to be sure, but I wonder if perceptions of the Court aren't the product of cultural cognition. If so, it would make sense that people who see the country as a whole becoming more liberal under Obama would think that the Court, too, has become more liberal. As a cultural touchpoint, it would be disconcerting for people at both ends of the ideological spectrum to think that Obama has had no impact on the ideology of the Court or -- even more disconcerting for ideologues on both sides -- that the Court may trend conservative during his administration. Just a theory for now -- we'd need more data to test it.
Christopher Joyce has a nice story on how cultural cognition shapes perceptions of climate change:
"Basically the reason that people react in a close-minded way to information is that the implications of it threaten their values," says Dan Kahan, a law professor at Yale University and a member of The Cultural Cognition Project.
Have a listen!
We were delighted to discover that the CCP's study of the Supreme Court's decision in Scott v. Harris made it into the New York Times Sunday Magazine's Ninth Annual Year in Ideas (standard for selection: "the most clever, important, silly and just plain weird innovations..."). It was especially fitting to share that honor with Ruppy, the glow-in-the-dark dog, public fears of which are being investigated in CCP's synthetic biology risk perception project.
The New York Times is reporting on a Supreme Court case about a cross erected in 1934 on land that is now part of the Mojave National Preserve. While most of the Court seemed focused on whether the attempt to transfer the land to a private party (and thus avoid establishment issues) was proper, Justice Scalia went right for the establishment question:
The question of the meaning of a cross in the context of a war memorial did give rise to one heated exchange, between Justice Scalia and Peter J. Eliasberg, a lawyer for Mr. Buono with the American Civil Liberties Union Foundation of Southern California.
Mr. Eliasberg said many Jewish war veterans would not wish to be honored by “the predominant symbol of Christianity,” one that “signifies that Jesus is the son of God and died to redeem mankind for our sins.”
Justice Scalia disagreed, saying, “The cross is the most common symbol of the resting place of the dead.”
“What would you have them erect?” Justice Scalia asked. “Some conglomerate of a cross, a Star of David and, you know, a Muslim half moon and star?”
Mr. Eliasberg said he had visited Jewish cemeteries. “There is never a cross on the tombstone of a Jew,” he said, to laughter in the courtroom.
Justice Scalia grew visibly angry. “I don’t think you can leap from that to the conclusion that the only war dead that that cross honors are the Christian war dead,” he said. “I think that’s an outrageous conclusion.”
Stephen Burbank (in an email) points out that this has all the markers of cognitive illiberalism as described by our article in the Harvard Law Review on the Supreme Court's decision in Scott v. Harris:
Because they are not generally aware of their own disposition to form factual beliefs that cohere with their cultural commitments [judges] manifest little uncertainty about their answers to [policy questions turning on issues of disputed fact]. But much worse, because they can see full well the influence that cultural predispositions have on those who disagree with them, participants in policy debates often adopt a dismissive and even contemptuous posture towards their opponents' beliefs....
It may be cognitively difficult for someone with the cultural commitments of Justice Scalia to understand the cross as anything other than a universal symbol of profound respect, and to grapple with evidence to the contrary. But struggling with cultural blindspots is something we expect judges to do, particularly in cases involving questions about the establishment clause.
The Chicago Sun-Times and just about every other news source in the country is reporting the Supreme Court decision to hear a challenge to the city of Chicago's ordinance barring handgun ownership (McDonald v. Chicago, No. 08-1521). The debate over the ordinance and the case is ostensibly one about rights, but those rights are, as the majority opinion in Heller indicated, to be balanced with concerns about public safety. Just what public safety requires, though, is precisely what cultural cognition predicts people will disagree over. And, sure enough, as the headline in the Chicago Sun-Times (surely intended to generate outrage and rejoicing in different communities) states: "Gun advocates predict drop in crime if gun ban is lifted." McDonald, Heller, and their progeny may strike a compromise that appeals to a broad spectrum of the American public, or they may inflame cultural passions further. Only time will tell. But in the meantime, you can read up on the debate and the role cultural cognition plays in it here:
- Overcoming the Fear of Cultural Politics: Constructing a Better Gun Debate
- More Statistics, Less Persuasion: A Cultural Theory of Gun-Risk Perceptions
- Beyond the Gun Fight: The Aftermath of the Virginia Tech Massacre
- Modeling Facts, Culture and Cognition in the Gun Debate
- Gun Litigation: A Cultural Critique
It’s not clear that the case will ever make it to trial, but if it does, what sort of person would make the best juror for Pittsburgh Steelers quarterback “Big Ben” Roethlisberger in his defense to the civil sexual assault case filed against him? The answer might come as a surprise -- maybe not to Roethlisberger’s lawyers, but probably to many commentators involved in the debate over law and date rape. A study founded on the theory of cultural cognition suggests that Big Ben would likely be judged much more sympathetically by a jury dominated by women who subscribe to highly traditional gender norms than he would by one consisting literally of his “peers.”
Steve Easterbrook has a thoughtful post on his blog about cultural cognition and climate change. In the comments on his post, one of his readers describes a common problem that lay citizens often ascribe to experts: sometimes experts "lie for hire." It's true that members of the public may worry about that, but it's also true that individuals selectively attend to this particular risk when evaluating expert opinion. Dan is in the middle of a study in which he demonstrates precisely this phenomenon, and I'm sure he'll be posting about it here soon!
Dan's post suggests to me that cultural cognition might influence how reassuring individuals find a recent press release by Robotic Technology, Inc., setting the record straight. While the company's Energetically Autonomous Tactical Robot ("EATR" for short), "can find, ingest, and extract energy from biomass in the environment (and other organically-based energy sources)," RTI wants you to know that, "[d]espite the far-reaching reports that this includes “human bodies,” the public can be assured that the engine Cyclone (Cyclone Power Technologies Inc.) has developed to power the EATR runs on fuel no scarier than twigs, grass clippings and wood chips -- small, plant-based items for which RTI’s robotic technology is designed to forage. Desecration of the dead is a war crime under Article 15 of the Geneva Conventions, and is certainly not something sanctioned by DARPA, Cyclone or RTI."
(Thanks to Sarah Lawsky for the link.)
Story today in the NY Times on growing concern about the risks posed by artificial intelligence, and in particular the possibility that artificially intelligent systems (including ones designed to kill people) will become autonomous. Interesting to consider how this one might play out in cultural terms. Individualism should incline people toward low risk perception, of course. But hierarchy & egalitarianism could go either way, depending on the meanings that AI becomes invested with: if applications are primarily commercial and defense-related and the technology gets lumped in w/ nanotechnology, nuclear, etc., then egalitarians will likely be fearful, and hierarchs not; if AI starts to look like "creation of life" -- akin to synbio -- then expect hierarchs to resist, particularly highly religious ones. Wisely, AI stakeholders -- like nanotech & synbio ones -- recognize that the time is *now* to sort out what the likely risk perceptions will be so that they can be managed and steered in a way that doesn't distort informed public deliberation:
The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition becomes unshakable.
"If you wait too long and the sides become entrenched like with G.M.O.," he said, referring to genetically modified foods, "then it is very difficult. It's too complex, and people talk right past each other."
This is a topic ripe for investigation by cultural theorists of risk.
Quite often we'll be developing simulations based on models in which the DV is a 6-point Likert-style response scale. (Usually it's something like: strongly disagree / disagree / mildly disagree / mildly agree / agree / strongly agree.) For presentation purposes, it's often useful to collapse this into two categories: any form of agreement / any form of disagreement. In particular, when graphing, it is much easier to show one cut with confidence intervals than to show five cuts with confidence intervals.
In the past we've done this by converting the DV into a binary variable, then running a logistic regression. But this has numerous drawbacks. First and foremost, it simply throws away all the information about how strongly a person agrees or disagrees. As a result, errors tend to be larger than necessary. Second, and relatedly, the results often aren't as similar to the ologit regressions run against the more information-rich Likert DV as one would like. And third, if we want to report both kinds of findings -- binary and Likert-style -- this means reporting two separate models that don't always give the same results. In short, it's been a mess, and we've usually just chosen one or the other. But when we've gone with a logit regression, this seems like a sad choice to make just to achieve greater simplicity of presentation.
Recently, though, I had coffee with Jeff Lax-- of state-level policy analysis & Gelman Blog fame -- and he suggested something that, in retrospect, reveals that I'm still often trapped in a non-simulation mindset. In essence, he suggested this: "Run simulations on your ologit model & combine the simulations for the agree levels and again for the disagree levels; then take your confidence intervals from those combined simulations." In retrospect, that is so clearly the correct approach that the question is why I didn't see it myself. The answer, I think, is that I was still thinking in terms of the regression model rather than the simulations.
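Jeff's suggestion can be sketched in code. Here is a minimal illustration in Python: all the coefficient, cutpoint, and covariance numbers below are invented for the example (they are not estimates from any actual CCP model), and the simulation mechanics are a bare-bones stand-in for however you actually draw from your estimated ologit.

```python
# Sketch of the "combine within each draw" approach: simulate parameter
# vectors from an ordered-logit model's sampling distribution, compute the
# six category probabilities for each draw, sum the "agree" categories
# *within each draw*, and take confidence intervals from the summed draws.
# All numbers here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ologit estimates: one predictor plus five cutpoints
# separating the six Likert categories.
beta_hat = np.array([0.8])                          # coefficient on x
cuts_hat = np.array([-2.0, -1.0, -0.3, 0.4, 1.5])   # cutpoints
theta_hat = np.concatenate([beta_hat, cuts_hat])
V = np.diag([0.05, 0.04, 0.03, 0.03, 0.03, 0.04])   # assumed covariance

x = 1.0          # predictor value at which we want the estimate
n_sims = 10_000

# Draw parameter vectors from the (assumed normal) sampling distribution.
draws = rng.multivariate_normal(theta_hat, V, size=n_sims)

def category_probs(theta, x):
    """Six category probabilities from an ordered-logit parameter vector."""
    beta, cuts = theta[0], theta[1:]
    cdf = 1.0 / (1.0 + np.exp(-(cuts - beta * x)))   # P(y <= k), k = 1..5
    cdf = np.concatenate([[0.0], cdf, [1.0]])
    return np.diff(cdf)                              # P(y = k), k = 1..6

probs = np.array([category_probs(t, x) for t in draws])  # (n_sims, 6)

# Collapse within each draw: categories 4-6 are the "agree" responses.
p_agree = probs[:, 3:].sum(axis=1)

est = p_agree.mean()
lo, hi = np.percentile(p_agree, [2.5, 97.5])
print(f"P(agree) = {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The key point is that the agree-category probabilities are summed within each simulation draw, so the collapsed estimate and its confidence interval inherit all the information in the full six-category model rather than discarding it the way a binary logit does.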
In a recent NY Times article, Charles Siebert writes about the case brought by the Natural Resources Defense Council, arguing that naval use of sonar in certain exercises leads whales to flee to the surface too quickly, suffer the bends, and eventually die:
The question of sonar’s catastrophic effects on whales even reached the Supreme Court last November, in a case pitting the United States Navy against the Natural Resources Defense Council. The council, along with other environmental groups, had secured two landmark victories in the district and appellate courts of California, which ruled to heavily restrict the Navy’s use of sonar devices in its training exercises. The Supreme Court, however, in a 6-to-3 decision widely viewed as a setback for the environmental movement, overturned parts of the lower-court rulings, faulting them for, in the words of Chief Justice John Roberts’s majority opinion, failing “properly to defer to senior Navy officers’ specific, predictive judgments,” thereby jeopardizing the safety of the fleet and sacrificing the public’s interest in military preparedness by “forcing the Navy to deploy an inadequately trained antisubmarine force.” In his decision, Roberts went on to minimize, in a fairly dismissive tone, the issue of harm to marine life: “For the plaintiffs, the most serious possible injury would be harm to an unknown number of the marine animals that they study and observe.”
At the core of the dispute is how serious and plausible the anticipated harm to whales is and how serious and plausible the harm to the Navy would be if it were enjoined from conducting these exercises. While we haven't done any empirical research on this subject in particular, perceptions of risk related to military endeavors tend to be positively correlated with measures of egalitarianism, as are perceptions of environmental risk in general. The combination of environmental risk and military action in this issue makes it a twofer in terms of the cultural cognition of risk.
To kick off our new blog on our new website, what could be better than a little NY Times coverage? Ben Weber has a nice piece about Sonia Sotomayor's nomination. He mentions the Harvard Law Review piece covering Scott v. Harris, but there's another piece or two on judicial cognition that you might be interested in.