It's coming soon. But not before I get done learning from my class what they think. I also learned a lot from Randy Kennedy's lecture at Leslie College last week. I hope he writes up his lecture so that others can think about his reflections as well (I'm sure I'll say more about Kennedy in "part 3").
In addition to being well-crafted and informative, the posts were immensely heartening.
Written by and for people who do work relating to nuclear energy, both displayed keen awareness of the science of public risk perceptions and science communication. (Cultural cognition was featured, but was--very appropriately--not the only dynamic that was addressed.)
What's more, rather than the frustrated hand-wringing and finger-pointing that experts (and many others) often (understandably but not helpfully) display when confronted with public controversy over risk, both evinced an uncomplaining, matter-of-fact dedication to making sense of how the public makes sense of the world.
From Neutron Economy:
To summarize - providing education and facts are good, useful even - but on their own insufficient without presenting those facts in a context which engages with the deeply-held values of the audience. To produce actual engagement - and even inducement to support - requires producing a context of facts compatible with the values of those one is trying to reach. In other words, for the case of nuclear, it means going beyond education and comparative evaluation of risk (again, to emphasize, both of which are valid in and of themselves) and placing these within the framework of how this speaks to the values of the audience....
[I]t is the job of the nuclear professionals (as members of the "technical community") to do our best to provide an accurate technical framework for these evaluations of risk by the public, such that they can make the most sound decisions on risk. Meanwhile it is the job of nuclear communicators and advocates to speak to values, so as to produce more fair evaluations of both the benefits and risks of nuclear, particularly in the context of available energy choices.
So, “pure” facts don’t tend to change our minds very often. And surprisingly, presenting facts alone when encouraging a new perspective can often result in the opposite effect on people who disagree....
Which naturally leads to our next question, “If cultural influence is so strong on perceiving facts, is trying to educate people of the beneficial facts about nuclear energy hopeless?”
We agree with Steve’s answer, “Not at all.”
But the key is to frame our factual and technically accurate answers within the cultural framework understanding of those we are trying to engage.
Reading these words made me believe that it is not at all unrealistic to anticipate that the practice of science will in the not too distant future be happily and productively integrated with the science of science communication.
Is evoking emotion a means of communicating "factual information" on risk and the like? The Wittlin test
I would say "yes, so long as..." and then launch into a long, abstract account of emotion as a form of cognitive perception that is uniquely suited to apprehending the significance of information for goods a person values (see Damasio, Descartes' Error; Nussbaum, Upheavals of Thought) but that is also vulnerable to bias and hence manipulation, blah blah...
Maggie Wittlin, however, has sent me an email that convinces me there is a much simpler answer: unconditionally "yes" or unconditionally "no," depending on what the emotional appeal is about and what the cultural worldview is of the person answering the question!
Two recent cases (one argued today) seem to be asking the question: are images that cause strong emotional reactions toward the subject matter informative? Or are they mere advocacy? I think you'll get two different answers based on (1) whether you ask an egalitarian or a hierarch (serious individualists might be consistent) and (2) which case you ask about:
On the right, we have the Texas sonogram case, where CJ Edith Jones writes, "Though there may be questions at the margins, surely a photograph and description of its features constitute the purest conceivable expression of 'factual information.' If the sonogram changes a woman’s mind about whether to have an abortion -- a possibility which Gonzales says may be the effect of permissible conveyance of knowledge, Gonzales, 550 U.S. at 160, 127 S. Ct. at 1634 -- that is a function of the combination of her new knowledge and her own 'ideology' ('values' is a better term), not of any 'ideology' inherent in the information she has learned about the fetus."
On the left, we have the challenge to the FDA cigarette warning label regulations, where "Stern also argued today that smokers do not fully understand tobacco’s harmful effect on health. The images, he argued, communicate the risk of smoking more effectively than do text warnings." On the other hand, "Noel Francisco, representing R.J. Reynolds Tobacco Co. in the dispute, said the labels cross the line from fact-based to issue advocacy. The government is triggering a negative emotional reaction."
In part 1, I argued that what the Trayvon Martin case means won’t turn on what the facts are found to be.
On the contrary, what we understand the facts to be will turn on what the case means to us as members of one or another cultural group.
Public reactions to the case display the characteristic signature of cultural cognition--the tendency of people to fit the perception of legally consequential facts to their group commitments.
The influence of cultural cognition explains why people with different outlooks and identities are forming such strong and divergent understandings of what happened despite their having almost no clear evidence to go on.
And it predicts (on the basis of experimental studies) that they are likely to continue to be divided just as bitterly no matter how much evidence comes to light—even if it turns out, say, that an unobserved neighbor made a digital recording of the attack with his or her cell phone (or high-resolution camera).
But as I said in my last post, this conclusion doesn’t mean there’s no point talking about the case. We should be addressing the meanings that divide us on an issue like this, because they divide us on lots of things—not just the use of violence by individuals of one race on those of another, or even the use of it by the police against private citizens, but also matters as diverse as whether climate change is occurring or whether schools should vaccinate pre-adolescent girls against HPV.
This sort of division, in my view, is a barrier to our coming to democratic consensus on a wide variety of policies that promote our common welfare in ways perfectly compatible with our diverse cultural values.
The question, in my view, is how we might use the Trayvon Martin case as an occasion for a meaningful discussion about meanings in our political life.
In this post, I’ll identify how not to do it.
2. Replaying history: “shall issue,” “stand your ground,” and the culture of honor
It turns out that we have been “discussing” cultural meanings since pretty much the start of this affair. But we’ve been doing it in the idiom of culturally motivated empirical assertions about the impact of law.
Two laws, in particular—one relating to guns and the other to the use of self-defense.
Florida is one of the 38 states with so-called “shall issue” laws, which essentially mandate that any adult citizen who has not been convicted of a felony or diagnosed with a mental illness be issued a permit to carry a concealed firearm in public.
It is also one of a dozen or so states that has recently enacted “stand your ground” laws, which provide that a person “who is attacked in any [public] place where he has a right to be has no duty to retreat” before resorting to deadly force to defend him- or herself from a potentially lethal assault. (Media reports miscalculate the number—apparently counting laws that existed before the recent spate of “stand your ground” enactments and also mixing in ones that relate to the use of deadly force in the home.)
George Zimmerman, the shooter in this case, was carrying a concealed handgun pursuant to a “shall issue” license. He also asserts that his fatal shooting of Martin—whom Zimmerman was tailing because he looked “suspicious”—was an act of self-defense.
Unsurprisingly, there has been a barrage of commentaries attributing violent assaults to “shall issue” and “stand your ground” laws, and a counter-barrage crediting these laws with reducing the incidence of violent crime.
These empirical arguments are specious. Indeed, they are part and parcel of a longstanding cultural division in our political life. Zealots who crave (or indeed profit from) such debate are exploiting the Trayvon Martin case to deepen that division—crowding out discussion of things that really matter.
a. The evidence. There is no persuasive empirical evidence that “shall issue” laws have any impact on the rate of violent crime.
Don’t take my word for it: that's the conclusion the National Academy of Sciences reached in an “expert consensus” report, which examined numerous empirical studies on the matter and concluded that it was simply impossible to say one way or another whether such laws increase crime or instead decrease it as a result of their effect in deterring violent predation.
The evidence on how “stand your ground” laws have affected violent-crime rates is no more conclusive. Indeed, it’s hard to conceive of how it could be.
These laws have all been enacted in the last decade. Yet the rule that a person can “stand his ground”—that he has no duty to retreat before using deadly force in self-defense—has been the majority rule among U.S. states for over a century. It was already the rule, in fact, in many of the states that have recently adopted “stand your ground” laws (e.g., Georgia, Indiana, Kentucky, Montana, Oklahoma, Utah, Washington, and West Virginia).
Before it enacted its “stand your ground” law, Florida apparently did make the lawful use of deadly force in self-defense conditional on a duty to avail oneself of any safe route of retreat, at least when an individual was attacked outside his or her home. But violent crime has decreased in that state over the last decade.
Indeed, violent crime has decreased throughout the U.S. during that time. Identifying all the potential causes for this trend, and disentangling them from one another in order to determine what impact (if any) enacting or not enacting a “stand your ground” law has had on the velocity of crime abatement in any particular state, would involve overcoming all the statistical difficulties that led the National Academy of Sciences to toss its hands up in the air when it tried to measure the impact of “shall issue” laws on violent crime.
Any commentator who asserts with confidence that either “stand your ground” laws or “shall issue” laws increase or decrease crime simply doesn’t know what he or she is talking about.
b. Culture, cognition, and political opportunism. What there is persuasive empirical evidence of, however, is the biasing impact of cultural cognition on individuals’ assessments of the impact of laws like these.
Individuals with egalitarian, communitarian values—for whom the gun is a noxious symbol of patriarchy, racism, indifference to others, and hostility to reason—predictably construe the evidence as showing that lax gun control laws increase deadly violence.
In contrast, those with hierarchical and individualistic worldviews—for whom the gun is associated with positive values such as courage, self-reliance, and honor—predictably fit their perceptions of the evidence to the culturally congenial conclusion that shall issue laws decrease homicide rates.
As a result of these same dynamics, moreover, they both tend to misperceive that the weight of expert evidence is on their side.
The same cultural divisions mark reactions to the duty to retreat in self-defense laws. Indeed, the advent of the “stand your ground” movement is intimately connected to cultural conflict over guns.
As indicated, the motivation for these statutes wasn’t to change the law. On the contrary, it was to provoke culturally grounded conflict.
The biggest threat to the gun industry is not that guns will be regulated out of existence. It is that future generations of Americans, as they become progressively more removed from the cultural norms that motivate people to buy guns, will simply lose interest in owning them.
Orchestrated by the NRA, the campaign to enact “stand your ground” laws is a booster shot for those norms. By design, “stand your ground” laws radiate individualistic and hierarchical values. The enactment of them—particularly over the predictable, and predictably strident, opposition of groups associated with egalitarian and communitarian values—broadcasts the vitality of a pro-gun ethos, a signal that can be expected to inculcate the same in those who receive that signal.
c. We’ve seen this before; enough already! The cultural battle over “stand your ground” laws is actually an historical replay.
Just over a century ago, courts in the South and West adopted the “no retreat” rule. They called it the “true man” doctrine, a label that recognized that a man whose character is “true” (that is, in order, or straight, like a “true beam”) appropriately values his own liberty more than the life of someone who wrongfully threatens it.
Northeastern jurists and commentators denounced this departure from the traditional “retreat to the wall” position as an expression of the “feeling which is responsible for the duel, the war, for lynching.” The echo of the Civil War reverberated through this legal debate for some three decades.
Then, in one of the most brilliant demonstrations of statesmanship in the history of American jurisprudence, Justice Holmes defused this controversy by draining it of its expressive significance.
It’s futile, he reasoned in the 1921 decision of Brown v. United States, for the law to demand that someone who faces a deadly threat “pause to consider whether a reasonable man might not think it possible to fly with safety.” “Detached reflection cannot be demanded in the presence of an uplifted knife."
Just like that, the “true man doctrine” became the “scared shitless man defense.” The South and the West got the rule they wanted, but only after it had been gutted of the meaning that galled the Northeast.
Everyone lost interest, and the issue went away. Gun control essentially took its place as the front of the battle over the status of honor norms in U.S. law and culture.
But then 85 years later the NRA came to the brilliant realization that it could subsidize the culture war over guns by reviving the “true man” doctrine in the form of the new, Clint-Eastwoodesque “stand your ground” laws.
Not surprisingly, the most receptive states were located in regions of the country that already had the “true man” doctrine.
But no matter: the point wasn’t to change the law; it was to agitate and inflame.
The NRA could count on agitation, of course, only if the egalitarian communitarian opponents of the honor culture—the descendants of the “true man” critics—took the bait. Which of course, they have done. They'd be out of work too without this sort of conflict.
Hey—I didn’t know him. But I think I can safely say, “You are no Justice Holmes,” to the legions of commentators now seizing on the Trayvon Martin case as an occasion to raise the volume in equally tendentious and tedious “shall issue” and “stand your ground” debates.
I’d also like to tell them to just back off. Not only are you needlessly sowing division; you are destroying the prospects for a meaningful conversation about the values that—despite our cultural differences—in fact unite us.
Dan Kahan, Donald Braman, Geoffrey Cohen, John Gastil & Paul Slovic, Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition, 34 Law Human Behav 501 (2010).
Dan M. Kahan, Donald Braman, John Gastil, Paul Slovic & C. K. Mertz, Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception, 4 J. Empirical Legal Studies 465 (2007).
It ends up w/ 3 points:
1. When Dan says that “we’ll never know what happened”, I bet he means to include a future where we find a videotape of the event. As the CCP project has found, video doesn’t reduce factual dissensus when the questions being asked are about motive or judgment. Though a video of the shooting might tell us, for example, if Zimmerman approached Martin or vice versa, it almost certainly would leave enough in doubt to permit culturally motivated inferences. That said, missing in Dan’s account is an explanation of why this particular case has caught the public’s eye when other asserted self-defense moments do not. In part I think that it’s the mystery that is driving attention here. Conflicting accounts, new witnesses, allegations of a police cover-up: these are all ammunition for new media stories, which continue to keep the event fresh in the public’s mind.
2. In that light, Dan’s admonition to “cool it” is welcome, but I wonder if it threatens to deprive critics of the police department’s handling of the case of an avenue of public awareness. That is, it might be that informing (exciting) egalitarians about the stand-your-ground law is exactly how Martin’s allies managed to initially capture public attention – which was in turn necessary to encourage Florida’s authorities to step in and perform a new investigation of the facts. Without the law, this is merely a case about murky facts and police discretion. With it, it’s a national story.
3. Dan implicitly asserts that the stand your ground lawmaking was intended to provide a booster shot for gun owners. This suggests a testable hypothesis. As compared to states without such laws, but with similar pre-existing cultures of gun ownership, stand your ground states should be losing their firearm traditions more slowly. Though testing expressive effects of laws is notoriously difficult, this seems like a nice experiment that an enterprising VAP might want to take on.
When I say "never know," I don't contemplate discovery of a video. My point was that even if a video did magically appear, it would not settle things -- just as videos didn't (couldn't) resolve factual disputes in Rodney King, Scott v. Harris, all the recent uses of force against occupy protesters, all the less recent cases (including ones in the S Ct) featuring abortion protesters, etc.
Without a video, of course, things don't become any clearer. Remember that in the Goetz case there were multiple witnesses & even a video-taped confession by Goetz in which he stated that his goal was to "murder" the teens whom he shot. Still, to this day, when people read about and discuss the case, they form conflicting perceptions of "what happened." And it seems pretty clear we'll have even less to go on in the Trayvon Martin case than that.
I don't want to spoil the conclusion, but outrage at the (now removed) prosecutor's initial decision not to press charges or investigate further didn't require culturally motivated agitation. Disgust was bi-partisan, or "multi-cultural," at the outset. Now the cultural battle lines are drawn; everyone's got the memo: it's "us" vs. "them."
Interesting, but I actually don't think a test of how the "stand your ground" laws have interacted with gun ownership & culture is the right test for what was intended. People often make their best guesses about the consequences of their actions, and then are proven wrong by events. The best proof is just to talk to the people involved. I don't think the NRA has really even concealed its motivation for pushing the "stand your ground" laws.
Found another really great book on-line:
But anyway, the work examines Dutch scientists' impressions of how their work and expertise were received in various public policy debates, including ones on H1N1 vaccination, flood control, and HPV vaccination of adolescent girls.
The analyses are based on "biographical narrative." At the beginning of the work, he explains this method, which involves analytically motivated synthesis of interviews with the scientists, supplemented with other materials, and presented in a form that uses story-telling elements not typical at all for social science work (unlike typical ethnography, the voice is much more internal, almost "first person").
I was really interested in vR's discussion of HPV, an issue the CCP group has also studied. I hadn't realized that the issue was controversial in the Netherlands, too (likely I should be embarrassed to say that). I did know that England didn't have any trouble implementing a national immunization program, so there are definitely some great lessons to be learned through comparative study.
Also hadn't realized that there was political dispute over expert flood control advice in the Netherlands. Actually, efficient flood management in Holland & other regions of the country is often offered as an example of what the successful integration of science into policymaking is supposed to look like!
Thanks to van Rijswoud & Radboud University for making his work widely available & at no charge!
If one were to judge from the media coverage—the dueling depictions of the characters of the shooter and his victim; the minute dissections of fragmentary witness statements; the “expert” voice-identification of screams picked up in the background of a 911 call; the high-resolution scrutiny of low-resolution video footage of the shooter in police custody that reveals the existence/absence of telltale wounds—one would think that the significance of the Trayvon Martin case turns (or ultimately will turn) decisively on the facts.
In actuality, the opposite is true: the significance we attach to the case will determine our perception of the facts; and because what it signifies turns on cultural meanings that divide our society, the members of different groups will form highly opposed understandings of what happened that terrible night.
Does that mean it’s pointless to be discussing the case?
On the contrary. In my view, the public agitation the case has provoked is evidence of how important it is for us to have a public conversation about the diversity of our cultural outlooks and their relation to law, and that this case is an ideal occasion for addressing that issue.
But if we insist that the discussion take the form of competing, culturally partial (and even culturally partisan) renditions of the facts, we are highly unlikely to engage the real issues in a universally meaningful way. And in that circumstance, we can be sure that the sources of agitation will persist.
I have more to say than it makes sense to put in one post. So regard this as installment 1 of 3.
1. Meanings are cognitively prior to fact
The Trayvon Martin case, polls unsurprisingly reveal, divides people along cultural lines.
In this sense, it is very much like a host of other high-profile types of cases: public altercations leading to a mixed-race killing (think Bernard Goetz and Howard Beach); the slaying (or mutilation; think Lorena Bobbitt) of sleeping men by female partners who allege chronic abuse; the prosecutions (William Kennedy Smith)—or not (Duke lacrosse)—of men alleged to have disregarded women's verbal resistance to sexual intercourse; forceful arrests of political protestors (Occupy Wall Street; Operation Rescue) pepper sprayed by police—or of fleeing drivers whose bodies are broken by the impact of their crashing cars (Scott v. Harris) or the fusillade of baton blows of their pursuers (Rodney King).
CCP has conducted experimental studies of cases like these. What we have found, in all of these contexts, is that people unconsciously form perceptions of fact that reflect their stance on the cultural meanings the cases convey.
Those committed to norms of honor and self-reliance, on the one hand, and those who value equality and collective concern, on the other; those who believe women warrant esteem for mastery of traditionally female domestic roles and those who believe women as well as men should be conferred status for success in civil society; those who place a premium on respect for authority and those who apprehend the abuse of it as a paramount evil—all see different things in these types of cases, even when they are forming their perceptions on the basis of the same evidence.
Moreover, members of all these groups know that what one sees (or claims to see; each group always suspects the other of disingenuousness) depends on who one is culturally speaking.
As a result, in controversies over these sorts of cases, those on both sides come to view competing factual claims as markers of opposing allegiances. The ultimate resolution of these facts in courts of law, in turn, becomes evidence of who counts and who doesn’t in our society.
These are identity-threatening conditions. It is the extreme anxiety that they provoke that explains how despite knowing next to nothing about what actually happened—because we have nothing more to go on than factual snippets embroidered with righteous denunciation in the media, or antiseptic renditions of the “facts of the case” in appellate reporters—we nevertheless become filled with passionate certitude about the events. The discovery that others disagree with us fills us with incredulity and rage.
And most extraordinary of all, this same environment of symbolic status competition explains why such disagreement persists in the face of the most compelling forms of evidence of all. Even when we literally see the events with our own eyes—as we do when they are recorded on video, e.g.—cultural cognition assures that we will disagree about what we are seeing.
We will disagree, in such instances, with those who hold values different from ours when we watch what we understand to be the same event.
Moreover, we will disagree with those who share our values if, as a result of a hidden experimental manipulation, we start with different impressions of the sort of event (abortion-clinic protest, or anti-war protest) we are watching.
Barely detectable above the cacophony in the Trayvon Martin case are a few lonely voices cautioning us not to jump to conclusions. We don’t really know enough about what happened, they rightly point out, to form such strong opinions.
But the truth is, we’ll never know what happened, because we—the members of our culturally pluralistic society—have radically different understandings of what a case like this means.
The questions are whether it makes sense to talk about that, and if so, what should we be saying?
Dan M. Kahan & Donald Braman, The Self-defensive Cognition of Self-defense, 45 Am Crim Law Rev 1 (2008).
Dan M. Kahan, The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law, 126 Harv. L. Rev. 1 (2011).
Dan M. Kahan, David A. Hoffman, Donald Braman, Danieli Evans & Jeffrey J. Rachlinski, They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction, 64 Stan. L. Rev. (forthcoming 2012).
It's remarkable and heartening to see how widespread the influence of the cultural theory of risk has become.
Here are three recent examples of articles that assess the importance of cultural predispositions for risk and science communication, none of which is about traditional environmental concerns:
- Griffiths, M. & Brooks, D.J. Informing Security Through Cultural Cognition: The Influence of Cultural Bias on Operational Security. Journal of Applied Security Research 7, 218-238 (2012).
Cultural bias will influence risk perceptions and may breed “security complacency,” resulting in the decay of risk mitigation efficacy. Cultural Cognition theory provides a methodology to define how people perceive risks in a grid/group typology. In this study, the cultural perceptions of Healthcare professionals to access control measures were investigated. Collected data were analyzed for significant differences and presented on spatial maps. The results demonstrated correlation between cultural worldviews and perceptions of security risks, indicating that respondents had selected their risk perceptions according to their cultural adherence. Such understanding leads to improved risk management and reduced decay of mitigation strategies.
- Daniel J. Decker, W.F.S., Darrick T. N. Evensen, Richard C. Stedman, Katherine A. McComas, Margaret A. Wild, Kevin T. Castle, and Kirsten M. Leong. Public perceptions of wildlife-associated disease: risk communication matters. Human Wildlife Interactions 6, 112–122 (2012).
Wildlife professionals working at the interface where conflicts arise between people and wild animals have an exceptional responsibility in the long-term interest of sustaining society’s support for wildlife and its conservation by resolving human–wildlife conflicts so that people continue to view wildlife as a valued resource. The challenge of understanding and responding to people’s concerns about wildlife is particularly acute in situations involving wildlife-associated disease and may be addressed through One Health communication. Two important questions arise in this work: (1) how will people react to the message that human health and wildlife health are linked?; and (2) will wildlife-associated disease foster negative attitudes about wildlife as reservoirs, vectors, or carriers of disease harmful to humans? The answers to these questions will depend in part on whether wildlife professionals successfully manage wildlife disease and communicate the associated risks in a way that promotes societal advocacy for healthy wildlife rather than calls for eliminating wildlife because they are viewed as disease-carrying pests. This work requires great care in both formal and informal communication. We focus on risk perception, and we briefly discuss guidance available for risk communication, including formation of key messages and the importance of word choices.
- Kaklauskas, A., et al. Passive house model for quantitative and qualitative analyses and its intelligent system. Energy and Buildings (in press), on-line publication available at http://dx.doi.org/10.1016/j.enbuild.2012.03.008.
The passive house, along with models of its composite parts, has been developed globally. Simulation tools analyze its energy use, comfort, micro-climate, quality of life and aesthetics as well as its technical, economic, legal/regulatory, educational and innovative aspects. Meanwhile the social, cultural, ethical, psychological, emotional, religious and ethnic aspects operating over the course of the existence of a passive house are given minimal attention or are ignored entirely. However, all the aspects mentioned must be analyzed in an integrated manner during the time a passive house is in existence. The authors of this article implemented this goal while they participated in two Intelligent Energy Europe programs, the Northpass and the DES-EDU projects. The Passive house model for quantitative and qualitative analyses and its intelligent system was developed during the time of these projects. The model and intelligent system are briefly described in this article, which ends with a case study.
from Legal Theory Blog (April 1, 2012)...
Kahan on Cultural Metacognition
Dan Kahan (Yale Law School, Cultural Cognition Project) has posted Cultural Metacognition on SSRN. Here is the abstract:
My concern in this Article is to explain the epistemic origins of theoretical disagreement in the study of law. Scholars who agree that the proper object of legal theory is to provide a correct account of the normative and positive foundations of law are still likely to disagree—intensely—about what theories will best achieve these ends. Does fairness or welfare best capture the normative point of law? Are judicial decisions best explained by the strategic interactions of legal officials (e.g., judges, presidents, senators) or are they explained by the norms of legal institutions and the explicit content of legally authoritative texts? Is the effect of tort law best predicted by neoclassical economics or by behavioral economic models? Disagreement about the correct answers to these questions is pervasive among legal theorists.
At first glance, it might seem that such disagreement doesn’t really require much explanation. Theoretical disagreements might be the result of incomplete evidence and the relatively early stage of development of relevant disciplines. As evidence accumulates and theories are refined, we might expect convergence in legal theory. But it turns out that this picture is as simplistic as it is intuitively attractive. Theoretical beliefs on seemingly unconnected subjects (the adequacy of rational actor models in predicting the effect of tort rules and the question whether preference-satisfaction provides ultimate value standards) tend to cohere in familiar ways. Patterns like this do not occur by chance. Instead, they are explained by what I call "cultural metacognition"--the systematic operation of cultural commitments at the metacognitive (or "theoretical") level.
This Article then develops an important application of the theory of cultural metacognition: metacognitive beliefs are themselves the product of cultural cognition. The Article reports the results of a pilot study that investigates the relationship between cultural evaluation of metatheoretical frameworks (or "meta-archetypes") and second order theoretical beliefs (beliefs about the truth or soundness of first order theory statements). The research reveals that relevant cultural differences between two distinct institutionally-structured micro-communities (one clustered in southern Massachusetts and the other clustered in southern Connecticut) explain differences in the acceptance of cultural cognition as a first-order theoretical framework. The broad implications of this result for legal theory and metatheory are then explored.
So I was lucky enough to have a person who was curious to know what I thought draw my attention to Gordon Gauchat's "Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010," published on-line today in the American Sociological Review.
Gauchat analyzes 35 yrs of responses to the General Social Survey item that measures how much "confidence" the public has in "the scientific community" and finds that the spread between liberals and conservatives has been widening in the last 15 years or so. Indeed, before that, there really wasn't any gap to speak of.
Gauchat had to make some judgment calls about how to carve up his data: e.g., whether & how to aggregate responses to the GSS item (which uses a crappy three-point response measure: "great deal of confidence," "only some" or "hardly any"); how to deal with the shifting proportion of respondents identifying as "liberal" or "conservative" over the time period; whether & how to try to break the data up into discrete time periods in order to assess trends (I suspect people who do time series work might take issue with his strategy); and what variables to include as "controls" in multivariate regressions.
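One of those judgment calls can be made concrete in a few lines. Here is a hypothetical sketch -- the responses and both codings are invented for illustration, not Gauchat's actual choices -- of two defensible ways to collapse the three-point GSS item into a binary measure:

```python
# Hypothetical illustration of one judgment call in carving up the GSS data:
# collapsing the three-point "confidence in the scientific community" item
# into a binary indicator. Neither coding is Gauchat's actual decision.

responses = ["great deal", "only some", "hardly any", "only some", "great deal"]

# Coding A: "great deal of confidence" vs. everything else
coding_a = [1 if r == "great deal" else 0 for r in responses]

# Coding B: "hardly any confidence" vs. everything else
coding_b = [1 if r == "hardly any" else 0 for r in responses]

print(coding_a)  # [1, 0, 0, 0, 1]
print(coding_b)  # [0, 0, 1, 0, 0]
```

Which coding one picks changes what the resulting trend line measures -- erosion of strong confidence versus growth of outright distrust -- which is why calls like these deserve scrutiny.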
But I think it's clear that the trend he points to is there. And that it's interesting -- indeed, thought provoking.
Here are some thoughts the paper has provoked in me:
1. A tale of two trends. The trend that Gauchat identifies looks pretty similar to the one that public opinion surveys identify in views on climate change. That issue started to polarize people on political/ideological lines sometime close to when conservatives and liberals started to disagree on the GSS "confidence" or "trust in science" item. Compare Gauchat's Figure 1 (which I've cropped at around the point when the trend he identifies starts; the uncropped Figure is in the inset to the right) with a couple of Figures that I've taken from Dunlap, R.E. & McCright, A.M. A Widening Gap: Republican and Democratic Views on Climate Change. Environment 50, 26-35 (2008), who summarize Gallup polling on climate change during this period:
2. Three possible meanings. I'm conjecturing, of course, but I suspect that these two trends are in fact linked. Whether they are is something that would have to be assessed with more evidence, of course. And even more important, any such assessment would have to be informed by some sort of hypothesis about what the link consists in. Here are three possibilities:
a. The "confidence" item doesn't mean what it says -- it means "how do you feel about climate change?" One possibility is that the political polarization on responses to the GSS item that started in the 1990s is just an indirect measure of the politicization of climate change. That is, as climate change became more salient as a partisan issue, the question "how much confidence do you have in the scientific community" started to bear a politicized resonance that generated the same pattern of responses. On this view, "how confident are you in scientists" is essentially just an indicator of a latent attitude toward climate change. It's also a relatively weak indicator: it doesn't provoke as much division, in fact, as the climate change issues (in Gauchat's Figure, the y-axis is the fraction of conservatives or liberals who selected "great deal of confidence" vs. "only some" or "hardly any" combined).
If conservatives (or a significant number of them) are translating the question "do you trust scientists" into the question "what do you think about climate change," moreover, then the answer isn't a very reliable indicator of how conservatives feel about scientists in general or in nonpoliticized settings.
b. The item means what it says -- and measures the cost that climate change has imposed on the credibility of scientists with conservatives. Alternatively, conservatives are answering the question they are being asked -- and the thing that has caused them to become less trustful generally of science is the climate change controversy. That would be very sad.
c. The item means what it says -- and is the source of climate change politicization. The final possible explanation for the linked trends (or the final one I can think of right now) is that the GSS item measures a genuine and growing distrust of scientists among conservatives by conservatives and that growing distrust is itself what caused conservatives to become distrustful of climate change science in the mid to late 1990s.
That strikes me as the least plausible explanation, actually. Why did conservatives just happen to get distrustful of scientists at that very moment?
Indeed, Gauchat's study would have lent more support to the hypothesis that some dispositional distrust of science is the cause of conservative resistance to climate-change science if he had found that conservatives distrusted scientists well before evidence of climate change started to accumulate. Because conservatives weren't more distrustful of scientists than liberals before the mid 1990s, his data actually undercut the assertion that conservatism is associated with anti-science or closed-minded reasoning styles.
Or so it seems to me; am eager to see how others react. Particularly Chris Mooney, a thoughtful proponent of the "asymmetry thesis" (AT) (i.e., that Republicans or conservatives are more vulnerable to motivated reasoning than Democrats or liberals). Gauchat sees Mooney's earlier "Republican War on Science" (RWoS) thesis -- that Reagan & the Bush Presidencies launched partisan attacks against the scientific community -- as corroborated by his data. But that actually raises the question whether RWoS and AT are consistent!
3. Some additional puzzles if one is trying to make sense of political orientations and dispositions toward science.
a. Liberals have historically "distrusted scientists" on environmental risks. It is a staple of the scientific study of public risk perceptions that "distrust" of science predicts concern over environmental risks -- most prominently, as a historical matter, nuclear waste disposal. Historically, too, the left (liberals, and in cultural theory egalitarians) have been most distrustful of scientists in connection with those issues. More evidence that "distrust of scientists" is often not what it seems -- a general distrust of scientists -- but a (weak) indicator of some general orientation toward the risk-issue du jour.
b. Moderates distrust scientists the most! Gauchat is interested, understandably, in the growing division between conservatives and liberals in the last 15 or so years. But across the entire three-decade period of the study, the group most distrustful has been self-described moderates.
Moreover, historically, more people characterized themselves as "moderates" than as either "liberals" or "conservatives." Conservatives, then, have historically been more trusting than most ordinary, non-partisan citizens.
Recently, conservatives' distrust has been increasing, and they have now basically "caught up" to moderates. And because moderates are the most "distrustful," the migration of "moderates" to "conservative" could itself be expected to increase the proportion of "conservatives" who register as "distrustful" on the GSS item.
4. What's the story with religion? It's got to be a different one.
Gauchat also finds that there is a parallel increase in distrust associated with religiosity (measured by church attendance). Of course, that religiosity would predict distrust (or lack of confidence) in scientists is not so surprising (not that I think this is inevitable!). But it isn't obvious that such distrust would have increased over this period.
Gauchat's analysis, moreover, doesn't really make it obvious to me why it occurred. I read Gauchat himself as seeing the trend associated with religion as being of a piece with -- as having the same source, essentially -- as the trend associated with conservativism and distrust of science (viz., Mooney's RWoS thesis).
But in fact, Gauchat's statistical analysis suggests that the association between religiosity and distrust of science occurred independently of the trend involving conservatism and distrust (he doesn't report any interactions between ideology and church attendance). That is, if one was a regular church goer, one became less trustful of scientists over the time period in question whether one was liberal, moderate, or conservative. Did Reagan and Bush cause liberal church goers to become anti-science too!?
I suppose the climate change controversy could be making even highly religious liberals and moderates more distrustful of science -- although in fact, I would be super surprised if this is so, since I know from my own research that highly religious egalitarians are the most concerned of all about climate change risk!
So -- I dunno what's going on. Which I don't mind so much; one can't experience the pleasure of seeing a mystery solved if one is never perplexed.
(This is an aside, but treating religion and ideology as independent variables in a model like this is arguably a bad idea, since religion and conservative ideology are probably common indicators of a latent disposition that predicts science distrust and attitudes toward environmental risks more generally. If they are, the regression estimates for each influence controlling for the other will be unreliable. I will likely post something on the vice of "over-controlling" in studies that try to identify latent dispositional influences on risk perceptions sometime! In any case, it is clear from the raw data that Gauchat's finding on conservatism is not by any means an artifact of this modeling strategy.)
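The worry in that aside can be illustrated with a toy simulation (all numbers invented): generate religiosity and conservatism as two noisy indicators of a single latent disposition that drives distrust, then regress distrust on each indicator alone and on both together.

```python
# A minimal simulation (invented numbers) of the "over-controlling" worry:
# religiosity and conservatism are modeled as two noisy indicators of one
# latent disposition; regressing the outcome on both at once splits the
# latent effect between two heavily collinear predictors.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)                          # unobserved disposition
religiosity = latent + rng.normal(scale=0.3, size=n)  # noisy indicator 1
conservatism = latent + rng.normal(scale=0.3, size=n) # noisy indicator 2
distrust = latent + rng.normal(scale=1.0, size=n)     # outcome driven by latent trait

# OLS with each indicator alone
X_r = np.column_stack([np.ones(n), religiosity])
coef_r, *_ = np.linalg.lstsq(X_r, distrust, rcond=None)

# OLS with both indicators "controlling" for each other
X = np.column_stack([np.ones(n), religiosity, conservatism])
coef_both, *_ = np.linalg.lstsq(X, distrust, rcond=None)

print(np.corrcoef(religiosity, conservatism)[0, 1])  # heavily collinear (~0.9)
print(coef_r[1])      # alone, the indicator recovers nearly the full effect
print(coef_both[1:])  # together, the effect is split between the two
```

Entered alone, each indicator picks up nearly the full latent effect; entered together, the effect is divided between two collinear predictors, so the coefficient on each understates the disposition's influence -- the sense in which "controlling" for one while estimating the other can mislead.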
* * *
As I said, thought-provoking study -- one that will make people smarter as they share their reactions to it.
Ironically, motivated reasoning itself can be found in all the blog and media reports that construe our CCP studies as empirical proof that "conservatives" are uniquely vulnerable to biased readings of empirical evidence.
I know that some researchers and informed observers hypothesize that motivated reasoning is more strongly associated with conservatism than with liberalism. I've explained (multiple times) why I am not persuaded -- but noted, too, that the issue is one that admits of empirical study by those who are intellectually curious about it.
I'm not that interested in spending my own scarce research time trying to definitively resolve the "asymmetry" question. For, as I've explained, I think that existing studies, including ours, establish very very convincingly that there is a tendency toward biased assessments of empirical evidence across the ideological spectrum (or cultural spectra), and that that problem is more than big enough to be a concern for everyone. Being persuaded of that, I myself would rather work on trying to figure out how this dynamic --which interferes with enlightened self-government and thus harms us all -- can be mitigated.
I have no quarrel with anyone who, after thoughtful and fair-minded engagement with our studies and our interpretations of them, comes to the conclusion that our findings support inferences different from the ones we make on the basis of our data. In fact, I am eager to learn from any such person.
But for the record, I very much do resent it when I am misdescribed as having drawn conclusions I have not drawn by people who have not even read our work (much less misread it because of the sort of "team sports" mentality -- & outright contempt for others-- that obviously drives reporting like this and this).
And I resent it just as much when the dumb & intolerant person doing the mischaracterizing is a conservative who is chortling over a simplistic misreading of our work that supposedly shows that people with liberal views are stupid.
But so as not to leave readers of this post with a biased sampling of the evidence about people's capacity to engage in impartial assessment of empirical evidence, there are also many, many, many, many thoughtful observers of diverse political orientations who get that the pathology of motivated reasoning doesn't discriminate on the basis of ideology.
I gave a presentation today at Harvard Business School in connection with a seminar co-taught by Richard Freeman and Vici Sato on economics of science & innovation. Got lots of great questions & reactions.
The talk (particularly toward the end) describes a "two channel communication strategy" as a device for counteracting the distorting effect of cultural cognition.
The idea is that ordinary citizens process information about policy-relevant science along two channels. The first (Channel 1) transmits the content of such science -- that is, the conclusions it supports about how the world works and how it can be made to work better. The second (Channel 2) conveys the cultural meaning of that information -- and in particular whether assenting to the validity of it coheres with a person's defining group commitments.
Science communication can be effective only if the messages transmitted on both channels mesh with one another. If the information being transmitted along Channel 2-- the meaning channel -- threatens a person's cultural identity, then various mechanisms of cultural cognition will block out receipt of the content being transmitted along Channel 1, no matter how clear that information is. If the meaning signal is culturally congenial, however, then ordinary individuals will give it open-minded consideration even if it is contrary to their culturally grounded prior beliefs.
Our study on message framing and geoengineering supplies empirical support for using the two-channel model to reduce cultural polarization over climate change science.
In the talk, I present evidence from that study, but I also connect the two-channel strategy more systematically to a general model of how cultural cognition interacts with all manner of information processing. Will likely write up a paper along those lines in near future.
For now-- slides.
Slides from today's class in my Harvard Law School criminal law course.
Presents individual-level mock juror data from Culture, Cognition, and Consent: Who Perceives What, and Why, in 'Acquaintance Rape' Cases, 158 U. Pa. L. Rev. 729 (2010) and associated jury-verdict simulations generated by Maggie Wittlin's amazing Jurysim program.
Described in her Results of Deliberation paper (which also has Stata code for the program), Jurysim makes it possible to estimate the likely verdict of a jury drawn from a particular "venire" -- i.e., a pool of prospective jurors whose demographics are specified by the user. Basically, it's a nested set of simulations--one for selecting 1000 juries, another for computing each individual juror's pre-deliberation or first-ballot vote, and then another for determining the outcome of deliberations (i.e., the verdict) given the first-ballot votes of each jury's individual members.... Yow, zoiks!
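Wittlin's program is written in Stata, but the nested-simulation logic can be sketched in a few lines of Python. Everything here -- the juror "types," their shares of the venire, their conviction probabilities, and the deliberation rule -- is invented purely for illustration:

```python
# Toy sketch of the nested-simulation idea (NOT Wittlin's actual Jurysim,
# which is written in Stata). All probabilities are hypothetical.
import random

random.seed(1)

# Step 1: specify the venire -- each juror "type" gets a population share
# and an (assumed) probability of voting to convict on the first ballot.
venire = {"type_A": (0.6, 0.7), "type_B": (0.4, 0.3)}  # share, P(convict)

def draw_jury(size=6):
    types = list(venire)
    shares = [venire[t][0] for t in types]
    return random.choices(types, weights=shares, k=size)

# Step 2: simulate each juror's pre-deliberation (first-ballot) vote.
def first_ballot(jury):
    return [random.random() < venire[t][1] for t in jury]

# Step 3: crude deliberation rule -- a lopsided first ballot carries the
# day; anything in between hangs the jury.
def verdict(votes):
    guilty = sum(votes)
    if guilty >= 5:
        return "convict"
    if guilty <= 1:
        return "acquit"
    return "hung"

results = [verdict(first_ballot(draw_jury())) for _ in range(1000)]
print({v: results.count(v) for v in ("convict", "acquit", "hung")})
```

Her actual model is far richer (individual-level predictors, a serious deliberation model); this just shows the three nested steps.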
With the benefit of MW's simulations, I'd say I know 100x more about what the data mean in Culture, Cognition & Consent and in Whose Eyes Are You Going to Believe? Scott v. Harris and the Perils of Cognitive Illiberalism, 122 Harv. L. Rev. 837 (2009), another study that MW features in her paper.
So Chris Mooney devastatingly tags the authors of the "World Government" Manifesto in Science for ignoring science -- the vast body of empirical work on effective science communication. CM criticizes the Gang of 32 (by my count) for failing to think about how the manner in which they framed their argument radiated the very egalitarian-communitarian cultural meanings that provoke suspicion and distrust of climate science on the part of a large segment of the population in the US, the UK, and other democratic nations.
I couldn't have said it better -- indeed, couldn't have said it nearly as well as CM, b/c I merely study science communication, an activity that is in fact quite different from communicating science (including the science of science communication)-- something that CM is a master of.
But I think there is something that the Gang of 32 got right, too, and I want CM & other master science communicators to make this part of their message about the Manifesto's shortcomings.... So let me try to get the point out in my own way of putting things, at which point they can do what they do (assuming they agree with me).
As I did in my initial post, I want to juxtapose the Gang of 32's World Government Manifesto with last week's Parliamentary testimony by UK scientists in support of geoengineering research. Their "frame" included one element in common with that used by the Gang of 32 -- viz., the assertion that we really need to do something radical, because incremental regulation by treaties etc. just isn't going to work.
Granted, the UK scientists were sticking to what they know: the need for & feasibility of a technological intervention to counteract climate change. Good for them.
But the geopolitical issues for their geoengineering proposal are also staggering. The UK -- or the US & UK -- can't possibly expect the world to stand by passively as they unilaterally implement technologies for self-consciously regulating the climate of the earth! Ain't gonna happen.
Thus, at the same time that natural scientists are applying their unique expertise to identify dramatic but technologically and economically feasible strategies for ameliorating the risks we face, other experts are going to have to apply their special knowledge and methods to steer us toward some pretty significant and dramatic breakthroughs in global governance. So we better get smart about that too -- about what's possible, about what sorts of things we should communicate, & how, on the need for appropriate kinds of coordination. Otherwise, the science that can help us deal w/ the problems we genuinely face will be wasted....
So sure, criticize the Gang of 32 for being naive, for lacking humility, for ironically not being very scientific in holding forth in this way (I'm sure a lot of political scientists are cringing too). But they are actually right in substance.
What their misadventure really illustrates is that enabling democratic societies to protect themselves from risk -- environmental ones, but lots of others too, e.g., those associated with terrorism and with infectious diseases -- demands the effective integration of natural science with the sciences of public administration and science communication.
That's the message that science communicators like Chris Mooney are uniquely situated to help everyone get! So get to it, CM!
And of course, I mean just keep it up, since CM & many other of today's excellent science communicators clearly do get this!
I don't want to say a lot about these -- just enough to stimulate reflection on the significance, and the meanings, of two such different proposals. Thus:
1. Which one of these proposals is more likely to "work"?
2. Which one is more "realistic"?
3. Who is likely to answer "proposal 1," who "proposal 2," to above questions -- & why?
4. If proposals like these are made a conspicuous part of public discussion, what effect is each likely to have on public perceptions of the risk of climate change and the importance of taking steps to address the risks that it poses?
Proposal 1: World Government
Human societies must now change course and steer away from critical tipping points in the Earth system that might lead to rapid and irreversible change (3). This requires fundamental reorientation and restructuring of national and international institutions toward more effective Earth system governance and planetary stewardship.
... As a general conclusion, our work indicated that incremental change (6)—the main approach since the 1972 Stockholm Conference on the Human Environment—is no longer sufficient to bring about societal change at the level and with the speed needed to mitigate and adapt to Earth system transformation. Structural change in global governance is needed, both inside and outside the UN system and involving both public and private actors.
... Such a reform of the intergovernmental system—which is at the center of the 2012 Rio Conference—will not be the only level of societal change nor the only type of action that is needed toward sustainability. Changes in the behavior of citizens, new engagement of civil society organizations, and reorientation of the private sector toward a green economy, are all crucial to achieve progress. Yet, in order for local and national action to be effective, the global institutional framework must be supportive and well designed. We propose a first set of much-needed reforms for effective Earth system governance and planetary stewardship. The 2012 Rio Conference offers an opportunity and a crucial test of whether political will exists to bring about these urgently needed changes.
Proposal 2: Techno-fix
An eminent UK engineer is suggesting building cloud-whitening towers in the Faroe Islands as a "technical fix" for warming across the Arctic.
Scientists told UK MPs this week that the possibility of a major methane release triggered by melting Arctic ice constitutes a "planetary emergency".
Wave energy pioneer Stephen Salter has shown that pumping seawater sprays into the atmosphere could cool the planet.
The Edinburgh University academic has previously suggested whitening clouds using specially-built ships....
For each of the last four years, the September minimum has seen about two-thirds of the average cover for the years 1979-2000, which is used as a baseline. The extent covered at other times of the year has also been shrinking.
What concerns some scientists more is the falling volume of ice.
Peter Wadhams, professor of ocean physics at Cambridge University, presented an analysis drawing on data and modelling from the PIOMAS ice volume project at the University of Washington in Seattle.
It suggests, he said, that Septembers could be ice-free within just a few years....
The field of implementing technical climate fixes, or geo-engineering, is full of controversy, and even those involved in researching the issue see it as a last-ditch option, a lot less desirable than constraining greenhouse gas emissions.
"Everybody working in geo-engineering hopes it won't be needed - but we fear it will be," said Prof Salter.
Depending on the size and location, Prof Salter said that in the order of 100 towers would be needed to counteract Arctic warming.
However, no funding is currently on the table for cloud-whitening. A proposal to build a prototype ship for about £20m found no takers, and currently development work is limited to the lab.
Came across this cool book on using psychology to promote environment-friendly behavior.
Some of the things that make it cool:
1. It presents behaviorally realistic synthesis of social norms, emotions, & reciprocity, on the one hand, and mechanisms of risk perception/cognition, on the other.
2. It strikes a nice balance between exposition/analysis and programmatic advice.
3. It is well written & draws on lots of interesting sources.
4. The author is distributing .pdf version for free -- a gesture that provokes motive to reciprocate by producing and sharing knowledge in turn (a big theme of the book is the potential of pro-social behavior to reproduce itself by furnishing an inspiring model).
This is the 2d installment in this series (actually, I'm negotiating w/ several companies that saw the last post & want to produce "Scientists of science communication trading cards"!)
3. Ellen Peters.
Peters, a social psychologist at the Ohio State University, is a leading scholar of risk perception. A(nother) student of Paul Slovic, Peters's specialty (I'd say) is detecting how diverse cognitive mechanisms relate to one another. E.g., she has done important studies establishing that "affect"--itself (Slovic and others show) a central element of myriad risk-perception heuristics--is a mediator of cultural worldviews, which determine the valence (positive or negative) of affective responses, thereby generating individual differences in risk perception.
Recently, Peters has been engaged in pathbreaking work on numeracy, which refers to the capacity (disposition, really) to make sense of quantitative information and engage in quantitative reasoning. The important -- indeed, startling -- insight of her work there is that numeracy and affect are complementary mental processes. That is, affect, rather than being a heuristic substitute for numeracy, is in fact a perceptive faculty calibrated by, and integral to the employment of, quantitative reasoning. High numeracy individuals, her experiments show, do not rely on affect less than low numeracy ones but rather experience it in a more reliably discerning fashion when evaluating the expected value of opportunities for gain and loss. Numeracy, it would appear, effectively "trains" affect, which thereafter operates as an efficient scout, telling a person when he or she should engage in more effortful quantitative processing; people low in numeracy are distinguished not by greater reliance on affect, but by inchoate, confused affect.
This is a very different picture, I'd say, from the (now) dominant "system 1/system 2" conception of dual process reasoning. That framework envisions a discrete and hierarchical relationship between unconscious, affective forms of reasoning (System 1) and conscious, algorithmic ones (System 2). Peters's work, in contrast, suggests that affect and numeracy are integrated and reciprocal--that each operates on the other and that together they make complementary contributions to sound decisionmaking.
Interestingly, though, people with high numeracy can also experience distinctive kinds of bias. E.g., they will rate transactions that offer a high probability of substantial gain versus a low probability of a small loss as more attractive than transactions that offer a high probability of substantial gain versus a small probability of an outcome involving no change (positive or negative) in welfare. The reason is that the contrast between a high probability of gain and small probability of loss is more affectively arousing than the contrast between high probability of gain and nothing. But you actually have to be pretty good with numbers to receive this false affective signal! In other words, there are some kinds of attractive specious inferences that presuppose fairly high quantitative reasoning capacity.
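The arithmetic behind that comparison is easy to sketch. All probabilities and payoffs below are invented; they just mirror the structure described above, not any specific stimulus from Peters's studies.

```python
# Illustrative (invented) numbers for the bias just described: the bet
# that adds a small possible loss has the LOWER expected value, yet it
# is the one that tends to be rated as more attractive.
def expected_value(outcomes):
    # outcomes: list of (probability, payoff) pairs
    return sum(p * x for p, x in outcomes)

# High probability of a substantial gain vs. low probability of a small loss
bet_with_loss = [(0.95, 9.00), (0.05, -0.05)]
# Same gain prospect vs. low probability of no change at all
bet_plain = [(0.95, 9.00), (0.05, 0.00)]

ev_loss = expected_value(bet_with_loss)
ev_plain = expected_value(bet_plain)
print(ev_loss < ev_plain)  # True: the bet with the small loss is worth less
```

The point is that the $9-versus-5¢ contrast produces a sharp affective signal, and it takes a fair amount of quantitative facility to extract -- and be misled by -- that contrast at all.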
Some key readings:
1. Peters, E. The Functions of Affect in the Construction of Preferences. in The Construction of Preference (eds. Lichtenstein, S. & Slovic, P.) 454-463 (Cambridge University Press, Cambridge; New York, 2006).
2. Peters, E., Slovic, P. & Gregory, R. The role of affect in the WTA/WTP disparity. Journal of Behavioral Decision Making 16, 309-330 (2003).
3. Peters, E., et al. Intuitive numbers guide decisions. Judgment and Decision Making 3, 619-635 (2008).
4. Peters, E., et al. Numeracy and Decision Making. Psychol Sci 17, 407-413 (2006).
5. Peters, E.M., Burraston, B. & Mertz, C.K. An Emotion-Based Model of Risk Perception and Stigma Susceptibility: Cognitive Appraisals of Emotion, Affective Reactivity, Worldviews, and Risk Perceptions in the Generation of Technological Stigma. Risk Analysis 24, 1349-1367 (2004).
6. Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004).
7. Slovic, P. & Peters, E. The importance of worldviews in risk perception. Risk Decision and Policy 3, 165-170 (1998).
8. Peters, E. & Slovic, P. Affective asynchrony and the measurement of the affective attitude component. Cognition Emotion 21, 300-329 (2007).
That's the title of a talk I gave today at Arizona State Law School & yesterday at the University of Arizona Law School.
The talk, which I gave to faculty-workshop audiences who had read They Saw a Protest, first offers an analytically precise account of how cultural cognition can defeat Bayesian updating. It then identifies how this form of cognitive decisionmaking bias generates "cognitive illiberalism," a legal and political decisionmaking bias that poses the same threat to constitutional freedoms as consciously illiberal forms of state action.
Probably will write this up as short paper. For now--slides here.
Now have written a paper that develops this position.
reposted from Talkingclimate.org
I’m going to resist the academic’s instinct to start with a long, abstract discussion of “cultural cognition” and the theory behind it. Instead, I’m going to launch straight into a practical argument based on this line of research. My hope is that the argument will give you a glimpse of the essentials—and an appetite for delving further.
The argument has to do with the contribution that misinformation makes to the dispute over climate change. I want to suggest that the normal account of this is wrong.
The normal account envisions, in effect, that the dispute is fueled by an external force—economic interest groups, say—inundating a credulous public with inaccurate claims about risk.
I would turn this account more or less on its head: the climate change dispute, I want to argue, is fueled by a motivated public whose (unconscious) desire to form certain perceptions of risk makes it possible (and profitable) to misinform them.
As evidence, consider an experiment that my colleagues at the Cultural Cognition Project and I did.
In it, we asked the participants (a representative sample of 1500 U.S. adults) to examine the credentials of three scientists and tell us whether they were “knowledgeable and credible experts” about one or another risk—including climate change, disposal of nuclear wastes, and laws allowing citizens to carry concealed weapons in public. Each of the scientists (they were fictional; we told subjects that after the study) had a Ph.D. in a seemingly relevant field, was on the faculty of an elite university, and was identified as a member of the National Academy of Sciences.

Whether study subjects deemed the featured scientists to be “experts,” it turned out, was strongly predicted by two things: the position we attributed to the scientists (in short book excerpts); and the cultural group membership of the subject making the determination.
Where the featured scientist was depicted as taking what we called the “high risk” position on climate change (it’s happening, is caused by humans, will have bad consequences, etc.) he was readily credited as an “expert” by subjects with egalitarian and communitarian cultural values, a group that generally sees environmental risks as high, but not by subjects with hierarchical and individualistic values, a group that generally sees environmental risks as low. However, the positions of these groups shifted—hierarchical individualists more readily saw the same scientist as an “expert,” while egalitarian communitarians did not—when he was depicted as taking a “low risk” position (climate change is uncertain, models are unreliable, more research is necessary).
The same thing, moreover, happened with respect to the scientists who had written books about nuclear power and about gun control: subjects were much more likely to deem the scientist an “expert” when he advanced the risk position that predominated in the subjects’ respective cultural groups than when he took the contrary position.
This result reflects a phenomenon known as “motivated cognition.” People are said to be displaying this bias when they unconsciously fit their understandings of information (whether scientific data, arguments, or even sense impressions) to some goal or end extrinsic to forming an accurate answer.
The interest or goal here was the stake study subjects had in maintaining a sense of connection and solidarity with their cultural groups. Hence, the label cultural cognition, which refers to the tendency of individuals to form perceptions of risk that promote the status of their groups and their own standing within them.
Cultural cognition generates my unconventional “motivated public” model of misinformation. The subjects in our study weren’t pushed around by any external misinformation provider. Furnished the same information, they sorted themselves into the patterns that characterize public divisions we see on climate change.
This kind of self-generated biased sampling—the tendency to count a scientist as an “expert” when he takes the position that fits one’s group values but not otherwise—would over time be capable all by itself of generating a state of radical cultural polarization over what “expert scientific consensus” is on issues like climate change, nuclear power, and gun control.
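This polarization dynamic can be made concrete with a toy simulation (my own illustrative sketch, not a model from the study; the acceptance probabilities and the 80% figure are made-up assumptions). Suppose 80% of a large pool of scientists take the "high risk" position, and members of each cultural group are more likely to count a scientist as an "expert" when his position fits their group's predominant view:

```python
import random

random.seed(1)

def perceived_high_risk_share(p_accept_congenial_high, p_accept_low,
                              n_experts=5000, true_high_risk_share=0.8):
    """Simulate one agent encountering experts drawn from a pool in which
    80% take the high-risk position, but counting each one as a credible
    'expert' with a probability that depends on the expert's position.
    Returns the share of *counted* experts taking the high-risk position,
    i.e. the agent's perceived expert consensus."""
    counted_high = counted_low = 0
    for _ in range(n_experts):
        says_high_risk = random.random() < true_high_risk_share
        p_accept = p_accept_congenial_high if says_high_risk else p_accept_low
        if random.random() < p_accept:
            if says_high_risk:
                counted_high += 1
            else:
                counted_low += 1
    return counted_high / (counted_high + counted_low)

# Egalitarian communitarians: the high-risk position is congenial.
ec_view = perceived_high_risk_share(0.9, 0.2)
# Hierarchical individualists: the low-risk position is congenial.
hi_view = perceived_high_risk_share(0.2, 0.9)
print(ec_view, hi_view)
```

From an identical pool in which 80% of scientists take the high-risk position, the first group ends up perceiving an expert consensus of roughly 95% while the second perceives something under 50%--polarization over what "expert consensus" is, without any external misinformer in the picture.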
In this environment, does the deliberate furnishing of misinformation add anything? Certainly.
But the desire of the public to form culturally congenial beliefs supplies one of the main incentives for furnishing them with misleading information. To protect their cultural identities, individuals more readily seek out information that supports than information that challenges the beliefs that predominate in their group. The motivated public’s desire for misinformation thus makes it profitable to become a professional misinformer—whether in the media or in the world of public advocacy.
Other actors will have their own economic interest in furnishing misinformation. How effective their efforts will be, however, will still depend largely on how culturally motivated people are to accept their message. If this weren’t so, the impact of the prodigious efforts of commercial entities to convince people that climate change is a hoax, that nuclear power is safe, and that concealed-carry laws reduce crime would wear away the cultural divisions on these issues.
The reason that individuals with different values are motivated to form opposing positions on these issues is the symbolic association of them with competing groups. But that association can be created just as readily by accurate information as by misinformation if authority figures identified with only one group end up playing a disproportionate role in communicating it.
One can’t expect to win an “information war of attrition” in an environment like this. Accurate information will simply bounce off the side that is motivated to resist it.
So am I saying, then, that things are hopeless? No, far from it.
But the only way to devise remedies for these pathologies is to start with an accurate understanding of why they occur.
The study of cultural cognition shows that the conventional view of misinformation (external source, credulous public) is inaccurate because it fails to appreciate how much more likely misinformation is to occur and to matter when scientific knowledge becomes entangled in antagonistic cultural meanings.
How to free science from such entanglements is something that the study of cultural cognition can help us to figure out too.
I hope you are now interested in knowing how -- and in just knowing more!
Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) 725-760 (Springer London, Limited, 2012).
Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).
First, some common sense:
Let's assume self-interest explains the formation of beliefs about climate change by ordinary members of the public (I'm very happy to do that). In that case, we should expect the impact of individuals' economic stake in climate change and proposed climate change policies on their climate change risk perceptions to be 0.00, and the impact of cultural identity to be [some arbitrarily large number].
What the ordinary member of the public believes about climate change won't have any impact on the threat it poses to the environment or on the policies society adopts to repel that threat. The same is true about how he or she votes in democratic elections or behaves as a consumer. As an individual, he or she just isn't consequential enough to matter.
Accordingly, there is no reason to expect much if any correlation between, say, economic class and climate change risk perception.
In contrast, what an ordinary individual believes and says about climate change can have a huge impact on her interactions with her peers. If a professor on the faculty of a liberal university in Cambridge, Massachusetts starts saying "climate change is ridiculous," he or she can count on being ostracized and vilified by others in the academic community. If the barber in some town in South Carolina's 4th congressional district insists to his friends & neighbors that they really should believe the NAS on climate change, he will probably find himself twiddling his thumbs rather than cutting hair.
It's in people's self-interest to form beliefs that connect rather than estrange them from those whose good opinion they depend on (economically, emotionally, and otherwise). As a result, we should expect individuals' cultural outlooks to have a very substantial impact on their climate change risk perceptions.
(For elaboration of this argument, see CCP working paper No. 89, Tragedy of the Risk Perceptions Commons.)
Second, some data:
I have constructed some regression models to examine the impact of household income (hh_income) and cultural worldviews (hfac for hierarchy and ifac for individualism) on climate change risk perceptions (z_GWRISK; for explanation of that measure, see here). The data come from a nationally representative survey of 1500 US adults conducted by the Cultural Cognition Project with a grant from the National Science Foundation. To see the regression outputs, click on the thumbnail to the right.
The analyses show, first, that differences in income have a very small negative impact on climate change risk perceptions (B = -0.07, p < 0.01) when income is considered on its own (Model 1).
Second, the analyses show that cultural worldviews have a very large impact -- a typical egalitarian communitarian and a typical hierarchical individualist are separated by about 1.6 standard deviations on the risk perception measure -- controlling for income (Model 2). When cultural worldviews are controlled for, income turns out to have an effect that is practically nil (B = -0.02, p = 0.56).
But wait: the third thing the analyses show is that income does have a modest effect -- one that is conditional on survey respondents' cultural worldviews. As they become wealthier, egalitarian communitarians become slightly more concerned about climate change, while hierarchical individualists become less so (Model 3).
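For readers who want to see the structure of the three models, here is a sketch in Python. The variable names (hh_income, hfac, ifac, z_GWRISK) come from the post, but the data below are synthetic stand-ins -- the fabricated coefficients are assumptions chosen only to mimic the reported pattern, not the real CCP survey estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1500

# Synthetic stand-ins for the survey variables (all standardized).
hh_income = rng.standard_normal(n)   # household income
hfac = rng.standard_normal(n)        # hierarchy worldview score
ifac = rng.standard_normal(n)        # individualism worldview score

# Fabricated outcome built to mimic the reported pattern: worldviews
# dominate, and income matters only via its interaction with worldviews.
z_gwrisk = (-0.6 * hfac - 0.6 * ifac
            - 0.1 * hh_income * (hfac + ifac)
            + 0.5 * rng.standard_normal(n))

def ols(y, *predictors):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b1 = ols(z_gwrisk, hh_income)                    # Model 1: income alone
b2 = ols(z_gwrisk, hh_income, hfac, ifac)        # Model 2: + worldviews
b3 = ols(z_gwrisk, hh_income, hfac, ifac,        # Model 3: + income x
         hh_income * hfac, hh_income * ifac)     #   worldview interactions

print("Model 2 income coef:", b2[1])             # near zero
print("Model 2 hierarchy coef:", b2[2])          # large negative
print("Model 3 income x hierarchy coef:", b3[4]) # negative interaction
```

Because the data here are fabricated, Model 1's income coefficient comes out near zero rather than the reported -0.07; the point is only how the models are specified -- in particular, how Model 3 adds income-by-worldview interaction terms to capture the conditional effect described above.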
Bottom line: economic self-interest doesn't matter; cultural identity self-interest does.