Saturday
Apr072012

Another cool book: van Rijswoud, Public faces of science

Found another really great book on-line:

Erwin van Rijswoud, Public faces of science: Experts and identity work in the boundary zone of science, policy and public debate (Radboud University Nijmegen, 2012).

It's actually van Rijswoud's doctoral dissertation.

But anyway, the work examines Dutch scientists' impressions of how their work and expertise were received in various public policy debates, including ones on H1N1 vaccination, flood control, and HPV vaccination of adolescent girls.

The analyses are based on "biographical narrative." At the beginning of the work, he explains this method, which involves analytically motivated synthesis of interviews with the scientists, supplemented with other materials, and presented in a form that uses story-telling elements not typical at all for social science work (unlike typical ethnography, the voice is much more internal, almost "first person"). 

I was really interested in vR's discussion of HPV, an issue the CCP group has also studied. I hadn't realized that the issue was controversial in the Netherlands, too (likely I should be embarrassed to say that). I did know that England didn't have any trouble implementing a national immunization program, so there are definitely some great lessons to be learned through comparative study.

Also hadn't realized that there was political dispute over expert flood control advice in the Netherlands. Actually, efficient flood management in Holland & other regions of the country is often offered as an example of what the successful integration of science into policymaking is supposed to look like!

Thanks to van Rijswoud & Radboud University for making his work widely available & at no charge!

Friday
Apr062012

What does the Trayvon Martin case mean? What *should* it mean? part 1

If one were to judge from the media coverage—the dueling depictions of the characters of the shooter and his victim; the minute dissections of fragmentary witness statements; the “expert” voice-identification of screams picked up in the background of a 911 call; the high-resolution scrutiny of low-resolution video footage of the shooter in police custody that reveals the existence/absence of telltale wounds—one would think that the significance of the Trayvon Martin case turns (or ultimately will turn) decisively on the facts.

In actuality, the opposite is true: the significance we attach to the case will determine our perception of the facts; and because what it signifies turns on cultural meanings that divide our society, the members of different groups will form highly opposed understandings of what happened that terrible night.

Does that mean it’s pointless to be discussing the case?

On the contrary. In my view, the public agitation the case has provoked is evidence of how important it is for us to have a public conversation about the diversity of our cultural outlooks and their relation to law, and that this case is an ideal occasion for addressing that issue.

But if we insist that the discussion take the form of competing, culturally partial (and even culturally partisan) renditions of the facts, we are highly unlikely to engage the real issues in a universally meaningful way. And in that circumstance, we can be sure that the sources of agitation will persist.

I have more to say than it makes sense to put in one post.  So regard this as installment 1 of 3.

1. Meanings are cognitively prior to fact

The Trayvon Martin case, polls unsurprisingly reveal, divides people along cultural lines.

In this sense, it is very much like a host of other high-profile types of cases: public altercations leading to a mixed-race killing (think Bernard Goetz and Howard Beach); the slaying (or mutilation; think Lorena Bobbitt) of sleeping men by female partners who allege chronic abuse; the prosecutions (William Kennedy Smith)—or not (Duke lacrosse)—of men alleged to have disregarded women's verbal resistance to sexual intercourse; forceful arrests of political protestors (Occupy Wall Street; Operation Rescue) pepper sprayed by police—or of fleeing drivers whose bodies are broken by the impact of their crashing cars (Scott v. Harris) or the fusillade of baton blows of their pursuers (Rodney King).

CCP has conducted experimental studies of cases like these. What we have found, in all of these contexts, is that people unconsciously form perceptions of fact that reflect their stance on the cultural meanings the cases convey.

Those committed to norms of honor and self-reliance, on the one hand, and those who value equality and collective concern, on the other; those who believe women warrant esteem for mastery of traditionally female domestic roles and those who believe women as well as men should be conferred status for success in civil society; those who place a premium on respect for authority and those who apprehend the abuse of it as a paramount evil—all see different things in these types of cases, even when they are forming their perceptions on the basis of the same evidence.

Moreover, members of all these groups know that what one sees (or claims to see; each group always suspects the other of disingenuousness) depends on who one is culturally speaking.

As a result, in controversies over these sorts of cases, those on both sides come to view competing factual claims as markers of opposing allegiances. The ultimate resolution of these facts in courts of law, in turn, becomes evidence of who counts and who doesn’t in our society.

These are identity-threatening conditions. It is the extreme anxiety that they provoke that explains how despite knowing next to nothing about what actually happened—because we have nothing more to go on than factual snippets embroidered with righteous denunciation in the media, or antiseptic renditions of the “facts of the case” in appellate reporters—we nevertheless become filled with passionate certitude about the events. The discovery that others disagree with us fills us with incredulity and rage.

And most extraordinary of all, this same environment of symbolic status competition explains why such disagreement persists in the face of the most compelling forms of evidence of all. Even when we literally see the events with our own eyes—as we do when they are recorded on video, e.g.—cultural cognition assures that we will disagree about what we are seeing.

We will disagree, in such instances, with those who hold values different from ours when we watch what we understand to be the same event.

Moreover, we will disagree with those who share our values if, as a result of a hidden experimental manipulation, we start with different impressions of the sort of event (abortion-clinic protest, or anti-war protest) we are watching.

Barely detectable above the cacophony in the Trayvon Martin case are a few lonely voices cautioning us not to jump to conclusions. We don’t really know enough about what happened, they rightly point out, to form such strong opinions.

But the truth is, we’ll never know what happened, because we—the members of our culturally pluralistic society—have radically different understandings of what a case like this means.

The questions are whether it makes sense to talk about that, and if so, what should we be saying?

References

Dan M. Kahan & Donald Braman, The Self-defensive Cognition of Self-defense, 45 Am. Crim. L. Rev. 1 (2008).

Dan M. Kahan, The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law, 126 Harv. L. Rev. 1 (2011).

Dan M. Kahan, Culture, Cognition, and Consent: Who Perceives What, and Why, in 'Acquaintance Rape' Cases, 158 U. Pa. L. Rev. 729 (2010).

Dan M. Kahan, David A. Hoffman, Donald Braman, Danieli Evans & Jeffrey J. Rachlinski, They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction, 64 Stan. L. Rev. (forthcoming 2012).

Mark Kelman, Reasonable Evidence of Reasonableness, 17 Critical Inquiry 798-817 (1991).

Tuesday
Apr032012

Cultural theory of risk: it's not just about clean air & water

It's remarkable and heartening to see how widespread the influence of the cultural theory of risk has become. 

Here are three recent examples of articles that assess the importance of cultural predispositions for risk and science communication, none of which is about traditional environmental concerns:

  1. Griffiths, M. & Brooks, D.J. Informing Security Through Cultural Cognition: The Influence of Cultural Bias on Operational Security. Journal of Applied Security Research 7, 218-238 (2012).

    Cultural bias will influence risk perceptions and may breed “security complacency,” resulting in the decay of risk mitigation efficacy. Cultural Cognition theory provides a methodology to define how people perceive risks in a grid/group typology. In this study, the cultural perceptions of Healthcare professionals to access control measures were investigated. Collected data were analyzed for significant differences and presented on spatial maps. The results demonstrated correlation between cultural worldviews and perceptions of security risks, indicating that respondents had selected their risk perceptions according to their cultural adherence. Such understanding leads to improved risk management and reduced decay of mitigation strategies.

     
  2. Daniel J. Decker, W.F.S., Darrick T. N. Evensen, Richard C. Stedman, Katherine A. McComas, Margaret A. Wild, Kevin T. Castle, and Kirsten M. Leong. Public perceptions of wildlife-associated disease: risk communication matters. Human Wildlife Interactions 6, 112–122 (2012).

    Wildlife professionals working at the interface where conflicts arise between people and wild animals have an exceptional responsibility in the long-term interest of sustaining society’s support for wildlife and its conservation by resolving human–wildlife conflicts so that people continue to view wildlife as a valued resource. The challenge of understanding and responding to people’s concerns about wildlife is particularly acute in situations involving wildlife-associated disease and may be addressed through One Health communication. Two important questions arise in this work: (1) how will people react to the message that human health and wildlife health are linked?; and (2) will wildlife-associated disease foster negative attitudes about wildlife as reservoirs, vectors, or carriers of disease harmful to humans? The answers to these questions will depend in part on whether wildlife professionals successfully manage wildlife disease and communicate the associated risks in a way that promotes societal advocacy for healthy wildlife rather than calls for eliminating wildlife because they are viewed as disease-carrying pests. This work requires great care in both formal and informal communication. We focus on risk perception, and we briefly discuss guidance available for risk communication, including formation of key messages and the importance of word choices.

     
  3. Kaklauskas, A., et al. Passive house model for quantitative and qualitative analyses and its intelligent system. Energy and Buildings (in press), on-line publication available at http://dx.doi.org/10.1016/j.enbuild.2012.03.008.

    The passive house, along with models of its composite parts, has been developed globally. Simulation tools analyze its energy use, comfort, micro-climate, quality of life and aesthetics as well as its technical, economic, legal/regulatory, educational and innovative aspects. Meanwhile the social, cultural, ethical, psychological, emotional, religious and ethnic aspects operating over the course of the existence of a passive house are given minimal attention or are ignored entirely. However, all the aspects mentioned must be analyzed in an integrated manner during the time a passive house is in existence. The authors of this article implemented this goal while they participated in two Intelligent Energy Europe programs, the Northpass and the DES-EDU projects. The Passive house model for quantitative and qualitative analyses and its intelligent system was developed during the time of these projects. The model and intelligent system are briefly described in this article, which ends with a case study.

Sunday
Apr012012

The only thing that bothers me about this: I'd *never* write a 3-paragraph abstract

from Legal Theory Blog (April 1, 2012)...

 

Kahan on Cultural Metacognition
 

Dan Kahan (Yale Law School, Cultural Cognition Project) has posted Cultural Metacognition on SSRN. Here is the abstract:

    My concern in this Article is to explain the epistemic origins of theoretical disagreement in the study of law. Scholars who agree that the proper object of legal theory is to provide a correct account of the normative and positive foundations of law are still likely to disagree—intensely—about what theories will best achieve these ends. Does fairness or welfare best capture the normative point of law? Are judicial decisions best explained by the strategic interactions of legal officials (e.g., judges, presidents, senators) or are they explained by the norms of legal institutions and the explicit content of legally authoritative texts? Is the effect of tort law best predicted by neoclassical economics or by behavioral economic models? Disagreement about the correct answers to these questions is pervasive among legal theorists. 

    At first glance, it might seem that such disagreement doesn’t really require much explanation. Theoretical disagreements might be the result of incomplete evidence and the relatively early stage of development of relevant disciplines. As evidence accumulates and theories are refined, we might expect convergence in legal theory. But it turns out that this picture is as simplistic as it is intuitively attractive. Theoretical beliefs on seemingly unconnected subjects (the adequacy of rational actor models in predicting the effect of tort rules and the question whether preference-satisfaction provides ultimate value standards) tend to cohere in familiar ways. Patterns like this do not occur by chance. Instead, they are explained by what I call "cultural metacognition"--the systematic operation of cultural commitments at the metacognitive (or "theoretical") level. 

    This Article then develops an important application of the theory of cultural metacognition: metacognitive beliefs are themselves the product of cultural cognition. The Article reports the results of a pilot study that investigates the relationship between cultural evaluation of metatheoretical frameworks (or "meta-archetypes") and second order theoretical beliefs (beliefs about the truth or soundness of first order theory statements). The research reveals that relevant cultural differences between two distinct institutionally-structured micro-communities (one clustered in southern Massachusetts and other clustered in southern Connecticut) explain differences in the acceptance of cultural cognition as a first-order theoretical framework. The broad implications of this result for legal theory and metatheory are then explored.

Thursday
Mar292012

Trend in conservative distrust of scientists: what does it mean? 

So I was lucky enough to have a person who was curious to know what I thought draw my attention to Gordon Gauchat "Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010," published on-line today in the American Sociological Review.

Gauchat analyzes 35 yrs of responses to the General Social Survey item that measures how much "confidence" the public has in "the scientific community" and finds that the spread between liberals and conservatives has been widening in the last 15 years or so.  Indeed, before that, there really wasn't any gap to speak of.

Gauchat had to make some judgment calls about how to carve up his data: e.g., whether & how to aggregate responses to the GSS item (which uses a crappy three-point response measure: "great deal of confidence," "only some" or "hardly any"); how to deal with the shifting proportion of respondents identifying as "liberal" or "conservative" over the time period; whether & how to try to break the data up into discrete time periods in order to assess trends (I suspect people who do time series work might take issue with his strategy); and what variables to include as "controls" in multivariate regressions.

But I think it's clear that the trend he points to is there. And that it's interesting -- indeed, thought provoking.

Here are some thoughts the paper has provoked in me:

1. A tale of two trends. The trend that Gauchat identifies looks pretty similar to the one that public opinion surveys identify in views on climate change. That issue started to polarize people on political/ideological lines sometime close to when conservatives and liberals started to disagree on the GSS "confidence" or "trust in science" item. Compare Gauchat's Figure 1 (which I've cropped at around the point when the trend he identifies starts; the uncropped Figure is in the inset to the right) with a couple of Figures that I've taken from Dunlap, R.E. & McCright, A.M. A Widening Gap: Republican and Democratic Views on Climate Change. Environment 50, 26-35 (2008), who summarize Gallup polling on climate change during this period:

 

2. Three possible meanings. I'm conjecturing, of course, but I suspect that these two trends are in fact linked.  Whether they are is something that would have to be assessed with more evidence, of course. And even more important, such assessing would have to be informed by some sort of hypothesis about what the link consists in. Here are three possibilities:

a.  The "confidence" item doesn't mean what it says -- it means "how do you feel about climate change?" One possibility is that the political polarization on responses to the GSS item that started in the 1990s is just an indirect measure of the politicization of climate change. That is, as climate change became more salient as a partisan issue, the question "how much confidence do you have in the scientific community" started to bear a politicized resonance that generated the same pattern of responses. On this view, "how confident are you in scientists" is essentially just an indicator of a latent attitude toward climate change. It's also a relatively weak indicator: it doesn't provoke as much division, in fact, as the climate change issues (in Gauchat's Figure, the y-axis is the fraction of conservatives or liberals who selected "great deal of confidence" vs. "only some" or "hardly any" combined).

If conservatives (or a significant number of them) are translating the question "do you trust scientists" into the question "what do you think about climate change," moreover, then the answer isn't a very reliable indicator of how conservatives feel about scientists in general or in nonpoliticized settings.
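For concreteness, here's a toy sketch of the dichotomization just described -- collapsing the three-point item into "great deal" vs. everything else combined, and computing the fraction per ideology-year cell. (The records below are invented for illustration; they are not GSS data.)

```python
from collections import defaultdict

# Hypothetical GSS-style records: (year, ideology, response), where response
# is the three-point confidence item. All values here are made up.
records = [
    (1998, "conservative", "great deal"), (1998, "conservative", "only some"),
    (1998, "liberal", "great deal"), (1998, "liberal", "great deal"),
    (2008, "conservative", "only some"), (2008, "conservative", "hardly any"),
    (2008, "liberal", "great deal"), (2008, "liberal", "only some"),
]

def share_great_deal(records):
    """Dichotomize: 'great deal' vs. ('only some' + 'hardly any') combined,
    then compute the fraction choosing 'great deal' per (year, ideology) cell."""
    counts = defaultdict(lambda: [0, 0])  # (year, ideology) -> [great-deal, total]
    for year, ideology, response in records:
        cell = counts[(year, ideology)]
        cell[0] += response == "great deal"
        cell[1] += 1
    return {cell: g / n for cell, (g, n) in counts.items()}

shares = share_great_deal(records)
```

With real data, each cell's fraction is what gets plotted against time for each ideology group.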

b. The item means what it says -- and measures the cost that climate change has imposed on the credibility of scientists with conservatives. Alternatively, conservatives are answering the question they are being asked -- and the thing that has caused them to become less trustful generally of science is the climate change controversy. That would be very sad.

c.  The item means what it says -- and is the source of climate change politicization. The final possible explanation for the linked trends (or the final one I can think of right now) is that the GSS item measures a genuine and growing distrust of scientists among conservatives, and that growing distrust is itself what caused conservatives to become distrustful of climate change science in the mid to late 1990s.

That strikes me as the least plausible explanation, actually. Why did conservatives just happen to get distrustful of scientists at that very moment?  

Indeed, Gauchat's study would have lent more support to the hypothesis that some dispositional distrust of science is the cause of conservative resistance to climate-change science if he had found  that conservatives distrusted scientists well before evidence of climate change started to accumulate. Because conservatives weren't more distrustful of scientists than liberals before the mid 1990s, his data actually undercut the assertion that conservatism is associated with anti-science or closed-minded reasoning styles.

Or so it seems to me; am eager to see how others react. Particularly Chris Mooney, a thoughtful proponent of the "asymmetry thesis" (AT) (i.e., that Republicans or conservatives are more vulnerable to motivated reasoning than Democrats or liberals).  Gauchat sees Mooney's earlier "Republican War on Science" (RWoS) thesis -- that Reagan & the Bush Presidencies launched partisan attacks against the scientific community -- as corroborated by his data. But that actually raises the question whether RWoS and AT are consistent! 

3. Some additional puzzles if one is trying to make sense of political orientations and dispositions toward science.

a. Liberals have historically "distrusted scientists" on environmental risks. It is a staple of the scientific study of public risk  perceptions that "distrust" of science predicts concern over environmental risks -- most prominently, as a historical matter, nuclear waste disposal. Historically, too, the left (liberals, and in cultural theory egalitarians) have been most distrustful of scientists in connection with those issues. More evidence that "distrust of scientists"  is often not what it seems -- a general distrust of scientists -- but a (weak) indicator of some general orientation toward the risk-issue du jour.

b. Moderates distrust scientists the most! Gauchat is interested, understandably, in the growing division between conservatives and liberals in the last 15 or so years. But across the entire three-decade period of the study, the group most distrustful has been self-described moderates.

Moreover, historically, more people characterized themselves as "moderates" than as either "liberals" or "conservatives." Conservatives, then, have historically been more trusting than most ordinary, non-partisan citizens.

Recently, conservatives' distrust has been increasing and has now basically "caught up" to moderates'. And because moderates are the most "distrustful," the migration of "moderates" to "conservative" could be expected to increase the proportion of "conservatives" who are "distrustful" on the GSS item.

4. What's the story with religion? It's got to be a different one. 

Gauchat also finds that there is a parallel increase in distrust associated with religiosity (measured by church attendance). Of course, that religiosity would predict distrust (or lack of confidence) in scientists is not so surprising (not that I think this is inevitable!).  But it isn't obvious that such distrust would have increased over this period.

Gauchat's analysis, moreover, doesn't really make it obvious to me why it occurred. I read Gauchat himself as seeing the trend associated with religion as being of a piece with -- as having the same source, essentially, as -- the trend associated with conservatism and distrust of science (viz., Mooney's RWoS thesis).

But in fact, Gauchat's statistical analysis suggests that the association between religiosity and distrust of science occurred independently of the trend involving conservatism and distrust (he doesn't report any interactions between ideology and church attendance). That is, if one was a regular church goer, one became less trustful of scientists over the time period in question whether one was liberal, moderate, or conservative. Did Reagan and Bush cause liberal church goers to become anti-science too!? 

I suppose the climate change controversy could be making even highly religious liberals and moderates more distrustful of science -- although in fact, I would be super surprised if this is so, since I know from my own research that highly religious egalitarians are the most concerned of all about climate change risk!

So -- I dunno what's going on. Which I don't mind so much; one can't experience the pleasure of seeing a mystery solved if one is never perplexed.

(This is an aside, but treating religion and ideology as independent variables in a model like this is arguably a bad idea, since religion and conservative ideology are probably common indicators of a latent disposition that predicts science distrust and attitudes toward environmental risks more generally. If they are, the regression estimates for each influence controlling for the other will be unreliable. I will likely post something on the vice of "over-controlling" in studies that try to identify latent dispositional influences on risk perceptions sometime! In any case, it is clear from the raw data that Gauchat's finding on conservatism is not by any means an artifact of this modeling strategy.)
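To see the worry concretely, here's a quick simulation (entirely my own invention, not Gauchat's model): when two observed predictors are both noisy indicators of one latent disposition, they end up highly correlated, so either one alone predicts the outcome strongly -- but entered together, each coefficient gets diluted and unstable.

```python
import numpy as np

# Sketch: religiosity & conservatism as noisy indicators of one latent
# disposition that also drives distrust. All parameters are illustrative.
rng = np.random.default_rng(0)
n = 2000
latent = rng.normal(size=n)                    # unobserved disposition
religiosity = latent + 0.3 * rng.normal(size=n)
conservatism = latent + 0.3 * rng.normal(size=n)
distrust = latent + 0.5 * rng.normal(size=n)   # outcome driven by latent trait

def ols_coefs(predictors, y):
    """Least-squares slopes (intercept dropped) for a list of predictor arrays."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

b_alone = ols_coefs([religiosity], distrust)[0]               # strong on its own
b_together = ols_coefs([religiosity, conservatism], distrust)  # each slope diluted
r = np.corrcoef(religiosity, conservatism)[0, 1]               # high collinearity
```

The point isn't that either estimate is "wrong" -- it's that "controlling for" one common indicator of a latent disposition when estimating the other can badly understate (and destabilize) the disposition's apparent influence.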

* * * 

As I said, thought-provoking study -- one that will make people smarter as they share their reactions to it.

Nice work!

 

Saturday
Mar242012

Empirical evidence that liberals misconstrue empirical evidence to suit their ideology

It can be found in all the blog and media reports that construe our CCP studies as empirical proof that "conservatives" are uniquely vulnerable to biased readings of empirical evidence.

I know that some researchers and informed observers hypothesize that motivated reasoning is more strongly associated with conservatism than with liberalism.  I've explained (multiple times) why I am not persuaded -- but noted, too, that the issue is one that admits of empirical study by those who are intellectually curious about it.

I'm not that interested in spending my own scarce research time trying to definitively resolve the "asymmetry" question. For, as I've explained, I think that existing studies, including ours, establish very very convincingly that there is a tendency toward biased assessments of empirical evidence across the ideological spectrum (or cultural spectra), and that that problem is more than big enough to be a concern for everyone. Being persuaded of that, I myself would rather work on trying to figure out how this dynamic --which interferes with enlightened self-government and thus harms us all -- can be mitigated.

I have no quarrel with anyone who, after thoughtful and fair-minded engagement with our studies and our interpretations of them, comes to the conclusion that our findings support inferences different from the ones we make on the basis of our data. In fact, I am eager to learn from any such person. 

But for the record, I very much do resent it when I am misdescribed as having drawn conclusions I have not drawn by people who have not even read our work (much less misread it because of the sort of "team sports" mentality -- & outright contempt for others-- that obviously drives reporting like this and this).

And I resent it just as much when the dumb & intolerant person doing the mischaracterizing is a conservative who is chortling over a simplistic misreading of our work that supposedly shows that people with liberal views are stupid.

But so as not to leave readers of this post with a biased sampling of the evidence about people's capacity to engage in impartial assessment of empirical evidence, there are also many, many, many, many thoughtful observers of diverse political orientations who get that the pathology of motivated reasoning doesn't discriminate on the basis of ideology.

Friday
Mar232012

Two channel solution to the science communication problem (slide show)

I gave a presentation today at Harvard Business School in connection with a seminar co-taught by Richard Freeman and Vicki Sato on the economics of science & innovation. Got lots of great questions & reactions.

The talk (particularly toward the end) describes a "two channel communication strategy" as a device for counteracting the distorting effect of cultural cognition.

The idea is that ordinary citizens process information about policy-relevant science along two channels. The first  (Channel 1) transmits the content of such science -- that is, the conclusions it supports about how the world works and how it can be made to work better. The second (Channel 2) conveys the cultural meaning of that information -- and in particular whether assenting to the validity of it coheres with a person's defining group commitments.

Science communication can be effective only if the messages transmitted on both channels mesh with one another. If the information being transmitted along Channel 2-- the meaning channel -- threatens a person's cultural identity, then various mechanisms of cultural cognition will block out receipt of the content being transmitted along Channel 1, no matter how clear that information is. If the meaning signal is culturally congenial, however, then ordinary individuals will give it open-minded consideration even if it is contrary to their culturally grounded prior beliefs.

Our study on message framing and geoengineering supplies empirical support for using the two-channel model to reduce cultural polarization over climate change science.

In the talk, I present evidence from that study, but I also connect the two-channel strategy more systematically to a general model of how cultural cognition interacts with all manner of information processing.  Will likely write up a paper along those lines in near future.

For now-- slides

Tuesday
Mar202012

Data-driven simulation of jurors & *juries* in acquaintance rape case

Slides from today's class in my Harvard Law School criminal law course.

 Presents individual-level mock juror data from Culture, Cognition, and Consent: Who Perceives What, and Why, in 'Acquaintance Rape' Cases, 158 U. Pa. L. Rev. 729 (2010) and associated jury-verdict simulations generated by Maggie Wittlin's amazing Jurysim program.

Described in her Results of Deliberation paper (which also has Stata code for the program), Jurysim makes it possible to estimate verdict likelihoods for a jury drawn from a particular "venire" -- i.e., a pool of prospective jurors whose demographics are specified by the user. Basically, it's a nested set of simulations -- one for selecting 1,000 juries, another for computing each individual juror's pre-deliberation or first-ballot vote, and then another for determining the outcome of deliberations (i.e., the verdict) given the first-ballot votes of each jury's individual members.... Yow, zoiks!
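For the flavor of it, here's a toy version of that nested structure in Python. This is my own drastic simplification, not Wittlin's actual program: the group labels, vote probabilities, and the crude "first-ballot majority prevails" deliberation rule are all stand-ins.

```python
import random

def simulate_verdicts(venire, p_convict, n_juries=1000, jury_size=6, seed=1):
    """venire: {group: share of the jury pool};
    p_convict: {group: Pr(first-ballot guilty vote)} -- stipulated here,
    whereas the real Jurysim derives juror votes from fitted model estimates.
    Deliberation rule: first-ballot majority prevails (a placeholder)."""
    rng = random.Random(seed)
    groups, weights = zip(*venire.items())
    convictions = 0
    for _ in range(n_juries):
        # simulation 1: draw a jury from the venire
        jurors = rng.choices(groups, weights=weights, k=jury_size)
        # simulation 2: each juror's pre-deliberation (first-ballot) vote
        votes = sum(rng.random() < p_convict[g] for g in jurors)
        # simulation 3: "deliberation" -- majority converts to the verdict
        convictions += votes * 2 > jury_size
    return convictions / n_juries

rate = simulate_verdicts(
    {"egalitarian": 0.5, "hierarchical": 0.5},   # hypothetical venire mix
    {"egalitarian": 0.8, "hierarchical": 0.2},   # hypothetical vote probabilities
)
```

Varying the venire composition shows how verdict rates shift with the cultural makeup of the jury pool -- which is exactly the kind of question the real simulations are built to answer.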

I'd say I know 100x more about what the data mean in Culture, Cognition & Consent and in Whose Eyes Are You Going to Believe? Scott v. Harris and the Perils of Cognitive Illiberalism, 122 Harv. L. Rev. 837 (2009), another study that MW features in her paper, w/ the benefit of MW's simulations.

 

Tuesday
Mar202012

What the Gang of 32 at Science got wrong--and what they got right...

So Chris Mooney devastatingly tags the authors of the "World Government" Manifesto in Science for ignoring science -- the vast body of empirical work on effective science communication.  CM criticizes the Gang of 32 (by my count) for failing to think about how the manner in which they framed their argument radiated the very egalitarian-communitarian cultural meanings that provoke suspicion and distrust of climate science on the part of a large segment of the population in the US, the UK, and other democratic nations.

I couldn't have said it better -- indeed, couldn't have said it nearly as well as CM, b/c I merely study science communication, an activity that is in fact quite different from communicating science (including the science of science communication)-- something that CM is a master of. 

But I think there is something that the Gang of 32 got right, too, and I want CM & other master science communicators to make this part of their message about the Manifesto's shortcomings.... So let me try to get the point out in my own way of putting things, at which point they can do what they do (assuming they agree with me).

As I did in my initial post, I want to juxtapose the Gang of 32's World Government Manifesto with last week's Parliamentary testimony by UK scientists in support of geoengineering research. Their "frame" included one element in common with that used by the Gang of 32  -- viz., the assertion that we really need to do something radical, because incremental regulation by treaties etc. just isn't going to work. 

Granted, the UK scientists were sticking to what they know: the need for & feasibility of a technological intervention to counteract climate change. Good for them.

But the geopolitical issues for their geoengineering proposal are also staggering. The UK -- or the US & UK -- can't possibly expect the world to stand by passively as they unilaterally implement technologies for self-consciously regulating the climate of the earth! Ain't gonna happen.

Thus, at the same time that natural scientists are applying their unique expertise to identify dramatic but technologically and economically feasible strategies for ameliorating the risks we face, other experts are going to have to apply their special knowledge and methods to steer us toward some pretty significant and dramatic breakthroughs in global governance. So we better get smart about that too -- about what's possible, about what sorts of things we should communicate, & how, on the need for appropriate kinds of coordination. Otherwise, the science that can help us deal w/ the problems we genuinely face will be wasted.... 

So sure, criticize the Gang of 32 for being naive, for lacking humility, for ironically not being very scientific in holding forth in this way (I'm sure a lot of political scientists are cringing too). But they are actually right in substance.

What their misadventure really illustrates is that enabling democratic societies to protect themselves from risk -- environmental ones, but lots of others too, e.g., those associated with terrorism and with infectious diseases -- demands the effective integration of natural science with the sciences of public administration and science communication. 

That's the message that science communicators like Chris Mooney are uniquely situated to help everyone get! So get to it, CM!

And of course, I mean just keep it up, since CM & many other of today's excellent science communicators clearly do get this!

Sunday
Mar182012

Two proposals from scientists on how to save the world: which is more realistic?

I don't want to say a lot about these -- just enough to stimulate reflection about the significance, the meanings of proposals so different.  Thus:

1. Which one of these proposals is more likely to "work"?

2. Which one is more "realistic"?

3. Who is likely to answer "proposal 1," who "proposal 2," to above questions -- & why?

4. If proposals like these are made a conspicuous part of public discussion, what effect is each likely to have on public perceptions of the risk of climate change and the importance of taking steps to address the risks that it poses?

Proposal 1: World Government

from Biermann, F., et al. Navigating the Anthropocene: Improving Earth System Governance. Science 335, 1306-1307 (2012):

Human societies must now change course and steer away from critical tipping points in the Earth system that might lead to rapid and irreversible change (3). This requires fundamental reorientation and restructuring of national and international institutions toward more effective Earth system governance and planetary stewardship.

... As a general conclusion, our work indicated that incremental change (6)—the main approach since the 1972 Stockholm Conference on the Human Environment—is no longer sufficient to bring about societal change at the level and with the speed needed to mitigate and adapt to Earth system transformation. Structural change in global governance is needed, both inside and outside the UN system and involving both public and private actors.

... Such a reform of the intergovernmental system—which is at the center of the 2012 Rio Conference—will not be the only level of societal change nor the only type of action that is needed toward sustainability. Changes in the behavior of citizens, new engagement of civil society organizations, and reorientation of the private sector toward a green economy, are all crucial to achieve progress. Yet, in order for local and national action to be effective, the global institutional framework must be supportive and well designed. We propose a first set of much-needed reforms for effective Earth system governance and planetary stewardship. The 2012 Rio Conference offers an opportunity and a crucial test of whether political will exists to bring about these urgently needed changes.

 

Proposal 2: Techno-fix 

from Richard Black, "Climate 'tech fixes' urged for Arctic methane," BBC News Science & Environment, March 17, 2012:

An eminent UK engineer is suggesting building cloud-whitening towers in the Faroe Islands as a "technical fix" for warming across the Arctic.

Scientists told UK MPs this week that the possibility of a major methane release triggered by melting Arctic ice constitutes a "planetary emergency".

The Arctic could be sea-ice free each September within a few years.

Wave energy pioneer Stephen Salter has shown that pumping seawater sprays into the atmosphere could cool the planet.

The Edinburgh University academic has previously suggested whitening clouds using specially-built ships....

For each of the last four years, the September minimum has seen about two-thirds of the average cover for the years 1979-2000, which is used as a baseline. The extent covered at other times of the year has also been shrinking.

What more concerns some scientists is the falling volume of ice.

Peter Wadhams, professor of ocean physics at Cambridge University, presented an analysis drawing on data and modelling from the PIOMAS ice volume project at the University of Washington in Seattle.

It suggests, he said, that Septembers could be ice-free within just a few years....

The field of implementing technical climate fixes, or geo-engineering, is full of controversy, and even those involved in researching the issue see it as a last-ditch option, a lot less desirable than constraining greenhouse gas emissions.

"Everybody working in geo-engineering hopes it won't be needed - but we fear it will be," said Prof Salter.

Depending on the size and location, Prof Salter said that in the order of 100 towers would be needed to counteract Arctic warming.

However, no funding is currently on the table for cloud-whitening. A proposal to build a prototype ship for about £20m found no takers, and currently development work is limited to the lab.

Saturday
Mar172012

Another video lecture: University of Minnesota, on psychology of "misinformation"

This is a video of the presentation that I discuss here (and supply slides for here). Topic is "psychology of misinformation," which I recently happened to do a blog post on (w/ focus on climate change) here

Friday
Mar162012

Cool book: Harré, Psychology for a Better World

Came across this cool book on using psychology to promote environment-friendly behavior. 

Some of the things that make it cool:

1. It presents behaviorally realistic synthesis of social norms, emotions, & reciprocity, on the one hand, and mechanisms of risk perception/cognition, on the other. 

2. It strikes a nice balance between exposition/analysis and programmatic advice.

3. It is well written & draws on lots of interesting sources.

4. The author is distributing a .pdf version for free -- a gesture that invites reciprocation in the form of producing and sharing knowledge in turn (a big theme of the book is the potential of pro-social behavior to reproduce itself by furnishing an inspiring model). 

Thursday
Mar152012

Scientists of science communication Profile #3: Ellen Peters

This is the 2d installment in this series (actually, I'm negotiating w/ several companies that saw the last post & want to produce "Scientists of science communication trading cards"!)

 3. Ellen Peters.

Peters, a social psychologist at the Ohio State University, is a leading scholar of risk perception. A(nother) student of Paul Slovic, Peters's specialty (I'd say) is detecting how diverse cognitive mechanisms relate to one another.  E.g., she has done important studies establishing that "affect"--itself (Slovic and others show) a central element of myriad risk-perception heuristics--is a mediator of cultural worldviews, which determine the valence (positive or negative) of affective responses, thereby generating individual differences in risk perception.

Recently, Peters has been engaged in pathbreaking work on numeracy, which refers to the capacity (disposition, really) to make sense of quantitative information and engage in quantitative reasoning. The important -- indeed, startling -- insight of her work there is that numeracy and affect are complementary mental processes. That is, affect, rather than being a heuristic substitute for numeracy, is in fact a perceptive faculty calibrated by, and integral to the employment of, quantitative reasoning. High-numeracy individuals, her experiments show, do not rely on affect less than low-numeracy ones but rather experience it in a more reliably discerning fashion when evaluating the expected value of opportunities for gain and loss. Numeracy, it would appear, effectively "trains" affect, which thereafter operates as an efficient scout, telling a person when he or she should engage in more effortful quantitative processing; people low in numeracy are distinguished not by greater reliance on affect, but by inchoate, confused affect.

This is a very different picture, I'd say, from the (now) dominant "system 1/system 2" conception of dual process reasoning. That framework envisions a discrete and hierarchical relationship between unconscious, affective forms of reasoning (System 1) and conscious, algorithmic ones (System 2). Peters's work, in contrast, suggests that affect and numeracy are integrated and reciprocal--that each operates on the other and that together they make complementary contributions to sound decisionmaking.

Interestingly, though, people with high numeracy can also experience distinctive kinds of bias. E.g., they will rate transactions that offer a high probability of substantial gain versus a low probability of a small loss as more attractive than transactions that offer a high probability of substantial gain versus a small probability of an outcome involving no change (positive or negative) in welfare. The reason is that the contrast between a high probability of gain and a small probability of loss is more affectively arousing than the contrast between a high probability of gain and nothing. But you actually have to be pretty good with numbers to receive this false affective signal! In other words, there are some kinds of attractive specious inferences that presuppose fairly high quantitative reasoning capacity.
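To make the arithmetic concrete -- using hypothetical figures, not the actual stimuli from Peters's studies -- the gamble carrying the small loss has the lower expected value of the two, even though (per the finding described above) it is the one that registers as more attractive:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * x for p, x in outcomes)

# hypothetical gambles in the spirit of the ones described above
gain_vs_small_loss = [(0.9, 9.00), (0.1, -0.25)]  # high prob. of gain vs. small loss
gain_vs_no_change  = [(0.9, 9.00), (0.1,  0.00)]  # high prob. of gain vs. nothing

ev_loss = expected_value(gain_vs_small_loss)  # 8.075
ev_none = expected_value(gain_vs_no_change)   # 8.10
```

By any expected-value standard the no-loss gamble dominates; preferring the other one is the "false affective signal" that only the numerate receive.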

Some key readings:

1. Peters, E. The Functions of Affect in the Construction of Preferences. in The construction of preference (eds. Lichtenstein, S. & Slovic, P.) 454-463 (Cambridge University Press, Cambridge ; New York, 2006).

2. Peters, E., Dieckmann, N., Västfjäll, D., Mertz, C.K. & Slovic, P. Bringing meaning to numbers: The impact of evaluative categories on decisions. Journal of Experimental Psychology: Applied 15, 213-227 (2009).
 
3. Peters, E. & Levin, I.P. Dissecting the risky-choice framing effect: Numeracy as an individual-difference factor in weighting risky and riskless options. Judgment and Decision Making 3, 435-448 (2008).

 

4. Peters, E., Slovic, P. & Gregory, R. The role of affect in the WTA/WTP disparity. Journal of Behavioral Decision Making 16, 309-330 (2003).

5. Peters, E., et al. Intuitive numbers guide decisions. Judgment and Decision Making 3, 619-635 (2008).

6. Peters, E., et al. Numeracy and Decision Making. Psychol Sci 17, 407-413 (2006).

7. Peters, E.M., Burraston, B. & Mertz, C.K. An Emotion-Based Model of Risk Perception and Stigma Susceptibility: Cognitive Appraisals of Emotion, Affective Reactivity, Worldviews, and Risk Perceptions in the Generation of Technological Stigma. Risk Analysis 24, 1349-1367 (2004).

8. Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004).

9. Slovic, P. & Peters, E. The importance of worldviews in risk perception. Risk Decision and Policy 3, 165-170 (1998).

10. Peters, E. & Slovic, P. Affective asynchrony and the measurement of the affective attitude component. Cognition Emotion 21, 300-329 (2007).

 

Friday
Mar092012

Cognitive illiberalism: anatomy of a bias

That's the title of a talk I gave today at Arizona State Law School & yesterday at the University of Arizona Law School.

The talk, which I gave to faculty-workshop audiences who had read They Saw a Protest, first offers an analytically precise account of how cultural cognition can defeat Bayesian updating. It then identifies how this form of cognitive decisionmaking bias generates "cognitive illiberalism," a legal and political decisionmaking bias that poses the same threat to constitutional freedoms as consciously illiberal forms of state action.

Probably will write this up as short paper. For now--slides here.  

Thursday
Mar082012

Misinformation and climate change conflict

reposted from Talkingclimate.org

I’m going to resist the academic’s instinct to start with a long, abstract discussion of "cultural cognition" and the theory behind it. Instead, I’m going to launch straight into a practical argument based on this line of research. My hope is that the argument will give you a glimpse of the essentials—and an appetite for delving further.

The argument has to do with the contribution that misinformation makes to the dispute over climate change. I want to suggest that the normal account of this is wrong.

The normal account envisions, in effect, that the dispute is fueled by an external force—economic interest groups, say—inundating a credulous public with inaccurate claims about risk.

I would turn this account more or less on its head: the climate change dispute, I want to argue, is fueled by a motivated public whose (unconscious) desire to form certain perceptions of risk makes it possible (and profitable) to misinform them.

As evidence, consider an experiment that my colleagues at the Cultural Cognition Project and I did.

In it, we asked the participants (a representative sample of 1500 U.S. adults) to examine the credentials of three scientists and tell us whether they were “knowledgeable and credible experts” about one or another risk—including climate change, disposal of nuclear wastes, and laws allowing citizens to carry concealed weapons in public. Each of the scientists (they were fictional; we told subjects that after the study) had a Ph.D. in a seemingly relevant field, was on the faculty of an elite university, and was identified as a member of the National Academy of Sciences.

Whether study subjects deemed the featured scientists to be “experts,” it turned out, was strongly predicted by two things: the position we attributed to the scientists (in short book excerpts); and the cultural group membership of the subject making the determination.

Where the featured scientist was depicted as taking what we called the “high risk” position on climate change (it’s happening, is caused by humans, will have bad consequences, etc.) he was readily credited as an “expert” by subjects with egalitarian and communitarian cultural values, a group that generally sees environmental risks as high, but not by subjects with hierarchical and individualistic values, a group that generally sees environmental risks as low. However, the positions of these groups shifted—hierarchical individualists more readily saw the same scientist as an “expert,” while egalitarian communitarians did not—when he was depicted as taking a “low risk” position (climate change is uncertain, models are unreliable, more research necessary).

The same thing, moreover, happened with respect to the scientists who had written books about nuclear power and about gun control: subjects were much more likely to deem the scientist an “expert” when he advanced the risk position that predominated in the subjects’ respective cultural groups than when he took the contrary position.

This result reflects a phenomenon known as “motivated cognition.” People are said to be displaying this bias when they unconsciously fit their understandings of information (whether scientific data, arguments, or even sense impressions) to some goal or end extrinsic to forming an accurate answer.

The interest or goal here was the stake study subjects had in maintaining a sense of connection and solidarity with their cultural groups. Hence, the label cultural cognition, which refers to the tendency of individuals to form perceptions of risk that promote the status of their groups and their own standing within them.

Cultural cognition generates my unconventional “motivated public” model of misinformation. The subjects in our study weren’t pushed around by any external misinformation provider. Furnished the same information, they sorted themselves into the patterns that characterize public divisions we see on climate change.

This kind of self-generated biased sampling—the tendency to count a scientist as an “expert” when he takes the position that fits one’s group values but not otherwise—would over time be capable all by itself of generating a state of radical cultural polarization over what “expert scientific consensus” is on issues like climate change, nuclear power, and gun control.
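A back-of-the-envelope calculation (all numbers hypothetical) illustrates how selective crediting alone can split perceived consensus. Suppose 80% of experts in fact take the "high risk" position, and each cultural group credits a congenial expert with probability 0.9 but an uncongenial one with probability only 0.2:

```python
def perceived_consensus(true_high_share, p_credit_fit, p_credit_misfit,
                        group_favors_high):
    """Share of *credited* experts taking the high-risk position, for a
    group that credits experts selectively (hypothetical probabilities)."""
    if group_favors_high:
        credited_high = true_high_share * p_credit_fit
        credited_low = (1 - true_high_share) * p_credit_misfit
    else:
        credited_high = true_high_share * p_credit_misfit
        credited_low = (1 - true_high_share) * p_credit_fit
    # perceived consensus = high-risk share among experts the group counts
    return credited_high / (credited_high + credited_low)

egal = perceived_consensus(0.8, 0.9, 0.2, group_favors_high=True)   # ~0.95
hier = perceived_consensus(0.8, 0.9, 0.2, group_favors_high=False)  # ~0.47
```

Same pool of experts, same true 80% consensus -- yet one group perceives near-unanimity while the other perceives an open question.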

In this environment, does the deliberate furnishing of misinformation add anything? Certainly.

But the desire of the public to form culturally congenial beliefs supplies one of the main incentives for furnishing them with misleading information. To protect their cultural identities, individuals more readily seek out information that supports than that challenges the beliefs that predominate in their group. The motivated public’s desire for misinformation thus makes it profitable to become a professional misinformer—whether in the media or in the world of public advocacy.

Other actors will have their own economic interest in furnishing misinformation. How effective their efforts will be, however, will still depend largely on how culturally motivated people are to accept their message. If this weren’t so, the impact of the prodigious efforts of commercial entities to convince people that climate change is a hoax, that nuclear power is safe, and that concealed-carry laws reduce crime would wear away the cultural divisions on these issues.

The reason that individuals with different values are motivated to form opposing positions on these issues is the symbolic association of them with competing groups.  But that association can be created just as readily by accurate information as by misinformation if authority figures identified with only one group end up playing a disproportionate role in communicating it.

One can’t expect to win an “information war of attrition” in an environment like this. Accurate information will simply bounce off the side that is motivated to resist it.

So am I saying, then, that things are hopeless? No, far from it.

But the only way to devise remedies for these pathologies is to start with an accurate understanding of why they occur. 

The study of cultural cognition shows that the conventional view of misinformation (external source, credulous public) is inaccurate because it fails to appreciate how much more likely misinformation is to occur and to matter when scientific knowledge becomes entangled in antagonistic cultural meanings.

How to free science from such entanglements is something that the study of cultural cognition can help us to figure out too. 

I hope  you are now interested in knowing how -- and in just knowing more!

Sources:

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) 725-760 (Springer London, Limited, 2012).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M. & Braman, D. Cultural Cognition of Public Policy. Yale J. L. & Pub. Pol'y 24, 147-170 (2006).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

 

 

Saturday
Mar032012

Does economic self-interest explain climate change skepticism?

Nope.

First, some common sense:

Let's assume self-interest explains the formation of beliefs about climate change by ordinary members of the public (I'm very happy to do that). In that case, we should expect the economic impact of climate change & proposed climate change policies on the public's perception of climate change risks to be 0.00, and the impact of cultural identity to be [some arbitrarily large number].

What the ordinary member of the public believes about climate change won't have any impact on the threat it poses to the environment or on the policies society adopts to repel that threat. The same is true about how he or she votes in democratic elections or behaves as a consumer. As an individual, he or she just isn't consequential enough to matter. 

Accordingly, there is no reason to expect much if any correlation between, say, economic class, etc., and climate change risk perception.

In contrast, what an ordinary individual believes and says about climate change can have a huge impact on her interactions with her peers. If a professor on the faculty of a liberal university in Cambridge, Massachusetts, starts saying "climate change is ridiculous," he or she can count on being ostracized and vilified by others in the academic community. If the barber in some town in South Carolina's 4th congressional district insists to his friends & neighbors that they really should believe the NAS on climate change, he will probably find himself twiddling his thumbs rather than cutting hair.

It's in people's self-interest to form beliefs that connect rather than estrange them from those whose good opinion they depend on (economically, emotionally, and otherwise).  As a result, we should expect individuals' cultural outlooks to have a very substantial impact on their climate change risk perceptions.

(For elaboration of this argument, see CCP working paper No. 89, Tragedy of the Risk Perceptions Commons.)

Second, some data:

I have constructed some regression models to examine the impact of household income (hh_income) and cultural worldviews (hfac for hierarchy and ifac for individualism) on climate change risk perceptions (z_GWRISK; for explanation of that measure, see here).  The data come from a nationally representative survey of 1500 US adults conducted by the Cultural Cognition Project with a grant from the National Science Foundation. To see the regression outputs, click on the thumbnail to the right.

The analyses show, first, that differences in income have a very small negative impact on climate change risk perceptions (B = -0.07, p < 0.01) when income is considered on its own (model 1).

Second, the analyses show that cultural worldviews have a very large impact -- a typical egalitarian communitarian and a typical hierarchical individualist are separated by about 1.6 standard deviations on the risk perception measure -- controlling for income (model 2). When cultural worldviews are controlled for,  income turns out to have an effect that is practically nil (B = -0.02, p = 0.56).

But wait: the third thing the analyses show is that income does have a modest effect -- one that is conditional on survey respondents' cultural worldviews. As they become wealthier, egalitarian communitarians become slightly more concerned about climate change, while hierarchical individualists become less (Model 3).
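The qualitative pattern across the three models can be reproduced on simulated data. To be clear, this is a sketch only: the variable names echo the post, but the data-generating process and coefficients below are invented for illustration -- this is not the CCP survey data or the actual Stata models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1500

hfac = rng.normal(size=n)   # hierarchy worldview score
ifac = rng.normal(size=n)   # individualism worldview score
hh_income = 0.2 * hfac + rng.normal(size=n)   # income modestly tied to worldview
# risk perception driven mainly by worldviews, plus a small
# worldview-conditional (interaction) income effect
z_gwrisk = (-0.8 * hfac - 0.8 * ifac - 0.1 * hh_income * hfac
            + rng.normal(scale=0.5, size=n))

def ols(y, *predictors):
    """Least-squares fit; returns [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

m1 = ols(z_gwrisk, hh_income)                                # model 1: income alone
m2 = ols(z_gwrisk, hh_income, hfac, ifac)                    # model 2: + worldviews
m3 = ols(z_gwrisk, hh_income, hfac, ifac, hh_income * hfac)  # model 3: + interaction
```

In this setup the income coefficient in model 2 shrinks toward zero once worldviews are controlled, while the interaction term in model 3 stays negative -- the same qualitative pattern the post describes.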

Bottom line: economic self-interest doesn't matter; cultural identity self-interest does.

Friday
Mar022012

Coolest debiasing study (I've) ever (read)

So this is another installment (only second; first here) in my series on cool studies that we read in my fall Law & Cognition seminar at HLS.

This one, Sommers, S.R. On Racial Diversity and Group Decision Making: Identifying Multiple Effects of Racial Composition on Jury Deliberations. Journal of Personality and Social Psychology 90, 597-612 (2006), looked at the impact of the racial composition of a (mock) jury panel on white jurors. Sommers found that white jurors on mixed African-American/white panels were more likely than ones on all-white panels to form pro-defendant fact perceptions and to support acquittal in a case involving an African-American defendant charged with sexual assault of a white victim.

That's plenty interesting -- but the really amazing part is that these effects were not a product of any exchange of views between the white and African-American jurors in deliberations. Rather they were a product of mental operations wholly internal to the white subjects.

There were two sorts of evidence for this conclusion. First, Sommers found that the pre-deliberation verdict preferences of the white subjects on the mixed juries were already more pro-defense than the preferences of those on the all-white juries. Second, during deliberations the white subjects on the mixed juries were more likely to mention pro-defendant evidence spontaneously (that is, on their own, without prompting by the African-American ones) and less likely to inject mistaken depictions of the evidence into discussion.

In sum, just knowing that they would be deliberating with African-American jurors influenced -- indeed, demonstrably improved the quality of -- the cognition of the white jurors.

How many cool things are going on here? Lots, but here are some that really register with me:

1. OCTUSW ("Of course--that's unsurprising--so what") response is DOA & No ITMBSWILESP, either!

OCTUSW is a predictable, lame response to a lot of cool social science studies. What makes it lame is that it is common to investigate phenomena for which there are plausible competing hypotheses; indeed, the clash of competing plausible hypotheses is often what motivates people to investigate. This is one of the key points in Duncan Watts's great book, Everything Is Obvious Once You Know the Answer.

But here the result was a real surprise (to me and my students, at least) -- so we can just skip the 10 mins it usually takes to shut the (inevitably pompous & self-important) OCTUSW guy up.

At the same time, the result isn't insane-there-must-be-something-wrong-it's-like-ESP (ITMBSWILESP), either. ITMBSWILESP results can take up 30-40-50 mins & leave everyone completely uncertain whether (if they decide the study is valid, reliable) they've been duped by the researcher or (if they dismiss it out of hand) they've been taken in by their own vulnerability to confirmation bias.

2. Super compelling evidence that unconscious bias is defeating moral commitments of those experiencing it.

The results in this study suggest that the white subjects on the all-white juries were displaying a lower quality of cognitive engagement with the evidence than the whites on the mixed-race juries. Why?

The most straightforward explanation (and the animating conjecture behind the study) was that the racial composition of the jury interacted with unconscious racial bias or "implicit social cognition." Perhaps they were conforming their view of the evidence to priors founded on the correlation between race and criminality or were failing to experience a kind of investment in the interest of the defendant that would have focused their attention more effectively. 

Knowing, in contrast, that they were on a jury with African Americans, and would be discussing the case with them after considering the evidence, jolted the whites on the mixed juries into paying greater attention, likely because of anxiety that mistakes would convey to the African-American subjects that they didn't care very much about the possibility an African-American was being falsely accused of an interracial sexual assault. Because they paid more attention, they in fact formed a more accurate view of the facts.

But this "debiasing" effect would not have occurred unless the unconscious racial bias it dispelled was contrary to the white subjects' conscious, higher-order commitment to deciding the case impartially.

Obviously, if the white subjects in the study were committed, conscious racists, then those who served on the mixed-race juries would have gotten just as much satisfaction from forming anti-defendant verdict preferences and inaccurate, anti-defendant fact perceptions as ones on the all-white juries.

Likewise, it is not very plausible to think the whites on the mixed-race juries would have been jolted into paying more attention unless they had a genuine commitment to racial impartiality. Otherwise, why would the prospect that they'd be perceived otherwise have been something that triggered an attention-focusing level of anxiety?

The conclusion I draw, then, is that the effect of unconscious bias on the jurors in the all-white juries is something that they themselves would likely have been disappointed by.  They and others in their position would thus concur in, and not resent, the use of procedures that reduce the likelihood that this cognitive dynamic will affect them as they perform that decisionmaking task.

That’s a conclusion, too, that really heartens me.

My own research on “debiasing” cultural cognition rests on the premise that identity-protective cognition (a cousin of implicit social cognition) disappoints normative commitments that ordinary citizens have. If that’s not true--if, in fact, individuals would rather be guided reliably to conclusions that fit the position of “their team” than be right when they are evaluating disputed evidence on issues like climate change and the effectiveness of the HPV vaccine--then what I’m up to is either pointless or (worse) a self-deluded contribution to public manipulation.

So when I see a study like this, I feel a sense of relief as well as hope!

 3. The debiasing effect can't be attributed to any sort of "demand effect."

This is a related point. A "demand effect" describes a result that is attributable to the motives of the subjects to please the researcher rather than to the cognitive mechanism that the researcher is trying to test.

One common strategy that sometimes is held forth as counteracting motivated cognition -- explicitly telling subjects to "consider the opposite" -- is very vulnerable to this interpretation. (Indeed, studies that look at the effect of explicit "don't be biased" instructions report highly variable results.)

But here there's really no plausible worry about a "demand effect." The whites on the mixed-race juries couldn't have been "trying harder" to make the researchers happy: they had no idea that their perceptions were being compared to those of subjects on all-white juries, much less that those jurors were failing to engage with the evidence as carefully as anyone might have wanted them to.

4. The effect in this study furnishes a highly suggestive model that can spawn hypotheses and study designs in related areas.

Precisely because it seems unlikely to me that simply admonishing individuals to be "impartial" or "objective" can do much real good, the project to identify devices that trigger effective unconscious counterweights to identity-protective cognition strikes me as of tremendous importance.

We have done a variety of studies of this sort. Mainly they have focused on devices -- e.g., message framings and source credibility -- that neutralize the kinds of culturally threatening meanings that provoke defensive resistance to sound information.

The debiasing effect here involves a different dynamic. Again, as I understand it, the simple awareness that there were African-Americans on their jury activated white jurors' own commitment to equality, thereby leading them to recruit cognitive resources that in fact promoted that commitment.

Generalizing, then, this is to me an example of how effective environmental cues (as it were) can activate unconscious processes that tie cognition more reliably to ends that individuals, at least in the decisionmaking context at hand, value more than partisan group allegiances. 

Seeing the study this way, I now often find myself reflecting on what sorts of cues might have analogous effect in cultural cognition settings.

That's something cool studies predictably do. They not only improve understanding of the phenomena they themselves investigated. They also supply curious people with vivid, generative models that help them to imagine how they might learn, and teach others something, too. 

Thursday
Mar012012

Is the "culture war" over for guns?

One of the students in my HLS criminal law class drew my (and his classmates') attention to this poll showing that a pretty solid majority (73%) of Americans now oppose banning handguns. What caused this? Did the Supreme Court's 2nd Amendment opinions (Heller and McDonald) change norms? Or induce massive cognitive dissonance avoidance? Or maybe the NRA is behind the new consensus? Or maybe the public finally learned of the scientific consensus that there's no reliable evidence that concealed-carry laws have any impact on crime one way or the other? Is there a model here to follow for ending the culture war on climate change? Or maybe the climate change battle just made people forget this one?

Saturday
Feb252012

More evidence that good explanations of climate change conflict are not depressing

I explained recently (here & here) why it is a mistake to conclude that cultural cognition implies that trying to resolve the climate change conflict is "futile" (not to mention that this is a fallacious reason for rejecting the evidence that cultural cognition explains the conflict).

Today I came across a great paper that extends the theme "good social science explanations of climate change conflict are not depressing":

Law, Environment, and the 'Non-Dismal' Social Sciences

U of Colorado Law Legal Studies Research Paper No. 12-01 

Boyd, William, Univ. Colorado Law School
Kysar, Douglas A., Yale Law School
Rachlinski, Jeffrey J., Cornell Law School 


Abstract: Over the past 30 years, the influence of economics over environmental law and policy has expanded considerably. Whereas politicians and commentators once seriously questioned whether tradable emissions permits confer a morally illicit “right to pollute,” today even environmental advocacy organizations speak freely and predominantly in terms of market instruments and economic efficiency when they address climate change and other pressing environmental concerns. This review seeks to counterbalance the expansion of economic reasoning and methodology within environmental law and policy by highlighting insights to be gleaned from various “non-dismal” social sciences. In particular, three areas of inquiry are highlighted as illustrative of interdisciplinary work that might help to complement law and economics and, in some cases, compensate for it: the study of how human individuals perceive, judge, and decide; the observation and interpretation of how knowledge schemes are created, used, and regulated; and the analysis of how states and other actors coordinate through international and global regulatory regimes. The hope is to provide some examples of how environmental law and policy can be improved by deeper and more diverse engagement with social science and to highlight avenues for future research.

Wednesday
Feb222012

Climate change & the media: what's the story? (Answer: expressive rationality)

Max Boykoff has written a cool book (material from which played a major role in a panel session at the 2012 Ocean Sciences conference) examining media coverage of climate change in the U.S. 

Who Speaks for the Climate? documents, more rigorously and informatively than anything else I've read, the persistence of "balance" in media coverage of the climate change debate no matter how lopsided the scientific evidence becomes.

Boykoff's own take -- and that of pretty much everyone I've heard comment on this phenomenon -- is negative: there is something wrong w/ norms of science journalism or the media generally if scientifically weak arguments are given just as much space & otherwise treated just as seriously as strong ones.

I have a slightly different view: "balanced" coverage is evidence of the expressive rationality of public opinion on climate change.

News media don't have complete freedom to cover whatever they want, however they want to. Newspapers and other news-reporting entities are commercial enterprises. To survive, they must cover the stories that people want to read about.

What people want to read are stories containing information relevant to their personal lives. Accordingly, one can expect newspapers to cover the aspect of the "climate change story" that is most consequential for the well-being of their individual readers.

The aspect of the climate change story that's most consequential for ordinary members of the public is that there's a bitter, persistent, culturally polarized debate over it. Knowing that has a much bigger impact on ordinary individuals than knowing what the science is.

Nothing an individual thinks about climate change will affect the level of risk that climate change poses for him or her. That individual's behavior as consumer, voter, public discussant, etc., is just too inconsequential to have any impact -- either on how carbon emissions affect the environment or on what governments do in response.

However, the position an individual takes on climate change can have a huge impact on that person's social standing within his or her community. A university professor in New Haven CT or Cambridge Mass. will be derisively laughed at and then shunned if he or she starts marching around campus with a sign saying "climate change is a hoax!" Same goes for someone in a mirror-image hierarchical-individualistic community (say, a tobacco farmer living somewhere in South Carolina's 4th congressional district) who insists to his friends & neighbors, "no, really, I've looked closely at the science -- the ice caps are melting because of what human beings are doing to the environment."

In other words, it's costless for ordinary individuals to take a position that is at odds with climate science, but costly to take one that has a culturally hostile meaning within the groups whose support (material, emotional & otherwise) they depend on.

Predictably, then, individuals tend to pay a lot of attention to whatever cues are out there that can help them identify what cultural meanings (if any) a disputed risk or related fact issue conveys, and to expend a lot of cognitive effort (much of it nonconscious) to form beliefs that avoid estranging them from their communities.

Predictably, too, the media, being responsive to market forces, will devote a lot more time and effort to reporting information that is relevant to identifying the cultural meaning of climate change than to information relevant to determining the weight or the details of scientific evidence on this issue.

So my take on Boykoff's evidence is different from his.

But it is still negative.

It might be individually rational for people to fit their perceptions of climate change and other societal risks to the positions that predominate in their communities, but it is nevertheless collectively irrational for them all to form their beliefs this way simultaneously: the more impelled culturally diverse individuals are to form group-congruent beliefs rather than truth-congruent ones, the less likely democratic institutions are to adopt policies that succeed in securing their common welfare.

The answer, however, isn't to try to change the norms of the media. They will inevitably cover the story that matters to us.

What we need to do, then, is change the story on climate change. We need to create new meanings for climate change that liberate science from the antagonistic ones that now make taking the "wrong" position (any position) tantamount to cultural treason.