
Recent blog entries
Monday
May 21, 2012

NAS says: Listen to the science of science communication

National Academy of Science President Ralph Cicerone (foreground) & Nobelist Daniel Kahneman during the Q&A that followed Kahneman's (outstanding) lecture.

This picture really captures it, I think.

The NAS's Science of Science Communication Sackler Colloquium is modeling what the practice of science & science-informed policymaking needs to do: start listening to the science of science communication, the foundational insights of which reflect the work of Kahneman (and Amos Tversky, Paul Slovic & Baruch Fischhoff, among others) on risk perception.

I feel very optimistic today!

 

Sunday
May 20, 2012

Protecting the science communication environment: sneak preview

 

Am embarking soon (was supposed to already; small travel misadventure) for NAS Science of Science Communication colloquium. Attached are slides that I'm sending my co-panelists & commentators (I think they'd like a text but I don't speak from one, or use notes, when doing a talk).

Probably will have to shrink it -- so maybe this is "director's cut" as well as "sneak peek."

 

But if you have time on your hands, tune in (my talk is Tues. @3:15; agenda for event here).

Thursday
May 17, 2012

The science of protecting the science communication environment

Am giving a talk on Tuesday at the NAS's Sackler Colloquium on the Science of Science Communication. Was asked to submit an "executive summary" for the benefit of commenters. This is it:

The Science of Science Communication and Protecting the Science Communication Environment

Promoting public comprehension of science is only one aim of the science of science communication and is likely not the most important one for the well-being of a democratic society. Ordinary citizens form quadrillions of correct beliefs on matters that turn on complicated scientific principles they cannot even identify, much less understand. The reason they fail to converge on beliefs consistent with scientific evidence on certain other consequential matters—from climate change to genetically modified foods to compulsory adolescent HPV vaccination—is not the failure of scientists or science communicators to speak clearly or the inability of ordinary citizens to understand what they are saying. Rather, the source of such conflict is the proliferation of antagonistic cultural meanings. When they become attached to particular facts that admit of scientific investigation, these meanings are a kind of pollution of the science communication environment that disables the faculties ordinary citizens use to reliably absorb collective knowledge from their everyday interactions. The quality of the science communication environment is thus just as critical for enlightened self-government as the quality of the natural environment is for the physical health and well-being of a society’s members. Understanding how this science communication environment works, fashioning procedures to prevent it from becoming contaminated with antagonistic meanings, and formulating effective interventions to detoxify it when protective strategies fail—those are the most critical functions science communication can perform in a democratic society.

In my remarks, I will elaborate on this conception of the science of science communication. I will likely illustrate my remarks with reference to findings on formation of HPV-vaccine risk perceptions, culturally biased assimilation of evidence of scientific consensus, the polarizing impact of science literacy and numeracy on climate change risk perceptions, and experimental forecasting of emerging-technology risk perceptions.  I’ll also describe the necessity of public provisioning to assure the quality of the science communication environment, which like the quality of the physical environment is a collective good that is unlikely to be secured by spontaneous private ordering.

If any of the other panelists would like to form a more vivid impression of my remarks, they might consider taking a look at:

1. Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010); and

2. Kahan, D.M., Wittlin, M., Peters, E., Slovic, P., Ouellette L.L., Braman, D., Mandel, G. The Tragedy of the Risk-Perception Commons: Culture Conflict, Rationality Conflict, and Climate Change. CCP Working Paper No. 89 (June 24, 2011).

Wednesday
May 16, 2012

Is Cultural Cognition Culture-Specific? 

Is cultural cognition culturally specific?  

I just read a great piece over on the PLoS Blog about the cultural specificity of many purportedly universal psychological biases / mechanisms.  As an example, the blog uses the famous Müller-Lyer Illusion.  You probably know of it.  In the image below, many people see the line on the right as longer than the one on the left.  

For almost a hundred years, social psychologists thought this a universal illusion.  It turns out, though, that this illusion is actually acute only in those who live in modern urban environments -- environments where straight lines, flat sides, and sharp corners are common.  When, in 1966, Marshall H. Segall conducted a study across cultural groups, he found tremendous variation (as illustrated in the graph below). 

For folks who are interested in the phenomenon of cultural cognition, this raises an interesting question: Is cultural cognition itself culture-bound?  The answer, I think, is either "probably yes" or "probably no" depending on what is meant by "culture-bound".

The "probably yes" answer obtains if one were to try to use the same value measures across highly distinct cultural groups.  There is no reason to believe that San foragers or the Fang are divided over the questions that comprise the cultural value measures we use to distinguish US subjects from one another.  It wouldn't make sense (at least without more evidence) for us to presume our measures are universal.  

But that isn't really what the PLoS Blog post is about.  It asks whether the underlying phenomenon itself is generalizable.  One could broaden the way that such illusions are characterized in order to account for visual training and local adaptations: do people perceive the depth cues that are relevant to their perceptual contexts?  The newly recast "local cues for depth perception" bias could still plausibly be universal.

The phenomenon of cultural cognition, I would argue, is closer to the latter than the former.  It is one in which people develop factual beliefs that support or are consistent with their preferred social orderings (typically with the life-ways and values of their in-groups given high status).  If viewed this way, the answer is "probably no" because the theory derives from observations by anthropologists across many different cultural groups.  (I can't say "definitively no" or even "almost certainly no" since we haven't done extensive work across these non-Western cultural groups ourselves.)  More recently, a more general form of this has been studied as "motivated cognition" by social psychologists.  For cultural cognition as a general concept to be culture-bound, the phenomenon of motivated cognition itself would have to be culture-bound.  And, because the idea of motivated cognition is something that we use to describe differences in belief-formation across cultures, it would be very hard to construe it as culture-bound as well.  

But then again, it may be that my sample is too limited -- indeed, motivated cognition would suggest that I would be particularly motivated not to notice contrary evidence! Perhaps it just seems obvious to me that everyone sees the world as shorter or longer as befits their preferred social order when, in fact, there are some groups who do not.

But one thing we can be fairly certain of: these groups would have to be very distinct from the main groups involved in various forms of culture wars in the United States.  As Dan has pointed out in numerous posts at this point, there is very strong evidence that whatever cultural groups might be immune to cultural cognition, they are not the cultural groups who are involved in popular political debates in this country.  Your cultural adversary may fall foul of cultural cognition, but the fact that you have a cultural adversary suggests that you are just as likely to do so yourself.

Tuesday
May 15, 2012

Wild wild horses couldn't drag me away: four "principles" for science communication and policymaking

Was invited to give a presentation on "effective science communication" for the National Academy of Sciences/National Research Council committee charged with preparing a report on wild horse & burro population management.

I happily accepted, for two reasons.

First, it really heartens and thrills me that the NAS gets the importance of integrating science and policymaking, on the one hand, with the science of science communication on the other. Indeed, as the NAS's upcoming Sackler Colloquium on the Science of Science Communication attests, NAS is leading the way here. 

Second, it only took me about 5 minutes of conversation with Kara Laney, the NAS Program Officer who is organizing the NRC committee's investigation of wild horse population management, to persuade me that the science communication dimension of this issue is fascinating. The day I spent at the committee's meeting yesterday corroborated that judgment.

Not knowing anything about the specifics of wild-horse population management (aside from what everyone picks up just from personal experience & anecdote, etc.), I confined myself to addressing research on the "science communication problem" -- the failure of ample and widely disseminated science to quiet public dispute over policy-relevant facts that admit of scientific investigation. Like debates over climate change, HPV vaccination, nuclear power, etc., the dispute over wild-horse management falls squarely into that category.

After summarizing some illustrative findings (e.g., on the biasing impact of cultural outlooks on perceptions of scientific consensus; click on image for slides), I offered "four principles":

First, science communication is a science.

Seems obvious -- especially after someone walks you through 3 or 4 experiments -- but in fact, the assumption that sound science communicates itself is the origin of messes like the one over climate change. As I said, NAS is now committed to remedying the destructive consequences of this attitude, but one can't overemphasize how foolish it is to invest so much in policy-relevant science and then adopt a wholly ad hoc, anti-scientific stance toward the dissemination of it.

Second, "science communication" is not one thing; it's 5 (± 2).

Until recent times, those who thought systematically about science communication were interested either in helping scientists learn to speak in terms intelligible to curious members of the public or in training science journalists to understand and accurately decipher scientists' unintelligible pronouncements.

These are important things. But the idea that inarticulate scientists or bad journalists caused the climate change controversy, say, or that making scientists or journalists better communicators will solve that or other problems involving science and democratic decisionmaking is actually a remnant of the unscientific conception of science communication -- a vestige, really, of the idea that "facts speak for themselves," just so long as they are idiomatic, grammatical, etc.

As I explained in my talk, the disputes over climate change, the HPV vaccine, nuclear power, and gun control are not a consequence of a lack of clarity in science or a lack of science comprehension on the part of ordinary citizens.

The source of those controversies is a form of pollution in the science communication environment: antagonistic social meanings that get attached to facts and that interfere with the normally reliable capacity of ordinary people to figure out what's known (usually by identifying who knows what about what).  

Detoxifying the science communication environment and protecting it from becoming contaminated in the first place is thus another kind of "science communication," one that has very little to do with helping scientists learn to avoid professional jargon when they give interviews to journalists, who themselves have been taught how to satisfy the interest that curious citizens have to participate in the thrill and wonder of our collective intelligence.

Those two kinds of science communication, moreover, are different from the sort that an expert like a doctor or a financial planner has to engage in to help individuals make good decisions about their own lives. The emerging scientific insights on graphic presentation of data and the like also won't help fix problems such as the one over climate change.

Still another form of science communication is the sort that is necessary to enable policymakers to make reliable and informed decisions under conditions of uncertainty. The NAS is taking the lead on this too -- and isn't laboring under the misimpression that what causes the climate change controversy is the "same thing" that has made judges accept fingerprints and other bogus forms of forensic proof.

Finally, there is stakeholder science communication -- the transmission of knowledge to ordinary citizens who are intimately affected by and who have (or are at least entitled to have) a say in collective decisionmaking. That's mainly what the decisionmaking process surrounding the wild-horse population is about.  There are scientific insights there, too -- ones having very little to do with graphic presentation of data or with good writing skills or with the sort of pollution problem that is responsible for climate change.

Third, "don't ask what science communication can do for you; ask what you can do for science communication."

Having just told the committee that their "science communication problem" is one distinct from four others, I anticipated what I was sure would be their next question: "so what do we do?" 

Not surprisingly, that's what practical people assigned to communicate always ask when they are engaging scholars who use scientific methods to study science communication. They want some "practical" advice--directions, instructions, guidelines.

My answer is that they actually shouldn't be asking me or any other science-communication researcher for "how to" advice. And that they should be really really really suspicious of any social scientist who purports to give it to them; odds are that person has no idea what he or she is talking about.

Those who study science communication scientifically know something important and consequential, I'm convinced, about general dynamics of risk perception and science communication. But we know that only because we have investigated these matters in controlled laboratory environments-- ones that abstract from real-world details that defy experimental control and confound interpretation of observations.

Studies, in other words, are models. They enable insight that one couldn't reliably extract from the cacophony of real-world influences. Those insights, moreover, have very important real-world implications once extracted. But they do not themselves generate real-world communication materials.

The social scientists who don't admit this usually end up offering banalities, like "Know your audience." 

That sort of advice is based on real, and really important, psychological research. But it's pretty close to empty precisely because it's (completely) devoid of any knowledge of the particulars of the communication context at hand (like what characteristics genuinely define the "audience" that is to be known, and what there actually is to "know" about it).

The practical communicators -- the ones asking to be told what to do -- are the people who have that knowledge. So they are the ones who have to use judgment to translate the general insights into real-world communication materials.  

Experimentalists are not furnishing communicators with "shovel ready" construction plans. Rather they are supplying the communicators with reliable maps that tell them where they should dig and build through their own practical experimentation.

Once that process of experimental adaptation starts, moreover, the social scientist should then again do what she knows how to do: measure things.

She should be on hand to collect data and find out which sorts of real-world applications of knowledge extracted in the lab are actually working and which ones aren't. She can then share that new knowledge with more people who have practical knowledge about other settings that demand intelligent science communication -- and the process can be repeated.

And so forth and so on. Until what comes out is not a "how to" pamphlet but a genuine, evolving repository filled with vivid case studies, protocols, data collection and analysis tools and the like.

If you ask me for a facile check list of do's & don'ts, I won't give it to you.

Instead, I'll stick a baton of reliable information in your hand, so you can run the next lap in the advancement of our knowledge of how to communicate science in a democracy. I'll even time you!

Fourth, science communication is a public good.

Clean air and water confer benefits independent of individuals' contributions to them. Indeed, individuals' personal contributions to clean air and water tend not to benefit them at all -- it's what others, en masse, are doing that determines whether the air and water are clean.

Same thing with the science communication environment. We all benefit when ordinary citizens form accurate judgments about what the best evidence is on issues like climate change. Accordingly, we all benefit when we live in an information environment free of toxic social meanings. But the judgments any ordinary person forms, and the behavior he or she engages in that amplify or mute toxic meanings -- those have zero impact on him or her.

As a result, he or she and every other individual like him or her won't have sufficient incentive to contribute. There has to be collective provisioning of such goods.

We need government policy for protection of the science communication environment every bit as much we need it to protect the physical environment.

There's an important role for key entities in civil society too -- like universities and foundations.

NAS is modeling the active, collective provisioning of this good.  Many others must now follow its lead!

Sunday
May 6, 2012

Some data on CRT & "Republican" & "Democratic brains" (plus CRT & religion, gender, education & cultural worldviews)

This is the latest in a series of posts (see here, here, here, here ...) on the relationship between ideology &/or cultural worldviews, on the one hand, and cognitive reasoning dispositions, on the other.

I've now got some new data that speak to this question -- & that say things inconsistent with the increasingly prominent claim that conservative ideology is associated with low-level information processing.

If you already know all about the issue, just skip ahead to "2. New data"; if you are new to the issue or want a brief refresher, read "1. Background" first.

1. Background

As discussed in a recent post, a series of studies have come out recently that present evidence--observational and (interestingly!) experimental--showing that the tendency to use heuristic or system 1 information processing ("fast" in Kahneman terms, as opposed to "slow" systematic or system 2) is associated with religiosity.

I expressed some agitation on the absence of reported data on the relationship of system 1/system2 reasoning dispositions and ideology.

The source of my interest in such data is the increasing prevalence of what I'll call -- in recognition of Chris Mooney's role in synthesizing the underlying studies --  the Republican Brain Hypothesis (RBH). RBH posits a relationship between conservative political positions and use of low-effort, low-quality, biased, etc. reasoning styles. RBH proponents--  Mooney in particular-- conclude that this link makes Republicans dismissive of policy-relevant science and is thus responsible for the political polarization that surrounds climate change.

Although I very much respect Mooney's careful and fair-minded effort to assemble the evidence in support of RBH, I remain unpersuaded. First, RBH doesn't fit cultural cognition experimental results, which show that the tendency to discount valid scientific evidence when it has culturally non-congenial implications is prominent across the ideological spectrum (or cultural spectra).

Second, as far as I can tell, RBH studies have all featured questionable measures of low-level information processing. The only validated measures of system 1 vs. system 2 dispositions -- i.e., the only ones that have been shown to predict the various forms of cognitive bias identified in decision science -- are Shane Frederick's Cognitive Reflection Test (CRT) and Numeracy (CRT is a subcomponent of the latter).  The RBH studies tend to feature highly suspect measures like "need for cognition," which are based on study subjects' own professed characterizations of their tendency to engage in critical thinking.

So why are researchers who are interested in testing RBH not using (or if they are using, not reporting data on) the relationship between CRT & political ideology?

A few months ago, I reported in a blog post some data suggesting that being Republican and conservative has a small positive correlation with CRT. In other words, being a conservative Republican predicts being slightly more disposed to use systematic or system 2 reasoning.

The relationship was too small to be of practical importance -- to be a plausible explanation for political polarization on issues like climate change -- in my view. But the point was that the data suggested the opposite of what one would expect if one credits RBH!

The relationship between CRT and the cultural worldview measures was similarly inconsequential -- very small, off-setting correlations with Hierarchy and Individualism, respectively.

2. New data

Okay, here are some new CRT (Cognitive Reflection Test) data that reinforce my doubt about RBH (the "Republican Brain Hypothesis").

The data come from an on-line survey carried out by the Cultural Cognition Project using a nationally representative sample (recruited by the opinion-research firm Polimetrix) of 900 U.S. adults.

The survey included the 3-item CRT test, various demographic variables, partisan self-identification (on a 7-point scale), self-reported liberal-conservative ideology (on a 5-point scale) and cultural worldview items.

Key findings include:

  • Higher levels of education and greater income both predict higher CRT, as does being white and being male. These are all results one would expect based on previous studies.
  • Also consistent with the newer interesting studies, religiosity predicts lower CRT. (I measured religiosity with a composite scale that combined responses to self-reported church attendance, self-reported personal importance of religion, and self-reported frequency of prayer; α = 0.87).  
  • However, liberal-conservative ideology has essentially zero impact on CRT, and being more Republican (on the 7-point partisan self-identification measure; but also in simple binary correlations) predicts higher CRT. Not what one would expect if one were betting on RBH!
  • Being more individualistic than communitarian predicts higher CRT; being more hierarchical than egalitarian predicts essentially nothing. Also not in line with RBH, since these cultural orientations are both modestly correlated with political conservatism.
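The reliability coefficient reported for the religiosity scale (α = 0.87) is Cronbach's alpha. For readers unfamiliar with it, here is a minimal sketch of the computation; the three items and the responses below are invented for illustration and are not the survey data:

```python
# Cronbach's alpha for a k-item composite scale:
# alpha = k/(k-1) * (1 - sum of item variances / variance of item totals)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's responses across subjects."""
    k = len(items)
    item_vars = sum(variance(item) for item in items)
    totals = [sum(vals) for vals in zip(*items)]  # each subject's summed score
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Hypothetical responses of 5 subjects to 3 religiosity items on a 1-4 scale
church = [1, 2, 4, 3, 1]
importance = [1, 3, 4, 3, 2]
prayer = [2, 2, 4, 4, 1]
print(round(cronbach_alpha([church, importance, prayer]), 2))  # → 0.94
```

Alpha approaches 1 as the items co-vary, which is why a high value is taken as evidence that the three self-report items tap a single underlying disposition.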

Now, those are the simple, univariate correlations between the individual characteristics and CRT (click on the thumbnail, right, for the correlation matrix).

But what is the practical significance of these relationships?

 

To illustrate that, I ran a series of ordered logistic regression analyses (if you'd like to inspect the outputs, click on the thumbnail to left). The results indicate the likelihood that someone with the indicated characteristic would get either 0, 1, 2, or all 3 answers correct on the CRT test.
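For readers unfamiliar with ordered logistic regression, the probability model can be sketched in a few lines. The cut-points and the religiosity coefficient below are invented for illustration, not the estimates from these data; the point is only how a single coefficient shifts probability mass across the 0-3 CRT categories:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(xb, cuts):
    """Category probabilities for an ordered logit model.
    P(score <= k) = logistic(cut_k - xb); category probs are the differences."""
    cum = [logistic(c - xb) for c in cuts] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Made-up cut-points for the 0/1/2/3 CRT categories and a made-up
# (negative, per the post's finding) coefficient for high religiosity
cuts = [0.5, 1.5, 2.5]
beta_relig = -0.4

for relig in (0, 1):
    p = ordered_logit_probs(beta_relig * relig, cuts)
    print(relig, [round(q, 2) for q in p])
```

With a negative coefficient, the high-religiosity profile's probability mass shifts toward the 0-correct category, mirroring the pattern the post describes for the actual estimates.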

As illustrated in the Figures above, these analyses reveal that the impact of all of these predictors is concentrated on the likelihood that someone will get 0 as opposed to 1, 2, or 3 answers correct. That is, the major difference between people with the "high-CRT" characteristic and those with the "low-CRT" one is that the former are less likely to end up with a goose egg on the test.

Indeed, that's all that's going on for both religiosity and partisan self-identification; there's no significant (& certainly no meaningful!) difference in the likelihood that those who are high vs. low in religiosity, or who are Republican in self-identification vs. Democrat, will get 1, 2 or 3 answers correct--only whether they will get more than 0.

The likelihood of getting 1 or 2 correct, but not 3, is higher for men vs. women and for more educated vs. less educated individuals. But the differences -- all of them -- look pretty trivial to me. (Not that surprising; few people are disposed to engage in system 2 reasoning on a consistent basis.)

Note, too, that there's essentially no difference between "hierarchical individualists" and "egalitarian communitarians," the members of the cultural communities most divided on environmental issues including climate change. Also none when liberal-conservative ideology and party affiliation are combined.

These are models that look at the predictors of interest in relation to CRT but in isolation from one another. I think it's easy to generate a jumbled, meaningless model by indiscriminately "controlling" for covariates like race, religiosity, and even gender when trying to assess the impact of ideologies and cultural worldviews, or to "control" for ideology when assessing the impact of worldviews or vice versa; people come in packages of these attributes, so if we treat them as "independent variables" in a regression, we aren't modeling people in the real world (more on this topic in some future post).

But just to satisfy those who are curious, I've also included a "kitchen sink" multivariate model of that sort. What it shows is that religion, race, education, and income all predict CRT independently of one another and independently of ideology and cultural worldviews. In such a model, however, neither ideology nor cultural worldviews predict anything significant for CRT.

3. Bottom line

So to sum up -- when we use CRT as the measure of how well people process information, there's no support for RBH. In fact, the zero-order effect for political-party affiliation is in the wrong direction. But the important point is that the effects are just too small to be of consequence -- too tiny to be at the root of the large schisms between people with differing ideological and cultural worldviews over issues involving policy-relevant science.

What does explain those divisions, I believe, is motivated reasoning, a particular form of which is what we are looking at in studies of cultural cognition.  

The lack of a meaningful correlation between CRT, on the one hand, and cultural worldviews and political ideologies, on the other, is perfectly consistent with this explanation for risk-perception conflicts, because the evidence that supports the explanation seems to show that motivated reasoning is ample across all cultural and ideological groups.

Indeed, motivated reasoning, it has long been known (although recently forgotten, apparently), affects both system 1 (heuristic) and system 2 (systematic) reasoning.  Accordingly, far from being a "check" on motivated reasoning, a disposition to use system 2 more readily should actually magnify the impact of this sort of distortion in thinking.

That's indeed exactly what we see: as people become more numerate -- and hence more adept at system 2 reasoning -- they become even more culturally divided.

To be sure, being disposed to use heuristic reasoning -- or simply unable to engage in more technical, systematic modes of thought -- will produce all sorts of really bad problems. But the problem of cultural polarization over policy-relevant science just isn't one of them.

In my opinion, the sooner we get that, the sooner we'll figure out a constructive solution to the real problems of science communication in a diverse, democratic society.

Saturday
May 5, 2012

Krugman acknowledges cultural cognition (at least in others!)

The point of the cool Justin Fox post that I noted yesterday now has been seconded by Paul Krugman, who says he already knew this -- that cultural cognition constrains public acceptance of scientific evidence -- based on the failure of his own columns to persuade people who disagree with him:

Justin Fox has an interesting post documenting something I more or less knew, but am glad to see confirmed: People aren’t very receptive to evidence if it doesn’t come from a member of their cultural community. This has been blindingly obvious these past few years.

Consider what the different sides in economic debate have been predicting these past six or seven years. If you got your views from, say, the Wall Street Journal editorial page, you knew – knew – that there was no housing bubble, that America in 2008 wasn’t in recession, that budget deficits would send interest rates sky-high, that the Fed’s expansion of its balance sheet would produce huge inflation, that austerity policies would lead to economic expansion.

That’s quite a record. And yet I’m well aware that many people – including people with real money at stake – consider the WSJ a reliable source and people like, well, me flaky and unbelievable. Much of this is politics, of course, but that’s intertwined with culture: the kind of people who turn to the WSJ, or right-wing investment sites can clearly see that I’m a latte-sipping liberal who probably favors gay rights and doesn’t worship the financially successful (I actually prefer good filter coffee, black, but that’s otherwise accurate), and just not part of their tribe.

I suppose that in my quest to improve policy and understanding I should be trying to fit in better – lose the beard, learn to play golf, start using “impact” as a verb. But I probably couldn’t pull it off even if I tried. And as a result there will always be a large group of people who will never be moved by any evidence I present.

Friday
May 4, 2012

Blind Voter-Candidate Matchmaking Site to Reduce Partisan Bias in Voter Perception?

I'm eager to hear your reactions to Elect Your Match!, a website that would blindly match voters to presidential candidates based on the similarity of their responses to a series of policy statements. The voters and candidates respond to the same series of statements on a scale of slightly/moderately/strongly disagree or agree. The statements are candidate generated: they each submit five statements on separate issues, and respond to their own and their opponents’ statements on the same scale as voters, indicating whether they slightly/moderately/strongly disagree or agree with each one. The statements would not mention candidate or party identity. In choosing these statements, candidates define the primary policy issues at stake in their campaign.

There are sites making very good efforts along these lines (mentioned in the article), providing thorough information and showing visitors how candidates relate to their stance issue-by-issue, as well as generating a match based on any range of issues the visitor selects. Elect Your Match! would simplify these models to route visitors through one short standardized questionnaire that sets forth the primary election issues, defined by the candidates themselves, and recommend only one comprehensive best-matching candidate. Simplifying the site's primary interface to give only one comprehensive match based on a preset agenda might make it easier and more appealing for those less engaged in politics, who may not have a sense of what issues are most important to them or to the election. In order for the site to provide a single candidate match based on a preset agenda, it is important that the candidates themselves set the agenda defining the issues and provide their own responses, as opposed to a third party determining the issues and rating the candidates’ positions.
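The matching step itself is simple to sketch. A minimal version, assuming responses are coded on a six-point disagree/agree scale and using summed absolute difference as the distance measure (the candidate names and all responses below are made up), might look like this:

```python
# Responses coded on a six-point scale:
# strongly/moderately/slightly disagree = -3/-2/-1
# slightly/moderately/strongly agree    = +1/+2/+3

def best_match(voter, candidates):
    """Return the candidate whose statement responses are closest
    (smallest summed absolute difference) to the voter's responses."""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(candidates, key=lambda name: distance(voter, candidates[name]))

# Hypothetical responses to ten candidate-submitted statements
candidates = {
    "Candidate A": [3, -2, 1, 3, -1, 2, -3, 1, 2, -2],
    "Candidate B": [-3, 2, -1, -2, 3, -2, 3, -1, -2, 2],
}
voter = [2, -1, 1, 3, -2, 1, -3, 1, 1, -1]
print(best_match(voter, candidates))  # → Candidate A
```

Other distance measures (Euclidean distance, or per-statement weights reflecting how strongly the voter feels about each issue) would slot into the same structure.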

In addition to informing voters, a site like this could work to reduce the biasing effect of partisan identity on voters' perceptions of candidates. Studies suggest that voters overestimate the extent to which the positions of candidates sharing their partisan identity match their own policy preferences. In other words, voters erroneously “see their favorite candidates’ stands as closer to their own and opposing candidates’ stands as more dissimilar than they actually were.” Larry M. Bartels, The Irrational Electorate, The Wilson Quarterly (Autumn 2008). Studies also suggest that voters more readily learn information about candidates that is congenial to their partisan identity, and discount facts that are not. Jennifer Jerit & Jason Barabas, Partisan Perceptual Bias and the Information Environment, Presented at the 2011 annual meeting of the Southern Political Science Association.

I’m curious about how a site like this advances the goals of CCP: on one hand, it informs voters as to the candidate that really best matches their own outlook, and aims to minimize partisan identity-based bias in evaluating candidates. On the other hand, one seeking to advance the goals of CCP might desire a means of promoting more interpersonal deliberation, which could perhaps do more to update viewpoints and build consensus around polarizing issues in the election (see also Bruce Ackerman & James Fishkin, Deliberation Day (2004)). As is suggested in the article, the site might have a deliberative component that allows interested visitors to browse more deeply than the primary questionnaire, entering issue-specific segments of the site that would prompt them to interact with or respond to statements presenting arguments on either side of the issue. Perhaps these issue-specific segments could host an ongoing conversation, posting visitors’ comments and responses to arguments on either side of the issue.

Friday
May042012

Cultural cognition & expert assessments of technological innovation

There's a great blog post by Justin Fox over at the Harvard Business Review's HBR Blog.

Fox argues that cultural cognition dynamics are likely to influence not only public perceptions of risk but also market-related assessments and decisionmaking within groups one might expect to be more focused on money and data than on meaning.

As illustration, he offers an amusing (for the reader) account of the reception afforded a recent column of his on expert assessments of technological innovation in the internet era.

I wrote a post here at hbr.org on whether the Internet era has been a time of world-changing innovation or a relative disappointment. It was inspired by comments from author Neal Stephenson, who espoused the latter view in a Q&A at MIT. His words reminded me of similar arguments by economist Tyler Cowen (if I had enough brain cells to remember that Internet megainvestor Peter Thiel had been saying similar things, I would have included him, too). So I wrote a piece juxtaposing the Stephenson/Cowen view with the work of MIT's Erik Brynjolfsson, who has been amassing evidence that a digitization-fueled economic revolution is in fact beginning to happen.

If I had to place a bet in this intellectual race, it would be on Brynjolfsson. I've seen the Internet utterly transform my industry (the media), and I imagine there's lots more transforming to come. But I don't have any special knowledge on the topic, and I do think the burden of proof lies with those who argue that economic metamorphosis is upon us. So I wrote the piece in a tone that I thought was neutral, laced with a few sprinklings of show-me skepticism.

When the comments began to roll in on hbr.org, though, a good number of them took me to task for being a brain-dead, technology-hating Luddite. And why not? There's a long history of journalists at legacy media organizations writing boneheaded things about the Internets being an abomination and/or flash in the pan (one recent example being this screed by Harper's publisher John McArthur). Something about my word choices and my job title led some readers to lump me in with the forces of regression, and react accordingly.

When I saw that Wired.com had republished my post, I cringed. Surely the technoutopians there would tear the piece to nanoshreds. But they didn't. Most of the Wired.com commenters instead jumped straight into an outrage-free discussion of innovation past and present.

That's probably because, if there is one person in the world whom Wired.com readers consider a "knowledgeable member of their cultural community," it is Neal Stephenson. This is the man who described virtual reality before it was even virtual, after all. I'm guessing that Wired.com readers were conditioned by the sight of Neal Stephenson's name at the beginning of my post to consider his arguments with an open mind. Here at hbr.org, where we don't require readers to have read the entire Baroque Cycle before they are allowed to comment, Stephenson was just some guy saying things they disagreed with.

Fox's assessment of the tendency of people to credit arguments of experts with whom they have a cultural affinity is consistent with our HPV study. But what's really cool is that the reaction of the Wired.com readers shows how a group that might be culturally predisposed to reject a particular message will actually give it open-minded consideration when they see that it originates (or at least has received respectful and serious attention) from someone with whom they identify.

Anyway, I'm psyched to learn that Fox sees our methods and framework as relevant to the market-related phenomena he writes on -- not only because it's cool to think that cultural cognition can shed light on those things but also because I really loved his Myth of the Rational Market. Was tied (with The Clockwork Universe) for best book I read all of last yr!

Saturday
Apr282012

A "frame" likely to generate consensus that climate change is not happening (and/or that geoengineering is safe)

Interesting piece, my guess is that this idea could actually end polarization over climate change -- by furnishing egalitarians and hierarchs alike strong emotional motivation to deny there's any danger after all! 

Also, although the author maintains that engineering humans is "safer" than geoengineering, my guess is that people would see geoengineering itself as less risky when they consider it in relation to "human engineering" than when they consider it on its own  -- precisely b/c human engineering is pretty much the creepiest thing that anyone can imagine.

Which isn't to say the author's argument is wrong on the merits!

 

Friday
Apr272012

More religion & CRT--where's ideology & CRT?!

Science this week published an article that finds low CRT predicts religiosity & that backs this finding up w/ experimental data.

It's a really excellent study. The experiments were ingenious. It should be pointed out, though, that this finding corroborates another excellent study: Shenhav, A., Rand, D.G. & Greene, J.D. Divine intuition: Cognitive style influences belief in God. Journal of Experimental Psychology (2011), advance online doi:10.1037/a0025391.

I'm waiting, patiently, for someone to publish some data on correlation between CRT & liberal-conservative ideology. As I've noted before, data that CCP has collected suggests that there is virtually none -- or that there are weak offsetting correlations between different cultural dimensions of conservatism (hierarchy & individualism).

The reason I'm waiting is that such data would contribute a lot to the increasing interest in the relationship between ideology & quality/style of cognitive processing (the Republican Brain hypothesis or "RBH," let's call it). Shane Frederick's CRT scale & Numeracy (which incorporates CRT) are the only validated indicators of the disposition to use systematic or System 2 reasoning as opposed to heuristic or System 1. So it would, of course, be super useful to see what the CRT verdict is on whether conservatives & liberals differ in processing.
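
The test in question is statistically unexotic: a simple correlation between CRT scores (0-3 items correct on Frederick's three-item test) and a left-right ideology measure. A toy sketch follows; the data are invented purely for illustration (they are not CCP's or anyone else's), and the function is just a plain Pearson r.

```python
# Toy illustration: Pearson correlation between CRT scores (0-3 items
# correct on Frederick's three-item test) and a 7-point liberal-conservative
# scale. The data below are fabricated purely for illustration.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

crt      = [0, 1, 3, 2, 0, 2, 1, 3, 0, 1]   # reflection scores, 0-3
ideology = [4, 2, 5, 3, 6, 4, 5, 2, 3, 5]   # 1 = very liberal, 7 = very conservative

print(round(pearson_r(crt, ideology), 3))   # -> -0.259
```

An r near zero in a large sample would be exactly the "virtually none" result described above; the RBH, by contrast, predicts a reliably negative correlation (higher CRT, less conservative) on a scale coded this way.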

Being patient while waiting is becoming more difficult. I've got to believe that such evidence is already in hand; given the interest in the RB hypothesis, surely someone (likely multiple people) have thought to try to test it w/ the CRT measure. It would be sad to discover that the reason the data haven't been reported is that they don't fit the hypothesis -- that is, don't show that liberals are more "systematic" or System-2 disposed in their thinking. 

Actually, I suppose I do have data in hand -- but at least I've blogged on them!

Oh-- if I'm wrong to think that this is a matter on which no one has yet presented data, please tell me and I'll happily acknowledge my error & share the relevant references w/ other curious people. 

Saturday
Apr212012

Deliberations & identity formation

CCP member John Gastil, along w/ co-authors, has a new article out presenting evidence that highly participatory forms of democratic deliberation promote a distinctive shared identity that transcends more particular and potentially divisive ones, such as those founded on cultural affiliations.

The analysis was largely qualitative: a case study based on impressionistic analyses of transcripts from citizen deliberations associated with the Australian Citizens' Parliament. I know JG has more data on the Australian Citizens' Parliament in hand, including some that admit of more systematic analysis. Good way to do research, since the convergence of results from more interpretive forms of empirical analysis and more quantitative ones -- if they do indeed converge! -- makes the conclusions of both more worthy of being credited.

I know from experience that collective deliberations on baseball are not sufficient to enable Gastil to transcend his partisan cultural identity as a Tigers fan.

Felicetti, A., Gastil, J., Hartz-Karp, J. & Carson, L. Collective Identity and Voice at the Australian Citizens' Parliament. Journal of Public Deliberation 8, article 5 (2012):

This paper examines the role of collective identity and collective voice in political life. We argue that persons have an underlying predisposition to use collective dimensions, such as common identities and a public voice, in thinking and expressing themselves politically. This collective orientation, however, can be either fostered or weakened by citizens’ political experiences. Although the collective level is an important dimension in contemporary politics, conventional democratic practices do not foster it. Deliberative democracy is suggested as an environment that might allow more ground for citizens to express themselves not only in individual but also in collective terms. We examine this theoretical perspective through a case study of the Australian Citizens’ Parliament, in which transcripts are analyzed to determine the extent to which collective identities and common voice surfaced in actual discourse. We analyze the dynamics involved in the advent of collective dimensions in the deliberative process and highlight the factors—deliberation, nature of the discussion, and exceptional opportunity—that potentially facilitated the rise of group identities and common voice. In spite of the strong individualistic character of the Australian cultural identity, we nonetheless found evidence of both collective identity and voice at the Citizens’ Parliament, expressed in terms of national, state, and community levels. In the conclusion, we discuss the implications of those findings for future research and practice of public deliberation.

 

Thursday
Apr192012

Ethical guidelines for science communication informed by cultural cognition research

People often express concern to me about the normative implications of research that identifies how cultural cognition influences perception of risk and related facts and how those influences can be anticipated in structuring science communication.

I am glad they are concerned, because I am, too. If I thought that people who consume our research did not reflect on such concerns, I'd be even more worried about what I do. Knowing that others see normative issues here also means that I can share with them my own responses & see if they think I've got things right &/or can do better.

Some "Guidelines" follow. But they are not really "guidelines" in the sense of a codified set of rules or standards (I'm skeptical, in fact, that anything morally complicated can be handled with such things). Rather, they are more like prototypes that when considered together reflect what for me seems the right moral orientation to our work.  Would be happy to receive & post additional "guidelines" of this nature (along w/ any commentary their authors wish to append) & also grateful to receive feedback from anyone who takes issue with any of these or with the attitude/orientation they are meant to convey.

1. No lying. No need for elaboration here, I trust.

2. No manipulation. Likely also self-explanatory, but an example might be useful. Consider how Merck tried to shape public opinion toward Gardasil, its HPV vaccine: by using secret campaign contributions to "persuade" a southern, religious, conservative politician -- Texas Governor Rick Perry -- to issue an executive order mandating vaccination of middle school girls.

It was fine for Merck to try to assure that parents would learn about the benefits of the vaccine. It wasn't even wrong for it to enlist communicators whose cultural identities would make them credible sources of sound information.

But it should have been open that it was trying to engage people this way.

Obviously, the whole immoral plan blew up in Merck's face--actually generating distrust of Gardasil among a diverse range of cultural groups. Nice work, gun-for-hire, private-industry counterparts of those who study the science of science communication in order to promote the common good!

But the strategy would have been wrong even if Merck had gotten away with it, because it was managing the information environment in a way that the message recipients would themselves have resented. It was exploiting people's reasoning, not enabling it.

3. Use communication strategies and procedures only to promote engagement with information--not to  induce conclusions. Some people say that cultural-cognition informed communication strategies are a form of "marketing." Fine, I say. So long as what's being marketed is not a preferred position on an issue of science & policy but rather a decisional state or climate in which people who want to make decisions based on the best available scientific information are most likely to take note of and give open-minded consideration to it. 

The HPV-vaccine disaster again supplies an example. Parents of all cultural worldviews want to have the best available information on how to promote the health of their children. It would be perfectly fine, in my view, for a communicator to use cultural cognition research to identify how to promote open-minded engagement with information on the HPV vaccine.  

So if public health officials self-consciously decided to rely on a culturally diverse array of honestly motivated science communicators in order to forestall creation of any perception that positions on the vaccine were aligned asymmetrically with cultural outlooks--that would have been okay.

Also would have been okay to have resisted Merck's stupid, market-driven decision to seek fast-track approval of a girls-only vaccine and to promote inclusion of it on the schedule of mandatory school vaccinations--a marketing strategy that made cultural polarization highly likely.  Parents who love their children wouldn't want to be put into a communication environment in which their honest assessment of the health needs of their daughters or sons would be distorted by culturally antagonistic meanings unrelated to health.

4. Use strategies and procedures to promote engagement only when you have good reason to believe that engagement fits the aims and interests of information recipients. Parents trying to decide what is in the best health interests of their children want to engage the information from the mindset that best promotes an accurate assessment of the evidence. But sometimes people want to engage information in a way that reliably connects them to stances that fit their cultural style. Leave them alone; so long as they aren't hurting anyone else, they are entitled to manage their personal information environment in a way that promotes contact with their own conception of the good life.

5. Don't help anyone who has ends contrary to these guidelines. Like, say, a pharmaceutical company that in its drive to make a buck is willing to manipulate people by covertly inducing individuals they trust to vouch for the effectiveness and safety of some treatment.

6.  Do help anyone -- regardless of their cultural worldview -- who is genuinely seeking to promote reflective engagement with information when such engagement fits the interests and aims of recipients. Like, say, a pharmaceutical company that wants to make a buck by openly and without manipulation satisfying the interest that people have in being able to consider scientifically valid information about the effectiveness and risks of a vaccine. 

Tuesday
Apr172012

MPSA climate change panel: report & slides

On Friday I was on a Midwest Political Science Association panel on public opinion & climate change. I presented Tragedy of the Risk Perceptions Commons (slides here). 

Michael Tesler presented interesting data that he argued show that elite rhetoric, not motivated cognition, accounts for political divisions on climate change. I have a hard time conjuring the psychological model that would see the two operating independently of each other; to me they are not discrete mechanisms but steps in a process (elite cues help create & transmit the meanings that then motivate cognition for ordinary individuals). I also wasn't sure exactly how the data supported the inference, but I'm eager to see the write-up, at which point I'll either get it or explain why I don't think he is right!

Alexandra Bass presented data on media content to show that values influence climate change perceptions. The presentation was great. But I have to say I don't really get media-content studies in general; they seem to draw inferences the validity of which depends on the ratio of the frequency of content to the frequency of events in the world--something for which the analyses never present any data. I didn't get a chance, though, to read Bass's paper, so I will, & see if that helps me.

Mathew Nowlin, a member of Hank Jenkins-Smith's amazing risk-perception group at the Center for Applied Social Research at the University of Oklahoma, presented a cool paper on education, climate change knowledge, and political polarization.

Finally, Rebecca Bromley-Trujillo backed data out of the American National Election Study to support the hypothesis that "core political values"-- "such as equality"-- "are an important predictor of climate change attitudes, beyond other standard determinants of political attitudes, like partisanship or ideology." I found the claim convincing, but I was admittedly predisposed to believe it.

Monday
Apr162012

Where is "what does Trayvon Martin case mean, part 3"?

It's coming soon. But not before I get done learning from my class what they think. I also learned a lot from Randy Kennedy's lecture at Leslie College last week. I hope he writes up his lecture so that others can think about his reflections as well (I'm sure I'll say more about Kennedy in "part 3").

Saturday
Apr142012

Cultural cognition--plus lots of other relevant things-- & nuclear energy: experts *get it*

Came across a great blog post on public perceptions of nuclear risk at the Neutron Economy & then found a thoughtful reaction to it at Areva North America: Next Energy Blog.

In addition to being well-crafted and informative, the posts were immensely heartening.

Written by and for people who do work relating to nuclear energy, both displayed keen awareness of the science of public risk perceptions and science communication. (Cultural cognition was  featured, but was--very appropriately--not the only dynamic that was addressed.)  

What's more, rather than the frustrated hand-wringing and finger-pointing that experts (and many others) often (understandably but not helpfully) display when confronted with public controversy over risk, both evinced an uncomplaining, matter-of-fact dedication to making sense of how the public makes sense of the world.

From Neutron Economy:

To summarize - providing education and facts are good, useful even - but on their own insufficient without presenting those facts in a context which engages with the deeply-held values of the audience. To produce actual engagement - and even inducement to support - requires a producing a context of facts compatible with the values of those one is trying to reach. In other words, for the case of nuclear, it means going beyond education and comparative evaluation of risk (again, to emphasize, both of which are valid in and of themselves) and placing these within the framework of how this speaks to the values of the audience....

[I]t is the job of the nuclear professionals (as members of the "technical community") to do our best to provide an accurate technical framework for these evaluations of risk by the public, such that they can make the most sound decisions on risk. Meanwhile it is the job of nuclear communicators and advocates to speak to values, as to produce more fair evaluations of both the benefits and risks of nuclear, particularly in the context of available energy choices.

From Areva North America: Next Energy Blog

So, “pure” facts don’t tend to change our minds very often. And surprisingly, presenting facts alone when encouraging a new perspective can often result in the opposite effect on people who disagree....

Which naturally leads to our next question, “If cultural influence is so strong on perceiving facts, is trying to educate people of the beneficial facts about nuclear energy hopeless?”

We agree with Steve’s answer, “Not at all.”

But the key is to frame our factual and technically accurate answers within the cultural framework understanding of those we are trying to engage.

Reading these words made me believe that it is not at all unrealistic to anticipate that the practice of science will in the not too distant future be happily and productively integrated with the science of science communication.

Thursday
Apr122012

Is evoking emotion a means of communicating "factual information" on risk and the like? The Wittlin test

I would say "yes, so long as..." and then launch into a long, abstract account of emotion as a form of cognitive perception that is uniquely suited to apprehending the significance of information for goods a person values (see Damasio, Descartes' Error; Nussbaum, Upheavals of Thought) but that is also vulnerable to bias and hence manipulation, blah blah...

Maggie Wittlin, however, has sent me an email that convinces me there is a much simpler answer: unconditionally "yes" or unconditionally "no," depending on what the emotional appeal is about and what the cultural worldview is of the person answering the question!

Two recent cases (one argued today) seem to be asking the question: are images that cause strong emotional reactions toward the subject matter informative?  Or are they mere advocacy?  I think you'll get two different answers based on (1) whether you ask an egalitarian or a hierarch (serious individualists might be consistent) and (2) which case you ask about:

On the right, we have the Texas sonogram case, where CJ Edith Jones writes, "Though there may be questions at the margins, surely a photograph and description of its features constitute the purest conceivable expression of 'factual information.' If the sonogram changes a woman’s mind about whether to have an abortion -- a possibility which Gonzales says may be the effect of permissible conveyance of knowledge, Gonzales, 550 U.S. at 160, 127 S. Ct. at 1634 -- that is a function of the combination of her new knowledge and her own 'ideology' ('values' is a better term), not of any 'ideology' inherent in the information she has learned about the fetus."

On the left, we have the challenge to the FDA cigarette warning label regulations, where "Stern also argued today that smokers do not fully understand tobacco’s harmful effect on health. The images, he argued, communicate the risk of smoking more effectively than do text warnings."  On the other hand, "Noel Francisco, representing R.J. Reynolds Tobacco Co. in the dispute, said the labels cross the line from fact-based to issue advocacy. The government is triggering a negative emotional reaction."

 

 

Sunday
Apr082012

What does the Trayvon Martin case mean? What *should* it mean? part 2

In part 1, I argued that what the Trayvon Martin case means won’t turn on what the facts are found to be.

On the contrary, what we understand the facts to be will turn on what the case means to us as members of one or another cultural group.

Public reactions to the case display the characteristic signature of cultural cognition--the tendency of people to fit the perception of legally consequential facts to their group commitments.

The influence of cultural cognition explains why people with different outlooks and identities are forming such strong and divergent understandings of what happened despite their having almost no clear evidence to go on.

And it predicts (on the basis of experimental studies) that they are likely to continue to be divided just as bitterly no matter how much evidence comes to light—even if it turns out, say, that an unobserved neighbor made a digital recording of the attack with his or her cell phone (or high-resolution camera).

But as I said in my last post, this conclusion doesn’t mean there’s no point talking about the case. We should be addressing the meanings that divide us on an issue like this, because they divide us on lots of things—not just the use of violence by individuals of one race on those of another, or even the use of it by the police against private citizens, but also matters as diverse as whether climate change is occurring or whether schools should vaccinate pre-adolescent girls against HPV.

This sort of division, in my view, is a barrier to our coming to democratic consensus on a wide variety of policies that promote our common welfare in ways perfectly compatible with our diverse cultural values.

The question, in my view, is how we might use the Trayvon Martin case as an occasion for a meaningful discussion about meanings in our political life.

In this post, I’ll identify how not to do it.

2.  Replaying history: “shall issue,” “stand your ground,” and the culture of honor 

It turns out that we have been “discussing” cultural meanings since pretty much the start of this affair. But we’ve been doing it in the idiom of culturally motivated empirical assertions about the impact of law.

Two laws, in particular—one relating to guns and the other to the use of self-defense.

Florida is one of the 38 states with so-called “shall issue” laws, which essentially mandate that any adult citizen who has not been convicted of a felony or diagnosed with a mental illness be issued a permit to carry a concealed firearm in public.

It is also one of a dozen or so states that have recently enacted “stand your ground” laws, which provide that a person “who is attacked in any [public] place where he has a right to be has no duty to retreat” before resorting to deadly force to defend him- or herself from a potentially lethal assault. (Media reports miscalculate the number—apparently counting laws that existed before the recent spate of “stand your ground” enactments and also mixing in ones that relate to the use of deadly force in the home.)

George Zimmerman, the shooter in this case, was carrying a concealed handgun pursuant to a “shall issue” license. He also asserts that his fatal shooting of Martin—whom Zimmerman was tailing because he looked “suspicious”—was an act of self-defense.

Unsurprisingly, there has been a barrage of commentaries attributing violent assaults to “shall issue” and “stand your ground” laws, and a counter-barrage crediting these laws with reducing the incidence of violent crime.

These empirical arguments are specious. Indeed, they are part and parcel of a longstanding cultural division in our political life. Zealots who crave (or indeed profit from) such debate are exploiting the Trayvon Martin case to deepen that division—crowding out discussion of things that really matter.

a. The evidence. There is no persuasive empirical evidence that “shall issue” laws have any impact on the rate of violent crime.

Don’t take my word for it: that's the conclusion the National Academy of Sciences reached in an “expert consensus” report, which examined numerous empirical studies on the matter and concluded that it was simply impossible to say one way or another whether such laws increase crime or instead decrease it as a result of their effect in deterring violent predation.

The evidence on how “stand your ground” laws have affected violent-crime rates is no more conclusive. Indeed, it’s hard to conceive of how it could be.

These laws have all been enacted in the last decade. Yet the rule that a person can “stand his ground”—that he has no duty to retreat before using deadly force in self-defense—has been the majority rule among U.S. states for over a century. It was already the rule, in fact, in many of the states that have recently adopted “stand your ground” laws (e.g., Georgia, Indiana, Kentucky, Montana, Oklahoma, Utah, Washington, and West Virginia).

Before it enacted its “stand your ground” law, Florida apparently did make the lawful use of deadly force in self-defense conditional on a duty to avail oneself of any safe route of retreat, at least when an individual was attacked outside his or her home. But violent crime has decreased in that state over the last decade.

Indeed, violent crime has decreased throughout the U.S. during that time. Identifying all the potential causes for this trend, and disentangling them from one another in order to determine what impact (if any) enacting or not enacting a “stand your ground” law has had on the velocity of crime abatement in any particular state, would involve overcoming all the statistical difficulties that led the National Academy of Sciences to toss its hands up in the air when it tried to measure the impact of “shall issue” laws on violent crime.

Any commentator who asserts with confidence that either “stand your ground” laws or “shall issue” laws increase or decrease crime simply doesn’t know what he or she is talking about.

b. Culture, cognition, and political opportunism. What there is persuasive empirical evidence of, however, is the biasing impact of cultural cognition on individuals’ assessments of the impact of laws like these.

Individuals with egalitarian, communitarian values—for whom the gun is a noxious symbol of patriarchy, racism, indifference to others, and hostility to reason—predictably construe the evidence as showing that lax gun control laws increase deadly violence.

In contrast, those with hierarchical and individualistic worldviews—for whom the gun is associated with positive values such as courage, self-reliance, and honor—predictably fit their perceptions of the evidence to the culturally congenial conclusion that shall issue laws decrease homicide rates.

As a result of these same dynamics, moreover, they both tend to misperceive that the weight of expert evidence is on their side.

The same cultural divisions mark reactions to the duty to retreat in self-defense laws. Indeed, the advent of the “stand your ground” movement is intimately connected to cultural conflict over guns.

As indicated, the motivation for these statutes wasn’t to change the law. On the contrary, it was to provoke culturally grounded conflict.

The biggest threat to the gun industry is not that guns will be regulated out of existence. It is that future generations of Americans, as they become progressively more removed from the cultural norms that motivate people to buy guns, will simply lose interest in owning them.

Orchestrated by the NRA, the campaign to enact “stand your ground” laws is a booster shot for those norms. By design, “stand your ground” laws radiate individualistic and hierarchical values. The enactment of them—particularly over the predictable, and predictably strident, opposition of groups associated with egalitarian and communitarian values—broadcasts the vitality of a pro-gun ethos, a signal that can be expected to inculcate the same in those who receive that signal.

c. We’ve seen this before; enough already! The cultural battle over “stand your ground” laws is actually an historical replay.

Just over a century ago, courts in the South and West adopted the “no retreat” rule. They called it the “true man” doctrine, a label that recognized that a man whose character is “true” (that is, in order, or straight, like a “true beam”) appropriately values his own liberty more than the life of someone who wrongfully threatens it.

Northeastern jurists and commentators denounced this departure from the traditional “retreat to the wall” position as an expression of the “feeling which is responsible for the duel, the war, for lynching.” The echo of the Civil War reverberated through this legal debate for some three decades.

Then, in one of the most brilliant demonstrations of statesmanship in the history of American jurisprudence, Justice Holmes defused this controversy by draining it of its expressive significance.

It’s futile, he reasoned in the 1921 decision of Brown v. United States, for the law to demand that someone who faces a deadly threat “pause to consider whether a reasonable man might not think it possible to fly with safety.” “Detached reflection cannot be demanded in the presence of an uplifted knife.”

Just like that, the “true man doctrine” became the “scared shitless man defense.” The South and the West got the rule they wanted, but only after it had been gutted of the meaning that galled the Northeast.

Everyone lost interest, and the issue went away. Gun control essentially took its place as the front line of the battle over the status of honor norms in U.S. law and culture.

But then, 85 years later, the NRA came to the brilliant realization that it could subsidize the culture war over guns by reviving the “true man” doctrine in the form of the new, Clint-Eastwoodesque “stand your ground” laws.

Not surprisingly, the most receptive states were located in regions of the country that already had the “true man” doctrine.

But no matter: the point wasn’t to change the law; it was to agitate and inflame.

The NRA could count on agitation, of course, only if the egalitarian communitarian opponents of the honor culture—the descendants of the “true man” critics—took the bait. Which, of course, they have done. They’d be out of work too without this sort of conflict.

Hey—I didn’t know him. But I think I can safely say, “You are no Justice Holmes,” to the legions of commentators now seizing on the Trayvon Martin case as an occasion to raise the volume in equally tendentious and tedious “shall issue” and “stand your ground” debates.

I’d also like to tell them to just back off. Not only are you needlessly sowing division; you are destroying the prospects for a meaningful conversation about the values that—despite our cultural differences—in fact unite us.

References

Dan M. Kahan, The Secret Ambition of Deterrence, 113 Harv. L. Rev. 413 (1999).

Dan M. Kahan & Donald Braman, More Statistics, Less Persuasion: A Cultural Theory of Gun-Risk Perceptions, 151 U. Pa. L. Rev. 1291 (2003).

Dan M. Kahan, Donald Braman, Geoffrey Cohen, John Gastil & Paul Slovic, Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition, 34 Law & Hum. Behav. 501 (2010).

Dan M. Kahan, Donald Braman, John Gastil, Paul Slovic & C. K. Mertz, Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception, 4 J. Empirical Legal Studies 465 (2007).

Dan M. Kahan, Hank Jenkins-Smith & Donald Braman, Cultural Cognition of Scientific Consensus, 14 J. Risk Res. 147 (2011).

Dan M. Kahan, The Cognitively Illiberal State, 60 Stan. L. Rev. 115-54 (2007).

Saturday
Apr072012

Another cool book: van Rijswoud, Public faces of science

Found another really great book on-line:

Erwin van Rijswoud, Public faces of science: Experts and identity work in the boundary zone of science, policy and public debate (Radboud University Nijmegen, 2012).

It's actually van Rijswoud's doctoral dissertation.

But anyway, the work examines Dutch scientists' impressions of how their work and expertise were received in various public policy debates, including ones on H1N1 vaccination, flood control, and HPV vaccination of adolescent girls.

The analyses are based on "biographical narrative." At the beginning of the work, he explains this method, which involves analytically motivated synthesis of interviews with the scientists, supplemented with other materials, and presented in a form that uses story-telling elements not typical at all for social science work (unlike typical ethnography, the voice is much more internal, almost "first person"). 

I was really interested in vR's discussion of HPV, an issue the CCP group has also studied. I hadn't realized that the issue was controversial in the Netherlands, too (likely I should be embarrassed to say that). I did know that England didn't have any trouble implementing a national immunization program, so there are definitely some great lessons to be learned through comparative study.

Also hadn't realized that there was political dispute over expert flood control advice in the Netherlands. Actually, efficient flood management in Holland & other regions of the country is often offered as an example of what the successful integration of science into policymaking is supposed to look like!

Thanks to van Rijswoud & Radboud University for making his work widely available & at no charge!

Friday
Apr062012

What does the Trayvon Martin case mean? What *should* it mean? part 1

If one were to judge from the media coverage—the dueling depictions of the characters of the shooter and his victim; the minute dissections of fragmentary witness statements; the “expert” voice-identification of screams picked up in the background of a 911 call; the high-resolution scrutiny of low-resolution video footage of the shooter in police custody that reveals the existence/absence of telltale wounds—one would think that the significance of the Trayvon Martin case turns (or ultimately will turn) decisively on the facts.

In actuality, the opposite is true: the significance we attach to the case will determine our perception of the facts; and because what it signifies turns on cultural meanings that divide our society, the members of different groups will form highly opposed understandings of what happened that terrible night.

Does that mean it’s pointless to be discussing the case?

On the contrary. In my view, the public agitation the case has provoked is evidence of how important it is for us to have a public conversation about the diversity of our cultural outlooks and their relation to law, and that this case is an ideal occasion for addressing that issue.

But if we insist that the discussion take the form of competing, culturally partial (and even culturally partisan) renditions of the facts, we are highly unlikely to engage the real issues in a universally meaningful way. And in that circumstance, we can be sure that the sources of agitation will persist.

I have more to say than it makes sense to put in one post.  So regard this as installment 1 of 3.

1. Meanings are cognitively prior to fact

The Trayvon Martin case, polls unsurprisingly reveal, divides people along cultural lines.

In this sense, it is very much like a host of other high-profile types of cases: public altercations leading to a mixed-race killing (think Bernard Goetz and Howard Beach); the slaying (or mutilation; think Lorena Bobbitt) of sleeping men by female partners who allege chronic abuse; the prosecutions (William Kennedy Smith)—or not (Duke lacrosse)—of men alleged to have disregarded women's verbal resistance to sexual intercourse; forceful arrests of political protestors (Occupy Wall Street; Operation Rescue) pepper-sprayed by police—or of fleeing drivers whose bodies are broken by the impact of their crashing cars (Scott v. Harris) or the fusillade of baton blows of their pursuers (Rodney King).

CCP has conducted experimental studies of cases like these. What we have found, in all of these contexts, is that people unconsciously form perceptions of fact that reflect their stance on the cultural meanings the cases convey.

Those committed to norms of honor and self-reliance, on the one hand, and those who value equality and collective concern, on the other; those who believe women warrant esteem for mastery of traditionally female domestic roles and those who believe women as well as men should be conferred status for success in civil society; those who place a premium on respect for authority and those who apprehend the abuse of it as a paramount evil—all see different things in these types of cases, even when they are forming their perceptions on the basis of the same evidence.

Moreover, members of all these groups know that what one sees (or claims to see; each group always suspects the other of disingenuousness) depends on who one is culturally speaking.

As a result, in controversies over these sorts of cases, those on both sides come to view competing factual claims as markers of opposing allegiances. The ultimate resolution of these facts in courts of law, in turn, becomes evidence of who counts and who doesn’t in our society.

These are identity-threatening conditions. It is the extreme anxiety they provoke that explains how, despite knowing next to nothing about what actually happened—because we have nothing more to go on than factual snippets embroidered with righteous denunciation in the media, or antiseptic renditions of the “facts of the case” in appellate reporters—we nevertheless become filled with passionate certitude about the events. The discovery that others disagree with us fills us with incredulity and rage.

And most extraordinary of all, this same environment of symbolic status competition explains why such disagreement persists in the face of the most compelling forms of evidence of all. Even when we literally see the events with our own eyes—when they are recorded on video, for example—cultural cognition assures that we will disagree about what we are seeing.

We will disagree, in such instances, with those who hold values different from ours when we watch what we understand to be the same event.

Moreover, we will disagree with those who share our values if, as a result of a hidden experimental manipulation, we start with different impressions of the sort of event (abortion-clinic protest, or anti-war protest) we are watching.

Barely detectable above the cacophony in the Trayvon Martin case are a few lonely voices cautioning us not to jump to conclusions. We don’t really know enough about what happened, they rightly point out, to form such strong opinions.

But the truth is, we’ll never know what happened, because we—the members of our culturally pluralistic society—have radically different understandings of what a case like this means.

The questions are whether it makes sense to talk about that, and if so, what should we be saying?

References

Dan M. Kahan & Donald Braman, The Self-defensive Cognition of Self-defense, 45 Am Crim Law Rev 1 (2008).

Dan M. Kahan, The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law, 126 Harv. L. Rev. 1 (2011).

Dan M. Kahan, Culture, Cognition, and Consent: Who Perceives What, and Why, in 'Acquaintance Rape' Cases, 158 U. Pa. L. Rev. 729 (2010).

Dan M. Kahan, David A. Hoffman, Donald Braman, Danieli Evans & Jeffrey J. Rachlinski, They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction, 64 Stan. L. Rev. (forthcoming 2012).

Mark Kelman, Reasonable Evidence of Reasonableness, 17 Critical Inquiry 798-817 (1991).