follow CCP

Recent blog entries

Some data on CRT & "Republican" & "Democratic brains" (plus CRT & religion, gender, education & cultural worldviews)

This is the latest in a series of posts (see here, here, here, here ...) on the relationship between ideology &/or cultural worldviews, on the one hand, and cognitive reasoning dispositions, on the other.

I've now got some new data that speak to this question -- & that say things inconsistent with the increasingly prominent claim that conservative ideology is associated with low-level information processing.

If you already know all about the issue, just skip ahead to "2. New data"; if you are new to the issue or want a brief refresher, read "1. Background" first.

1. Background

As discussed in a recent post, a series of studies have come out recently that present evidence--observational and (interestingly!) experimental--showing that the tendency to use heuristic or system 1 information processing ("fast" in Kahneman terms, as opposed to "slow" systematic or system 2) is associated with religiosity.

I expressed some agitation about the absence of reported data on the relationship between system 1/system 2 reasoning dispositions and ideology.

The source of my interest in such data is the increasing prevalence of what I'll call -- in recognition of Chris Mooney's role in synthesizing the underlying studies -- the Republican Brain Hypothesis (RBH). RBH posits a relationship between conservative political positions and use of low-effort, low-quality, biased, etc. reasoning styles. RBH proponents -- Mooney in particular -- conclude that this link makes Republicans dismissive of policy-relevant science and is thus responsible for the political polarization that surrounds climate change.

Although I very much respect Mooney's careful and fair-minded effort to assemble the evidence in support of RBH, I remain unpersuaded. First, RBH doesn't fit cultural cognition experimental results, which show that the tendency to discount valid scientific evidence when it has culturally non-congenial implications is prominent across the ideological spectrum (or cultural spectra).

Second, as far as I can tell, RBH studies have all featured questionable measures of low-level information processing. The only validated measures of system 1 vs. system 2 dispositions -- i.e., the only ones that have been shown to predict the various forms of cognitive bias identified in decision science -- are Shane Frederick's Cognitive Reflection Test (CRT) and Numeracy (CRT is a subcomponent of the latter). The RBH studies tend to feature highly suspect measures like "need for cognition," which are based on study subjects' own professed characterizations of their tendency to engage in critical thinking.

So why are researchers who are interested in testing RBH not using (or if they are using, not reporting data on) the relationship between CRT & political ideology?

A few months ago, I reported in a blog post some data suggesting that being Republican and conservative has a small positive correlation with CRT. In other words, being a conservative Republican predicts being slightly more disposed to use systematic or system 2 reasoning.

The relationship was too small to be of practical importance -- to be a plausible explanation for political polarization on issues like climate change -- in my view. But the point was that the data suggested the opposite of what one would expect if one credits RBH!

The relationship between CRT and the cultural worldview measures was similarly inconsequential -- very small, off-setting correlations with Hierarchy and Individualism, respectively.

2. New data

Okay, here are some new CRT (Cognitive Reflection Test) data that reinforce my doubt about RBH (the "Republican Brain Hypothesis").

The data come from an on-line survey carried out by the Cultural Cognition Project using a nationally representative sample (recruited by the opinion-research firm Polimetrix) of 900 U.S. adults.

The survey included the 3-item CRT test, various demographic variables, partisan self-identification (on a 7-point scale), self-reported liberal-conservative ideology (on a 5-point scale) and cultural worldview items.

Key findings include:

  • Higher levels of education and greater income both predict higher CRT, as does being white and being male. These are all results one would expect based on previous studies.
  • Also consistent with the interesting new studies, religiosity predicts lower CRT. (I measured religiosity with a composite scale that combined responses to self-reported church attendance, self-reported personal importance of religion, and self-reported frequency of prayer; α = 0.87.)
  • However, liberal-conservative ideology has essentially zero impact on CRT, and being more Republican (on the 7-point partisan self-identification measure; but also in simple binary correlations) predicts higher CRT. Not what one would expect if one were betting on RBH!
  • Being more individualistic than communitarian predicts higher CRT; being more hierarchical than egalitarian predicts essentially nothing. Also not in line with RBH, since these cultural orientations are both modestly correlated with political conservatism.
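For readers unfamiliar with the α = 0.87 reported for the religiosity scale above: Cronbach's alpha is a standard measure of how consistently a set of items hangs together as a single composite scale. Here's a minimal sketch of the computation -- the three items match the description above, but the responses are made up for illustration, not from the survey:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up 1-5 responses to three hypothetical religiosity items:
# church attendance, importance of religion, frequency of prayer.
responses = [
    [5, 5, 4],
    [1, 2, 1],
    [4, 4, 5],
    [2, 1, 2],
    [5, 4, 5],
    [1, 1, 1],
]
alpha = cronbach_alpha(responses)
```

When items move together across respondents (as here), alpha approaches 1; a value like 0.87 indicates the three self-report items are measuring a common disposition well enough to justify combining them.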

Now, those are the simple, univariate correlations between the individual characteristics and CRT (click on the thumbnail, right, for the correlation matrix).

But what is the practical significance of these relationships?


To illustrate that, I ran a series of ordered logistic regression analyses (if you'd like to inspect the outputs, click on the thumbnail to left). The results indicate the likelihood that someone with the indicated characteristic would get either 0, 1, 2, or all 3 answers correct on the CRT test.
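For readers who want to see the mechanics: in an ordered logistic (proportional-odds) model, the predicted probability of each CRT score (0, 1, 2, or 3 correct) comes from differences of logistic CDFs evaluated at estimated cutpoints. The sketch below uses made-up cutpoints and a made-up partisanship coefficient purely for illustration -- it is not the fitted model from these data:

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Probabilities of each ordinal outcome (0..K correct) under a
    proportional-odds model with linear predictor xb and K cutpoints."""
    def logistic(z):
        return 1.0 / (1.0 + math.exp(-z))
    cumulative = [logistic(c - xb) for c in cutpoints] + [1.0]
    return [cumulative[0]] + [cumulative[i] - cumulative[i - 1]
                              for i in range(1, len(cumulative))]

# Hypothetical values, for illustration only (not the fitted estimates):
cutpoints = [0.5, 1.8, 3.0]   # thresholds separating 0|1, 1|2, 2|3 correct
beta_party = 0.15             # small positive "more Republican" coefficient

p_strong_dem = ordered_logit_probs(0.0, cutpoints)
p_strong_rep = ordered_logit_probs(beta_party * 6, cutpoints)  # 7-pt scale span
```

With a small positive coefficient like this, nearly all of the movement shows up in the probability of scoring 0 (the first element of each probability list), which is exactly the pattern described below.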

As illustrated in the Figures above, these analyses reveal that the impact of all of these predictors is concentrated on the likelihood that someone will get 0 as opposed to 1, 2, or 3 answers correct. That is, the major difference between people with the "high-CRT" characteristic and those with the "low-CRT" one is that the former are less likely to end up with a goose egg on the test.

Indeed, that's all that's going on for both religiosity and partisan self-identification; there's no significant (& certainly no meaningful!) difference in the likelihood that those who are high vs. low in religiosity, or who are Republican in self-identification vs. Democrat, will get 1, 2 or 3 answers correct--only whether they will get more than 0.

The likelihood of getting 1 or 2 correct, but not 3, is higher for men vs. women and for more educated vs. less educated individuals. But the differences -- all of them -- look pretty trivial to me. (Not that surprising; few people are disposed to engage in system 2 reasoning on a consistent basis.)

Note, too, that there's essentially no difference between "hierarchical individualists" and "egalitarian communitarians," the members of the cultural communities most divided on environmental issues including climate change. Also none when liberal-conservative ideology and party affiliation are combined.

These are models that look at the predictors of interest in relation to CRT but in isolation from one another. I think it's easy to generate a jumbled, meaningless model by indiscriminately "controlling" for covariates like race, religiosity, and even gender when trying to assess the impact of ideologies and cultural worldviews, or to "control" for ideology when assessing the impact of worldviews or vice versa; people come in packages of these attributes, so if we treat them as "independent variables" in a regression, we aren't modeling people in the real world (more on this topic in some future post).

But just to satisfy those who are curious, I've also included a "kitchen sink" multivariate model of that sort. What it shows is that religion, race, education, and income all predict CRT independently of one another and independently of ideology and cultural worldviews. In such a model, however, neither ideology nor cultural worldviews predicts anything significant for CRT.

3. Bottom line

So to sum up -- when we use CRT as the measure of how well people process information, there's no support for RBH. In fact, the zero-order effect for political-party affiliation is in the wrong direction. But the important point is that the effects are just too small to be of consequence -- too tiny to be at the root of the large schisms between people with differing ideological and cultural worldviews over issues involving policy-relevant science.

What does explain those divisions, I believe, is motivated reasoning, a particular form of which is what we are looking at in studies of cultural cognition.  

The lack of a meaningful correlation between CRT, on the one hand, and cultural worldviews and political ideologies, on the other, is perfectly consistent with this explanation for risk-perception conflicts, because the evidence that supports the explanation seems to show that motivated reasoning is ample across all cultural and ideological groups.

Indeed, motivated reasoning, it has long been known (although recently forgotten, apparently), affects both system 1 (heuristic) and system 2 (systematic) reasoning. Accordingly, far from being a "check" on motivated reasoning, a disposition to use system 2 more readily should actually magnify the impact of this sort of distortion in thinking.

That's indeed exactly what we see: as people become more numerate -- and hence more adept at system 2 reasoning -- they become even more culturally divided.

To be sure, being disposed to use heuristic reasoning -- or simply unable to engage in more technical, systematic modes of thought -- will produce all sorts of really bad problems. But the problem of cultural polarization over policy-relevant science just isn't one of them.

In my opinion, the sooner we get that, the sooner we'll figure out a constructive solution to the real problems of science communication in a diverse, democratic society.


Krugman acknowledges cultural cognition (at least in others!)

The point of the cool Justin Fox post that I noted yesterday has now been seconded by Paul Krugman, who says he already knew this -- that cultural cognition constrains public acceptance of scientific evidence -- based on the failure of his own columns to persuade people who disagree with him:

Justin Fox has an interesting post documenting something I more or less knew, but am glad to see confirmed: People aren’t very receptive to evidence if it doesn’t come from a member of their cultural community. This has been blindingly obvious these past few years.

Consider what the different sides in economic debate have been predicting these past six or seven years. If you got your views from, say, the Wall Street Journal editorial page, you knew – knew – that there was no housing bubble, that America in 2008 wasn’t in recession, that budget deficits would send interest rates sky-high, that the Fed’s expansion of its balance sheet would produce huge inflation, that austerity policies would lead to economic expansion.

That’s quite a record. And yet I’m well aware that many people – including people with real money at stake – consider the WSJ a reliable source and people like, well, me flaky and unbelievable. Much of this is politics, of course, but that’s intertwined with culture: the kind of people who turn to the WSJ, or right-wing investment sites can clearly see that I’m a latte-sipping liberal who probably favors gay rights and doesn’t worship the financially successful (I actually prefer good filter coffee, black, but that’s otherwise accurate), and just not part of their tribe.

I suppose that in my quest to improve policy and understanding I should be trying to fit in better – lose the beard, learn to play golf, start using “impact” as a verb. But I probably couldn’t pull it off even if I tried. And as a result there will always be a large group of people who will never be moved by any evidence I present.


Blind Voter-Candidate Matchmaking Site to Reduce Partisan Bias in Voter Perception?

I'm eager to hear your reactions to Elect Your Match!, a website that would blindly match voters to presidential candidates based on the similarity of their responses to a series of policy statements. The voters and candidates respond to the same series of statements on a scale of slightly/moderately/strongly disagree or agree. The statements are candidate-generated: each candidate submits five statements on separate issues, and the candidates respond to their own and their opponents' statements on the same scale as voters, indicating whether they slightly/moderately/strongly disagree or agree with each one. The statements would not mention candidate or party identity. In choosing these statements, candidates define the primary policy issues at stake in their campaign.
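The core matching step described above is simple to sketch. Here's a minimal illustration -- with a hypothetical numeric encoding of the slightly/moderately/strongly agree-disagree scale and entirely made-up candidate and voter responses -- that picks the candidate whose answers sit closest to the voter's:

```python
# Hypothetical encoding of the response scale:
# -3/-2/-1 = strongly/moderately/slightly disagree,
# +1/+2/+3 = slightly/moderately/strongly agree.

def best_match(voter, candidates):
    """Return the name of the candidate whose statement responses are
    closest to the voter's (smallest sum of squared differences)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidates, key=lambda name: distance(voter, candidates[name]))

# Made-up responses to ten statements (five submitted by each of two
# candidates); the match is computed blind to party and identity.
candidates = {
    "Candidate A": [3, -2, 1, 2, -3, 1, 3, -1, 2, -2],
    "Candidate B": [-3, 2, -1, -2, 3, -1, -3, 1, -2, 2],
}
voter = [2, -1, 1, 3, -2, 1, 2, -1, 1, -1]
match = best_match(voter, candidates)  # "Candidate A" for these made-up data
```

Real matching sites may weight issues or use different distance measures; the point here is only that the single "comprehensive best match" the post describes reduces to a straightforward nearest-neighbor comparison over identity-blind response vectors.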

There are sites making very good efforts along these lines (mentioned in the article), providing thorough information and showing visitors how candidates relate to their stances issue-by-issue, as well as generating a match based on any range of issues the visitor selects. Elect Your Match! would simplify these models, routing visitors through one short standardized questionnaire that sets forth the primary election issues, defined by the candidates themselves, and recommending only one comprehensive best-matching candidate. Simplifying the site's primary interface to give only one comprehensive match based on a preset agenda might make it easier and more appealing for those less engaged in politics, who may not have a sense of what issues are most important to them or to the election. In order for the site to provide a single candidate match based on a preset agenda, it is important that the candidates themselves set the agenda defining the issues and provide their own responses, as opposed to a third party determining the issues and rating the candidates' positions.

In addition to informing voters, a site like this could work to reduce the biasing effect of partisan identity on voters' perceptions of candidates. Studies suggest, for example, that voters overestimate the extent to which the positions of candidates sharing their partisan identity match their own policy preferences. In other words, voters erroneously "see their favorite candidates' stands as closer to their own and opposing candidates' stands as more dissimilar than they actually were." Larry M. Bartels, The Irrational Electorate, The Wilson Quarterly (Autumn 2008). Other studies suggest that voters more readily learn information about candidates that is congenial to their partisan identity, and discount facts that are not. Jennifer Jerit & Jason Barabas, Partisan Perceptual Bias and the Information Environment, Presented at the 2011 annual meeting of the Southern Political Science Association.

I’m curious about how this advances the goals of the CCP. On one hand, it informs voters as to the candidate that really best matches their own outlook, and aims to minimize partisan identity-based bias in evaluating candidates. On the other hand, one seeking to advance the goals of CCP might desire a means for promoting more interpersonal deliberation, which could perhaps do more to update viewpoints and build consensus around polarizing issues in the election (see also Bruce Ackerman & James Fishkin, Deliberation Day (2004)). As is suggested in the article, the site might have a deliberative component that allows interested visitors to browse more deeply than the primary questionnaire, entering issue-specific segments of the site that would prompt them to interact with or respond to statements presenting arguments on either side of the issue. Perhaps these issue-specific segments could host an ongoing conversation posting visitors’ comments and responses to arguments on either side of the issue.


Cultural cognition & expert assessments of technological innovation

There's a great blog post by Justin Fox over at the Harvard Business Review's HBR Blog.

Fox argues that cultural cognition dynamics are likely to influence not only public perceptions of risk but also market-related assessments and decisionmaking within groups one might expect to be more focused on money and data than on meaning.

As illustration, he offers an amusing (for the reader) account of the reception afforded a recent column of his on expert assessments of technological innovation in the internet era.

I wrote a post here on whether the Internet era has been a time of world-changing innovation or a relative disappointment. It was inspired by comments from author Neal Stephenson, who espoused the latter view in a Q&A at MIT. His words reminded me of similar arguments by economist Tyler Cowen (if I had enough brain cells to remember that Internet megainvestor Peter Thiel had been saying similar things, I would have included him, too). So I wrote a piece juxtaposing the Stephenson/Cowen view with the work of MIT's Erik Brynjolfsson, who has been amassing evidence that a digitization-fueled economic revolution is in fact beginning to happen.

If I had to place a bet in this intellectual race, it would be on Brynjolfsson. I've seen the Internet utterly transform my industry (the media), and I imagine there's lots more transforming to come. But I don't have any special knowledge on the topic, and I do think the burden of proof lies with those who argue that economic metamorphosis is upon us. So I wrote the piece in a tone that I thought was neutral, laced with a few sprinklings of show-me skepticism.

When the comments began to roll in, though, a good number of them took me to task for being a brain-dead, technology-hating Luddite. And why not? There's a long history of journalists at legacy media organizations writing boneheaded things about the Internets being an abomination and/or flash in the pan (one recent example being this screed by Harper's publisher John McArthur). Something about my word choices and my job title led some readers to lump me in with the forces of regression, and react accordingly.

When I saw that my post had been republished elsewhere, I cringed. Surely the techno-utopians there would tear the piece to nanoshreds. But they didn't. Most of the commenters instead jumped straight into an outrage-free discussion of innovation past and present.

That's probably because, if there is one person in the world whom readers consider a "knowledgeable member of their cultural community," it is Neal Stephenson. This is the man who described virtual reality before it was even virtual, after all. I'm guessing that readers were conditioned by the sight of Neal Stephenson's name at the beginning of my post to consider his arguments with an open mind. Here, where we don't require readers to have read the entire Baroque Cycle before they are allowed to comment, Stephenson was just some guy saying things they disagreed with.

Fox's assessment of the tendency of people to credit arguments of experts with whom they have a cultural affinity is consistent with our HPV study. But what's really cool is that the reaction of the readers shows how a group that might be culturally predisposed to reject a particular message will actually give it open-minded consideration when they see that it originates (or at least has received respectful and serious attention) from someone with whom they identify.

Anyway, I'm psyched to learn that Fox sees our methods and framework as relevant to the market-related phenomena he writes on -- not only because it's cool to think that cultural cognition can shed light on those things but also because I really loved his Myth of the Rational Market. It was tied (with The Clockwork Universe) for best book I read all of last yr!


A "frame" likely to generate consensus that climate change is not happening (and/or that geoengineering is safe)

Interesting piece, my guess is that this idea could actually end polarization over climate change -- by furnishing egalitarians and hierarchs alike strong emotional motivation to deny there's any danger after all! 

Also, although the author maintains that engineering humans is "safer" than geoengineering, my guess is that people would see geoengineering itself as less risky when they consider it in relation to "human engineering" than when they consider it on its own  -- precisely b/c human engineering is pretty much the creepiest thing that anyone can imagine.

Which isn't to say the author's argument is wrong on the merits!



More religion & CRT--where's ideology & CRT?!

Science this week published an article that finds low CRT predicts religiosity & that backs this finding up w/ experimental data:

It's a really excellent study. The experiments were ingenious. It should be pointed out, though, that this finding corroborates another excellent one, Shenhav, A., Rand, D.G. & Greene, J.D. Divine intuition: Cognitive style influences belief in God. Journal of Experimental Psychology (2011), advance online doi:10.1037/a0025391.

I'm waiting, patiently, for someone to publish some data on correlation between CRT & liberal-conservative ideology. As I've noted before, data that CCP has collected suggests that there is virtually none -- or that there are weak offsetting correlations between different cultural dimensions of conservatism (hierarchy & individualism).

The reason I'm waiting is that such data would contribute a lot to the increasing interest in the relationship between ideology & quality/style of cognitive processing (the Republican Brain hypothesis or "RBH," let's call it). Shane Frederick's CRT scale & Numeracy (which incorporates CRT) are the only validated indicators of the disposition to use systematic or system 2 reasoning as opposed to heuristic or system 1. So it would, of course, be super useful to see what the CRT verdict is on whether conservatives & liberals differ in processing.

Being patient while waiting is becoming more difficult. I've got to believe that such evidence is already in hand; given the interest in the RB hypothesis, surely someone (likely multiple people) has thought to try to test it w/ the CRT measure. It would be sad to discover that the reason the data haven't been reported is that they don't fit the hypothesis -- that is, don't show that liberals are more "systematic" or system-2 disposed in their thinking.

Actually, I suppose I have such data in hand myself -- but at least I've blogged on them!

Oh-- if I'm wrong to think that this is a matter on which no one has yet presented data, please tell me and I'll happily acknowledge my error & share the relevant references w/ other curious people. 


Deliberations & identity formation

CCP member John Gastil, along w/ co-authors, has a new article out presenting evidence that highly participatory forms of democratic deliberation promote a distinctive shared identity that transcends more particular and potentially divisive ones, such as those founded on cultural affiliations.

The analysis was largely qualitative: a case study based on impressionistic analyses of transcripts from citizen deliberations associated with the Australian Citizens' Parliament. I know JG has more data on the Australian Citizens' Parliament in hand, including some that admit of more systematic analysis. That's a good way to do research, since the convergence of results from more interpretive forms of empirical analysis and more quantitative ones -- if they do indeed converge! -- makes the conclusions of both more worthy of being credited.

I know from experience that collective deliberations on baseball are not sufficient to enable Gastil to transcend his partisan cultural identity as a Tigers fan.

Felicetti, A., Gastil, J., Hartz-Karp, J. & Carson, L. Collective Identity and Voice at the Australian Citizens' Parliament. Journal of Public Deliberation 8, article 5 (2012):

This paper examines the role of collective identity and collective voice in political life. We argue that persons have an underlying predisposition to use collective dimensions, such as common identities and a public voice, in thinking and expressing themselves politically. This collective orientation, however, can be either fostered or weakened by citizens’ political experiences. Although the collective level is an important dimension in contemporary politics, conventional democratic practices do not foster it. Deliberative democracy is suggested as an environment that might allow more ground for citizens to express themselves not only in individual but also in collective terms. We examine this theoretical perspective through a case study of the Australian Citizens’ Parliament, in which transcripts are analyzed to determine the extent to which collective identities and common voice surfaced in actual discourse. We analyze the dynamics involved in the advent of collective dimensions in the deliberative process and highlight the factors—deliberation, nature of the discussion, and exceptional opportunity—that potentially facilitated the rise of group identities and common voice. In spite of the strong individualistic character of the Australian cultural identity, we nonetheless found evidence of both collective identity and voice at the Citizens’ Parliament, expressed in terms of national, state, and community levels. In the conclusion, we discuss the implications of those findings for future research and practice of public deliberation.



Ethical guidelines for science communication informed by cultural cognition research

People often express concern to me about the normative implications of research that identifies how cultural cognition influences perception of risk and related facts and how those influences can be anticipated in structuring science communication.

I am glad they are concerned, because I am, too. If I thought that people who consume our research did not reflect on such concerns, I'd be even more worried about what I do. Knowing that others see normative issues here also means that I can share with them my own responses & see if they think I've got things right &/or can do better.

Some "Guidelines" follow. But they are not really "guidelines" in the sense of a codified set of rules or standards (I'm skeptical, in fact, that anything morally complicated can be handled with such things). Rather, they are more like prototypes that when considered together reflect what for me seems the right moral orientation to our work.  Would be happy to receive & post additional "guidelines" of this nature (along w/ any commentary their authors wish to append) & also grateful to receive feedback from anyone who takes issue with any of these or with the attitude/orientation they are meant to convey.

1. No lying. No need for elaboration here, I trust.

2. No manipulation. Likely also self-explanatory, but an example might be useful. Consider how Merck tried to shape public opinion toward Gardasil, its HPV vaccine: by using secret campaign contributions to "persuade" a southern, religious, conservative politician -- Texas Governor Rick Perry -- to issue an executive order mandating vaccination of middle school girls.

It was fine for Merck to try to assure that parents would learn about the benefits of the vaccine. It wasn't even wrong for it to enlist communicators whose cultural identities would make them credible sources of sound information.

But it should have been open that it was trying to engage people this way.

Obviously, the whole immoral plan blew up in Merck's face--actually generating distrust of Gardasil among a diverse range of cultural groups. Nice work, gun-for-hire, private-industry counterparts of those who study the science of science communication in order to promote the common good!

But the strategy would have been wrong even if Merck had gotten away with it, because it was managing the information environment in a way that the message recipients would themselves have resented. It was using people's reasoning, not enabling people's reasoning.

3. Use communication strategies and procedures only to promote engagement with information--not to  induce conclusions. Some people say that cultural-cognition informed communication strategies are a form of "marketing." Fine, I say. So long as what's being marketed is not a preferred position on an issue of science & policy but rather a decisional state or climate in which people who want to make decisions based on the best available scientific information are most likely to take note of and give open-minded consideration to it. 

The HPV-vaccine disaster again supplies an example. Parents of all cultural worldviews want to have the best available information on how to promote the health of their children. It would be perfectly fine, in my view, for a communicator to use cultural cognition research to identify how to promote open-minded engagement with information on the HPV vaccine.  

So if public health officials self-consciously decided to rely on a culturally diverse array of honestly motivated science communicators in order to forestall creation of any perception that positions on the vaccine were aligned asymmetrically with cultural outlooks--that would have been okay.

Also would have been okay to have resisted Merck's stupid, market-driven decision to seek fast-track approval of a girls-only vaccine and to promote inclusion of it on the schedule of mandatory school vaccinations--a marketing strategy that made cultural polarization highly likely.  Parents who love their children wouldn't want to be put into a communication environment in which their honest assessment of the health needs of their daughters or sons would be distorted by culturally antagonistic meanings unrelated to health.

4. Use strategies and procedures to promote engagement only when you have good reason to believe that engagement fits the aims and interests of information recipients. Parents trying to decide what is in the best health interests of their children want to engage the information from the mindset that best promotes an accurate assessment of the evidence. But sometimes people want to engage information in a way that reliably connects them to stances that fit their cultural style. Leave them alone; so long as they aren't hurting anyone else, they are entitled to manage their personal information environment in a way that promotes contact with their own conception of the good life.

5. Don't help anyone who has ends contrary to these guidelines. Like, say, a pharmaceutical company that in its drive to make a buck is willing to manipulate people by covertly inducing individuals they trust to vouch for the effectiveness and safety of some treatment.

6.  Do help anyone -- regardless of their cultural worldview -- who is genuinely seeking to promote reflective engagement with information when such engagement fits the interests and aims of recipients. Like, say, a pharmaceutical company that wants to make a buck by openly and without manipulation satisfying the interest that people have in being able to consider scientifically valid information about the effectiveness and risks of a vaccine. 


MPSA climate change panel: report & slides

On Friday I was on a Midwest Political Science Association panel on public opinion & climate change. I presented Tragedy of the Risk Perceptions Commons (slides here). 

Michael Tesler presented interesting data that he argued show that elite rhetoric, and not motivated cognition, accounts for political divisions on climate change. I have a hard time conjuring the psychological model that would see the two operating independently of each other; to me they are not discrete mechanisms, but steps in a process (elite cues help create/transmit the meanings that then motivate cognition for ordinary individuals). I also wasn't sure exactly how the data supported the inference, but I'm eager to see the write-up, at which point I'll either get it or explain why I don't think he is right!

Alexandra Bass presented data on media content to show that values influence climate change perceptions. The presentation was great. But I have to say I don't really get media-content studies in general; they seem to draw inferences the validity of which depends on the ratio of frequency of content to frequency of events in the world -- something for which the analyses never present any data. I didn't get a chance, though, to read Bass's paper, so I will, & see if that helps me.

Mathew Nowlin, a member of Hank Jenkins-Smith's amazing risk-perception group at the Center for Applied Social Research at the University of Oklahoma, presented a cool paper on education, climate change knowledge, and political polarization.

Finally, Rebecca Bromley-Trujillo backed data out of the American National Election Study to support the hypothesis that "core political values"-- "such as equality"-- "are an important predictor of climate change attitudes, beyond other standard determinants of political attitudes, like partisanship or ideology." I found the claim convincing, but I was admittedly predisposed to believe it.


Where is "what does Trayvon Martin case mean, part 3"?

It's coming soon. But not before I get done learning from my class what they think. I also learned a lot from Randy Kennedy's lecture at Leslie College last week. I hope he writes up his lecture so that others can think about his reflections as well (I'm sure I'll say more about Kennedy in "part 3").


Cultural cognition--plus lots of other relevant things-- & nuclear energy: experts *get it*

Came across a great blog on public perceptions of nuclear risk at the Neutron Economy & then found a thoughtful reaction to it at Areva North America: Next Energy Blog.

In addition to being well-crafted and informative, the posts were immensely heartening.

Written by and for people who do work relating to nuclear energy, both displayed keen awareness of the science of public risk perceptions and science communication. (Cultural cognition was  featured, but was--very appropriately--not the only dynamic that was addressed.)  

What's more, rather than the frustrated hand-wringing and finger-pointing that experts (and many others) often (understandably but not helpfully) display when confronted with public controversy over risk, both evinced an uncomplaining, matter-of-fact dedication to making sense of how the public makes sense of the world.

From Neutron Economy:

To summarize - providing education and facts are good, useful even - but on their own insufficient without presenting those facts in a context which engages with the deeply-held values of the audience. To produce actual engagement - and even inducement to support - requires a producing a context of facts compatible with the values of those one is trying to reach. In other words, for the case of nuclear, it means going beyond education and comparative evaluation of risk (again, to emphasize, both of which are valid in and of themselves) and placing these within the framework of how this speaks to the values of the audience....

[I]t is the job of the nuclear professionals (as members of the "technical community") to do our best to provide an accurate technical framework for these evaluations of risk by the public, such that they can make the most sound decisions on risk. Meanwhile it is the job of nuclear communicators and advocates to speak to values, as to produce more fair evaluations of both the benefits and risks of nuclear, particularly in the context of available energy choices.

From Areva North America: Next Energy Blog

So, “pure” facts don’t tend to change our minds very often. And surprisingly, presenting facts alone when encouraging a new perspective can often result in the opposite effect on people who disagree....

Which naturally leads to our next question, “If cultural influence is so strong on perceiving facts, is trying to educate people of the beneficial facts about nuclear energy hopeless?”

We agree with Steve’s answer, “Not at all.”

But the key is to frame our factual and technically accurate answers within the cultural framework understanding of those we are trying to engage.

Reading these words made me believe that it is not at all unrealistic to anticipate that the practice of science will in the not too distant future be happily and productively integrated with the science of science communication.


Is evoking emotion a means of communicating "factual information" on risk and the like? The Wittlin test

I would say "yes, so long as..." and then launch into a long, abstract account of emotion as a form of cognitive perception that is uniquely suited to apprehending the significance of information for goods a person values (see Damasio, Descartes' Error; Nussbaum, Upheavals of Thought) but that is also vulnerable to bias and hence manipulation, blah blah...

Maggie Wittlin, however, has sent me an email that convinces me there is a much simpler answer: unconditionally "yes" or unconditionally "no" depending on what the emotional appeal is about and what the cultural worldview is of the person answering the question! 

Two recent cases (one argued today) seem to be asking the question: are images that cause strong emotional reactions toward the subject matter informative?  Or are they mere advocacy?  I think you'll get two different answers based on (1) whether you ask an egalitarian or a hierarch (serious individualists might be consistent) and (2) which case you ask about:

On the right, we have the Texas sonogram case, where CJ Edith Jones writes, "Though there may be questions at the margins, surely a photograph and description of its features constitute the purest conceivable expression of 'factual information.' If the sonogram changes a woman’s mind about whether to have an abortion -- a possibility which Gonzales says may be the effect of permissible conveyance of knowledge, Gonzales, 550 U.S. at 160, 127 S. Ct. at 1634 -- that is a function of the combination of her new knowledge and her own 'ideology' ('values' is a better term), not of any 'ideology' inherent in the information she has learned about the fetus."

On the left, we have the challenge to the FDA cigarette warning label regulations, where "Stern also argued today that smokers do not fully understand tobacco’s harmful effect on health. The images, he argued, communicate the risk of smoking more effectively than do text warnings."  On the other hand, "Noel Francisco, representing R.J. Reynolds Tobacco Co. in the dispute, said the labels cross the line from fact-based to issue advocacy. The government is triggering a negative emotional reaction."




What does the Trayvon Martin case mean? What *should* it mean? part 2

In part 1, I argued that what the Trayvon Martin case means won’t turn on what the facts are found to be.

On the contrary, what we understand the facts to be will turn on what the case means to us as members of one or another cultural group.

Public reactions to the case display the characteristic signature of cultural cognition--the tendency of people to fit the perception of legally consequential facts to their group commitments.

The influence of cultural cognition explains why people with different outlooks and identities are forming such strong and divergent understandings of what happened despite their having almost no clear evidence to go on.

And it predicts (on the basis of experimental studies) that they are likely to continue to be divided just as bitterly no matter how much evidence comes to light—even if it turns out, say, that an unobserved neighbor made a digital recording of the attack with his or her cell phone (or high-resolution camera).

But as I said in my last post, this conclusion doesn’t mean there’s no point talking about the case. We should be addressing the meanings that divide us on an issue like this, because they divide us on lots of things—not just the use of violence by individuals of one race on those of another, or even the use of it by the police against private citizens, but also matters as diverse as whether climate change is occurring or whether schools should vaccinate pre-adolescent girls against HPV.

This sort of division, in my view, is a barrier to our coming to democratic consensus on a wide variety of policies that promote our common welfare in ways perfectly compatible with our diverse cultural values.

The question, in my view, is how we might use the Trayvon Martin case as an occasion for a meaningful discussion about meanings in our political life.

In this post, I’ll identify how not to do it.

2.  Replaying history: “shall issue,” “stand your ground,” and the culture of honor 

It turns out that we have been “discussing” cultural meanings since pretty much the start of this affair. But we’ve been doing it in the idiom of culturally motivated empirical assertions about the impact of law.

Two laws, in particular—one relating to guns and the other to the use of self-defense.

Florida is one of the 38 states with so-called “shall issue” laws, which essentially mandate that any adult citizen who has not been convicted of a felony or diagnosed with a mental illness be issued a permit to carry a concealed firearm in public.

It is also one of a dozen or so states that have recently enacted “stand your ground” laws, which provide that a person “who is attacked in any [public] place where he has a right to be has no duty to retreat” before resorting to deadly force to defend him- or herself from a potentially lethal assault. (Media reports miscalculate the number—apparently counting laws that existed before the recent spate of “stand your ground” enactments and also mixing in ones that relate to the use of deadly force in the home.)

George Zimmerman, the shooter in this case, was carrying a concealed handgun pursuant to a “shall issue” license. He also asserts that his fatal shooting of Martin—whom Zimmerman was tailing because he looked “suspicious”—was an act of self-defense.

Unsurprisingly, there has been a barrage of commentaries attributing violent assaults to “shall issue” and “stand your ground” laws, and a counter-barrage crediting these laws with reducing the incidence of violent crime.

These empirical arguments are specious. Indeed, they are part and parcel of a longstanding cultural division in our political life. Zealots who crave (or indeed profit from) such debate are exploiting the Trayvon Martin case to deepen that division—crowding out discussion of things that really matter.

a. The evidence. There is no persuasive empirical evidence that “shall issue” laws have any impact on the rate of violent crime.

Don’t take my word for it: that's the conclusion the National Academy of Sciences reached in an “expert consensus” report, which examined numerous empirical studies on the matter and concluded that it was simply impossible to say one way or another whether such laws increase crime or instead decrease it as a result of their effect in deterring violent predation.

The evidence on how “stand your ground” laws have affected violent-crime rates is no more conclusive. Indeed, it’s hard to conceive of how it could be.

These laws have all been enacted in the last decade. Yet the rule that a person can “stand his ground”—that he has no duty to retreat before using deadly force in self-defense—has been the majority rule among U.S. states for over a century. It was already the rule, in fact, in many of the states that have recently adopted “stand your ground” laws (e.g., Georgia, Indiana, Kentucky, Montana, Oklahoma, Utah, Washington, and West Virginia).

Before it enacted its “stand your ground” law, Florida apparently did make the lawful use of deadly force in self-defense conditional on a duty to avail oneself of any safe route of retreat, at least when an individual was attacked outside his or her home. But violent crime has decreased in that state over the last decade.

Indeed, violent crime has decreased throughout the U.S. during that time. Identifying all the potential causes for this trend, and disentangling them from one another in order to determine what impact (if any) enacting or not enacting a “stand your ground” law has had on the velocity of crime abatement in any particular state, would involve overcoming all the statistical difficulties that led the National Academy of Sciences to toss its hands up in the air when it tried to measure the impact of “shall issue” laws on violent crime.

Any commentator who asserts with confidence that either “stand your ground” laws or “shall issue” laws increase or decrease crime simply doesn’t know what he or she is talking about.

b. Culture, cognition, and political opportunism. What there is persuasive empirical evidence of, however, is the biasing impact of cultural cognition on individuals’ assessments of the impact of laws like these.

Individuals with egalitarian, communitarian values—for whom the gun is a noxious symbol of patriarchy, racism, indifference to others, and hostility to reason—predictably construe the evidence as showing that lax gun control laws increase deadly violence.

In contrast, those with hierarchical and individualistic worldviews—for whom the gun is associated with positive values such as courage, self-reliance, and honor—predictably fit their perceptions of the evidence to the culturally congenial conclusion that shall issue laws decrease homicide rates.

As a result of these same dynamics, moreover, they both tend to misperceive that the weight of expert evidence is on their side.

The same cultural divisions mark reactions to the duty to retreat in self-defense laws. Indeed, the advent of the “stand your ground” movement is intimately connected to cultural conflict over guns.

As indicated, the motivation for these statutes wasn’t to change the law. On the contrary, it was to provoke culturally grounded conflict.

The biggest threat to the gun industry is not that guns will be regulated out of existence. It is that future generations of Americans, as they become progressively more removed from the cultural norms that motivate people to buy guns, will simply lose interest in owning them.

Orchestrated by the NRA, the campaign to enact “stand your ground” laws is a booster shot for those norms. By design, “stand your ground” laws radiate individualistic and hierarchical values. The enactment of them—particularly over the predictable, and predictably strident, opposition of groups associated with egalitarian and communitarian values—broadcasts the vitality of a pro-gun ethos, a signal that can be expected to inculcate the same in those who receive that signal.

c. We’ve seen this before; enough already! The cultural battle over “stand your ground” laws is actually an historical replay.

Just over a century ago, courts in the South and West adopted the “no retreat” rule. They called it the “true man” doctrine, a label that recognized that a man whose character is “true” (that is, in order, or straight, like a “true beam”) appropriately values his own liberty more than the life of someone who wrongfully threatens it.

Northeastern jurists and commentators denounced this departure from the traditional “retreat to the wall” position as an expression of the “feeling which is responsible for the duel, the war, for lynching.” The echo of the Civil War reverberated through this legal debate for some three decades.

Then, in one of the most brilliant demonstrations of statesmanship in the history of American jurisprudence, Justice Holmes defused this controversy by draining it of its expressive significance.

It’s futile, he reasoned in the 1921 decision of Brown v. United States, for the law to demand that someone who faces a deadly threat “pause to consider whether a reasonable man might not think it possible to fly with safety.” “Detached reflection cannot be demanded in the presence of an uplifted knife."

Just like that, the “true man doctrine” became the “scared shitless man defense.” The South and the West got the rule they wanted, but only after it had been gutted of the meaning that galled the Northeast.

Everyone lost interest, and the issue went away. Gun control essentially took its place as the front of the battle over the status of honor norms in U.S. law and culture.

But then 85 years later the NRA came to the brilliant realization that it could subsidize the culture war over guns by reviving the “true man” doctrine in the form of the new, Clint-Eastwoodesque  “stand your ground” laws. 

Not surprisingly, the most receptive states were located in regions of the country that already had the “true man” doctrine.

But no matter: the point wasn’t to change the law; it was to agitate and inflame.

The NRA could count on agitation, of course, only if the egalitarian communitarian opponents of the honor culture—the descendants of the “true man” critics—took the bait. Which of course, they have done. They'd be out of work too without this sort of conflict.

Hey—I didn’t know him. But I think I can safely say, “You are no Justice Holmes,” to the legions of commentators now seizing on the Trayvon Martin case as an occasion to raise the volume in equally tendentious and tedious “shall issue” and “stand your ground” debates.

I’d also like to tell them to just back off.  Not only are you needlessly sowing division; you are destroying the prospects for a meaningful conversation of the values that—despite our cultural differences—in fact unite us.


Dan M. Kahan, The Secret Ambition of Deterrence, 113 Harv. L. Rev. 413 (1999).

Dan M Kahan & Donald Braman, More Statistics, Less Persuasion: A Cultural Theory of Gun-Risk Perceptions, 151 U. Pa. L. Rev. 1291 (2003). 

Dan Kahan, Donald Braman, Geoffrey Cohen, John Gastil & Paul Slovic, Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition, 34 Law Human Behav 501 (2010).

Dan M. Kahan, Donald Braman, John Gastil, Paul Slovic & C. K. Mertz, Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception, 4 J. Empirical Legal Studies 465 (2007).

Dan M. Kahan, Hank Jenkins-Smith & Donald Braman, Cultural Cognition of Scientific Consensus, 14 J. Risk Res. 147 (2011).

Dan M. Kahan, The Cognitively Illiberal State, 60 Stan. L. Rev. 115-54 (2007).


Another cool book: van Rijswoud, Public faces of science

Found another really great book on-line:

Erwin van Rijswoud, Public faces of science: Experts and identity work in the boundary zone of science, policy and public debate (Radboud University Nijmegen, 2012).

It's actually van Rijswoud's doctoral dissertation.

But anyway, the work examines Dutch scientists' impressions of how their work and expertise were received in various public policy debates, including ones on H1N1 vaccination, flood control, and HPV vaccination of adolescent girls.

The analyses are based on "biographical narrative." At the beginning of the work, he explains this method, which involves analytically motivated synthesis of interviews with the scientists, supplemented with other materials, and presented in a form that uses story-telling elements not at all typical of social science work (unlike typical ethnography, the voice is much more internal, almost "first person"). 

I was really interested in vR's discussion of HPV, an issue the CCP group has also studied. I hadn't realized that the issue was controversial in the Netherlands, too (likely I should be embarrassed to say that). I did know that England didn't have any trouble implementing a national immunization program, so there are definitely some great lessons to be learned through comparative study.

Also hadn't realized that there was political dispute over expert flood control advice in the Netherlands. Actually, efficient flood management in Holland & other regions of the country is often offered as an example of what the successful integration of science into policymaking is supposed to look like!

Thanks to van Rijswoud & Radboud University for making his work widely available & at no charge!


What does the Trayvon Martin case mean? What *should* it mean? part 1

If one were to judge from the media coverage—the dueling depictions of the characters of the shooter and his victim; the minute dissections of fragmentary witness statements; the “expert” voice-identification of screams picked up in the background of a 911 call; the high-resolution scrutiny of low-resolution video footage of the shooter in police custody that reveals the existence/absence of telltale wounds—one would think that the significance of the Trayvon Martin case turns (or ultimately will turn) decisively on the facts.

In actuality, the opposite is true: the significance we attach to the case will determine our perception of the facts; and because what it signifies turns on cultural meanings that divide our society, the members of different groups will form highly opposed understandings of what happened that terrible night.

Does that mean it’s pointless to be discussing the case?

On the contrary. In my view, the public agitation the case has provoked is evidence of how important it is for us to have a public conversation about the diversity of our cultural outlooks and their relation to law, and that this case is an ideal occasion for addressing that issue.

But if we insist that the discussion take the form of competing, culturally partial (and even culturally partisan) renditions of the facts, we are highly unlikely to engage the real issues in a universally meaningful way. And in that circumstance, we can be sure that the sources of agitation will persist.

I have more to say than it makes sense to put in one post.  So regard this as installment 1 of 3.

1. Meanings are cognitively prior to fact

The Trayvon Martin case, polls unsurprisingly reveal, divides people along cultural lines.

In this sense, it is very much like a host of other high-profile types of cases: public altercations leading to a mixed-race killing (think Bernard Goetz and Howard Beach); the slaying (or mutilation; think Lorena Bobbitt) of sleeping men by female partners who allege chronic abuse; the prosecutions (William Kennedy Smith)—or not (Duke lacrosse)—of men alleged to have disregarded women's verbal resistance to sexual intercourse; forceful arrests of political protestors (Occupy Wall Street; Operation Rescue) pepper sprayed by police—or of fleeing drivers whose bodies are broken by the impact of their crashing cars (Scott v. Harris) or the fusillade of baton blows of their pursuers (Rodney King).

CCP has conducted experimental studies of cases like these. What we have found, in all of these contexts, is that people unconsciously form perceptions of fact that reflect their stance on the cultural meanings the cases convey.

Those committed to norms of honor and self-reliance, on the one hand, and those who value equality and collective concern, on the other; those who believe women warrant esteem for mastery of traditionally female domestic roles and those who believe women as well as men should be conferred status for success in civil society; those who place a premium on respect for authority and those who apprehend the abuse of it as a paramount evil—all see different things in these types of cases, even when they are forming their perceptions on the basis of the same evidence.

Moreover, members of all these groups know that what one sees (or claims to see; each group always suspects the other of disingenuousness) depends on who one is culturally speaking.

As a result, in controversies over these sorts of cases, those on both sides come to view competing factual claims as markers of opposing allegiances.  The ultimate resolution of these facts in courts of law, in turn, becomes evidence of who counts and who doesn’t in our society.

These are identity-threatening conditions. It is the extreme anxiety that they provoke that explains how despite knowing next to nothing about what actually happened—because we have nothing more to go on than factual snippets embroidered with righteous denunciation in the media, or antiseptic renditions of the “facts of the case” in appellate reporters—we nevertheless become filled with passionate certitude about the events. The discovery that others disagree with us fills us with incredulity and rage.

And most extraordinary of all, this same environment of symbolic status competition explains why such disagreement persists in the face of the most compelling forms of evidence of all. Even when we literally see the events with our own eyes—as we do when they are recorded on video, e.g.—cultural cognition assures that we will disagree about what we are seeing.

We will disagree, in such instances, with those who hold values different from ours when we watch what we understand to be the same event.

Moreover, we will disagree with those who share our values if, as a result of a hidden experimental manipulation, we start with different impressions of the sort of event (abortion-clinic protest, or anti-war protest) we are watching.

Barely detectable above the cacophony in the Trayvon Martin case are a few lonely voices cautioning us not to jump to conclusions. We don’t really know enough about what happened, they rightly point out, to form such strong opinions.

But the truth is, we’ll never know what happened, because we—the members of our culturally pluralistic society—have radically different understandings of what a case like this means.

The questions are whether it makes sense to talk about that, and if so, what should we be saying?


Dan M. Kahan & Donald Braman, The Self-defensive Cognition of Self-defense, 45 Am Crim Law Rev 1 (2008).

Dan M. Kahan, The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law, 125 Harv. L. Rev. 1 (2011).

Dan M. Kahan, Culture, Cognition, and Consent: Who Perceives What, and Why, in 'Acquaintance Rape' Cases, 158 University of Pennsylvania Law Review 729 (2010).


Dan M. Kahan, David A. Hoffman, Donald Braman, Danieli Evans & Jeffrey J. Rachlinski, They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction, 64 Stan. L. Rev. (forthcoming 2012).

Mark Kelman, Reasonable Evidence of Reasonableness, 17 Critical Inquiry 798-817 (1991). 



Cultural theory of risk: it's not just about clean air & water

It's remarkable and heartening to see how widespread the influence of the cultural theory of risk has become. 

Here are three recent examples of articles that assess the importance of cultural predispositions for risk and science communication, none of which is about traditional environmental concerns:

  1. Griffiths, M. & Brooks, D.J. Informing Security Through Cultural Cognition: The Influence of Cultural Bias on Operational Security. Journal of Applied Security Research 7, 218-238 (2012).

    Cultural bias will influence risk perceptions and may breed “security complacency,” resulting in the decay of risk mitigation efficacy. Cultural Cognition theory provides a methodology to define how people perceive risks in a grid/group typology. In this study, the cultural perceptions of Healthcare professionals to access control measures were investigated. Collected data were analyzed for significant differences and presented on spatial maps. The results demonstrated correlation between cultural worldviews and perceptions of security risks, indicating that respondents had selected their risk perceptions according to their cultural adherence. Such understanding leads to improved risk management and reduced decay of mitigation strategies.

  2. Daniel J. Decker, W.F.S., Darrick T. N. Evensen, Richard C. Stedman, Katherine A. McComas,Margaret  A. Wild, Kevin T. Castle, and Kirsten M. Leong. Public perceptions of wildlife-associated disease: risk  communication matters. Human Wildlife Interactions 6, 112–122 (2012).

    Wildlife professionals working at the interface where conflicts arise between people and wild animals have an exceptional responsibility in the long-term interest of sustaining society’s support for wildlife and its conservation by resolving human–wildlife conflicts so that people continue to view wildlife as a valued resource. The challenge of understanding and responding to people’s concerns about wildlife is particularly acute in situations involving wildlife-associated disease and may be addressed through One Health communication. Two important questions arise in this work: (1) how will people react to the message that human health and wildlife health are linked?; and (2) will wildlife-associated disease foster negative attitudes about wildlife as reservoirs, vectors, or carriers of disease harmful to humans? The answers to these questions will depend in part on whether wildlife professionals successfully manage wildlife disease and communicate the associated risks in a way that promotes societal advocacy for healthy wildlife rather than calls for eliminating wildlife because they are viewed as disease-carrying pests. This work requires great care in both formal and informal communication. We focus on risk perception, and we briefly discuss guidance available for risk communication, including formation of key messages and the importance of word choices.

  3. Kaklauskas, A., et al. Passive house model for quantitative and qualitative analyses and its intelligent system. Energy and Buildings (in press), on-line publication available at

    The passive house, along with models of its composite parts, has been developed globally. Simulation tools analyze its energy use, comfort, micro-climate, quality of life and aesthetics as well as its technical, economic, legal/regulatory, educational and innovative aspects. Meanwhile the social, cultural, ethical, psychological, emotional, religious and ethnic aspects operating over the course of the existence of a passive house are given minimal attention or are ignored entirely. However, all the aspects mentioned must be analyzed in an integrated manner during the time a passive house is in existence. The authors of this article implemented this goal while they participated in two Intelligent Energy Europe programs, the Northpass and the DES-EDU projects. The Passive house model for quantitative and qualitative analyses and its intelligent system was developed during the time of these projects. The model and intelligent system are briefly described in this article, which ends with a case study.


The only thing that bothers me about this: I'd *never* write a 3-paragraph abstract

from Legal Theory Blog (April 1, 2012)...


Kahan on Cultural Metacognition

Dan Kahan (Yale Law School, Cultural Cognition Project) has posted Cultural Metacognition on SSRN. Here is the abstract:

    My concern in this Article is to explain the epistemic origins of theoretical disagreement in the study of law. Scholars who agree that the proper object of legal theory is to provide a correct account of the normative and positive foundations of law are still likely to disagree—intensely—about what theories will best achieve these ends. Does fairness or welfare best capture the normative point of law? Are judicial decisions best explained by the strategic interactions of legal officials (e.g., judges, presidents, senators) or are they explained by the norms of legal institutions and the explicit content of legally authoritative texts? Is the effect of tort law best predicted by neoclassical economics or by behavioral economic models? Disagreement about the correct answers to these questions is pervasive among legal theorists. 

    At first glance, it might seem that such disagreement doesn’t really require much explanation. Theoretical disagreements might be the result of incomplete evidence and the relatively early stage of development of relevant disciplines. As evidence accumulates and theories are refined, we might expect convergence in legal theory. But it turns out that this picture is as simplistic as it is intuitively attractive. Theoretical beliefs on seemingly unconnected subjects (the adequacy of rational actor models in predicting the effect of tort rules and the question whether preference-satisfaction provides ultimate value standards) tend to cohere in familiar ways. Patterns like this do not occur by chance. Instead, they are explained by what I call "cultural metacognition"--the systematic operation of cultural commitments at the metacognitive (or "theoretical") level. 

    This Article then develops an important application of the theory of cultural metacognition: metacognitive beliefs are themselves the product of cultural cognition. The Article reports the results of a pilot study that investigates the relationship between cultural evaluation of metatheoretical frameworks (or "meta-archetypes") and second order theoretical beliefs (beliefs about the truth or soundness of first order theory statements). The research reveals that relevant cultural differences between two distinct institutionally-structured micro-communities (one clustered in southern Massachusetts and the other clustered in southern Connecticut) explain differences in the acceptance of cultural cognition as a first-order theoretical framework. The broad implications of this result for legal theory and metatheory are then explored.


Trend in conservative distrust of scientists: what does it mean? 

So I was lucky enough to have someone who was curious to know what I thought draw my attention to Gordon Gauchat's "Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010," published on-line today in the American Sociological Review.

Gauchat analyzes 35 yrs of responses to the General Social Survey item that measures how much "confidence" the public has in "the scientific community" and finds that the spread between liberals and conservatives has been widening in the last 15 years or so.  Indeed, before that, there really wasn't any gap to speak of.

Gauchat had to make some judgment calls about how to carve up his data: e.g., whether & how to aggregate responses to the GSS item (which uses a crappy three-point response measure: "great deal of confidence," "only some" or "hardly any"); how to deal with the shifting proportion of respondents identifying as "liberal" or "conservative" over the time period; whether & how to try to break the data up into discrete time periods in order to assess trends (I suspect people who do time series work might take issue with his strategy); and what variables to include as "controls" in multivariate regressions.

But I think it's clear that the trend he points to is there. And that it's interesting -- indeed, thought provoking.

Here are some thoughts the paper has provoked in me:

1. A tale of two trends. The trend that Gauchat identifies looks pretty similar to the one that public opinion surveys identify in views on climate change. That issue started to polarize people on political/ideological lines sometime close to when conservatives and liberals started to disagree on the GSS "confidence" or "trust in science" item. Compare Gauchat's Figure 1 (which I've cropped at around the point when the trend he identifies starts; the uncropped Figure is in the inset to the right) with a couple of Figures that I've taken from Dunlap, R.E. & McCright, A.M. A Widening Gap: Republican and Democratic Views on Climate Change. Environment 50, 26-35 (2008), who summarize Gallup polling on climate change during this period:


2. Three possible meanings. I'm conjecturing, of course, but I suspect that these two trends are in fact linked.  Whether they are is something that would have to be assessed with more evidence, of course. And even more important, any such assessment would have to be informed by some sort of hypothesis about what the link consists in. Here are three possibilities:

a.  The "confidence" item doesn't mean what it says -- it means "how do you feel about climate change?" One possibility is that the political polarization on responses to the GSS item that started in the 1990s is just an indirect measure of the politicization of climate change. That is, as climate change became more salient as a partisan issue, the question "how much confidence do you have in the scientific community" started to bear a politicized resonance that generated the same pattern of responses. On this view, "how confident are you in scientists" is essentially just an indicator of a latent attitude toward climate change. It's also a relatively weak indicator: it doesn't provoke as much division, in fact, as the climate change issues (in Gauchat's Figure, the y-axis is the fraction of conservatives or liberals who selected "great deal of confidence" vs. "only some" or "hardly any" combined).

If conservatives (or a significant number of them) are translating the question "do you trust scientists" into the question "what do you think about climate change," moreover, then the answer isn't a very reliable indicator of how conservatives feel about scientists in general or in nonpoliticized settings.

b. The item means what it says -- and measures the cost that climate change has imposed on the credibility of scientists with conservatives. Alternatively, conservatives are answering the question they are being asked -- and the thing that has caused them to become less trustful generally of science is the climate change controversy. That would be very sad.

c.  The item means what it says -- and is the source of climate change politicization. The final possible explanation for the linked trends (or the final one I can think of right now) is that the GSS item measures a genuine and growing distrust of scientists among conservatives, and that growing distrust is itself what caused conservatives to become distrustful of climate change science in the mid to late 1990s.  

That strikes me as the least plausible explanation, actually. Why did conservatives just happen to get distrustful of scientists at that very moment?  

Indeed, Gauchat's study would have lent more support to the hypothesis that some dispositional distrust of science is the cause of conservative resistance to climate-change science if he had found  that conservatives distrusted scientists well before evidence of climate change started to accumulate. Because conservatives weren't more distrustful of scientists than liberals before the mid 1990s, his data actually undercut the assertion that conservatism is associated with anti-science or closed-minded reasoning styles.

Or so it seems to me; am eager to see how others react. Particularly Chris Mooney, a thoughtful proponent of the "asymmetry thesis" (AT) (i.e., that Republicans or conservatives are more vulnerable to motivated reasoning than Democrats or liberals).  Gauchat sees Mooney's earlier "Republican War on Science" (RWoS) thesis -- that Reagan & the Bush Presidencies launched partisan attacks against the scientific community -- as corroborated by his data. But that actually raises the question whether RWoS and AT are consistent! 

3. Some additional puzzles if one is trying to make sense of political orientations and dispositions toward science.

a. Liberals have historically "distrusted scientists" on environmental risks. It is a staple of the scientific study of public risk perceptions that "distrust" of science predicts concern over environmental risks -- most prominently, as a historical matter, nuclear waste disposal. Historically, too, the left (liberals and, in cultural theory, egalitarians) has been most distrustful of scientists in connection with those issues. More evidence that "distrust of scientists" is often not what it seems -- a general distrust of scientists -- but a (weak) indicator of some general orientation toward the risk-issue du jour.

b. Moderates distrust scientists the most! Gauchat is interested, understandably, in the growing division between conservatives and liberals in the last 15 or so years. But across the entire three-decade period of the study, the group most distrustful has been self-described moderates.

Moreover, historically, more people characterized themselves as "moderates" than as either "liberals" or "conservatives." Conservatives, then, have historically been more trusting than most ordinary, non-partisan citizens.

Recently, distrust among conservatives has been increasing, and conservatives have now basically "caught up" to moderates. Because moderates are the most "distrustful," the migration of "moderates" into the "conservative" category could by itself be expected to increase the proportion of "conservatives" who are "distrustful" on the GSS item.
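This compositional point can be illustrated with a toy calculation. All of the numbers below are hypothetical -- they are not taken from the GSS or from Gauchat's paper -- and are chosen only to show the mechanism:

```python
def distrust_rate_after_migration(n_cons, p_cons, n_movers, p_movers):
    """Distrust rate of the 'conservative' category after n_movers former
    moderates (who distrust at rate p_movers) relabel themselves conservative.

    No individual's attitude changes; only the category's membership does.
    """
    return (n_cons * p_cons + n_movers * p_movers) / (n_cons + n_movers)

# Hypothetical starting point: conservatives less distrustful than moderates.
before = 0.30  # distrust rate among self-identified conservatives
after = distrust_rate_after_migration(
    n_cons=300, p_cons=0.30,   # original conservatives
    n_movers=100, p_movers=0.50,  # ex-moderates, more distrustful
)
print(before, round(after, 3))  # the relabeled category looks more distrustful
```

With these made-up numbers the "conservative" distrust rate rises from 0.30 to 0.35 purely through relabeling, which is the sense in which a moderate-to-conservative migration could inflate the trend.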

4. What's the story with religion? It's got to be a different one. 

Gauchat also finds that there is a parallel increase in distrust associated with religiosity (measured by church attendance). Of course, that religiosity would predict distrust (or lack of confidence) in scientists is not so surprising (not that I think this is inevitable!).  But it isn't obvious that such distrust would have increased over this period.

Gauchat's analysis, moreover, doesn't really make it obvious to me why it occurred. I read Gauchat himself as seeing the trend associated with religion as being of a piece with -- as having the same source, essentially, as -- the trend associated with conservatism and distrust of science (viz., Mooney's RWoS thesis).

But in fact, Gauchat's statistical analysis suggests that the association between religiosity and distrust of science occurred independently of the trend involving conservatism and distrust (he doesn't report any interactions between ideology and church attendance). That is, if one was a regular church goer, one became less trustful of scientists over the time period in question whether one was liberal, moderate, or conservative. Did Reagan and Bush cause liberal church goers to become anti-science too!? 

I suppose the climate change controversy could be making even highly religious liberals and moderates more distrustful of science -- although in fact, I would be super surprised if this is so, since I know from my own research that highly religious egalitarians are the most concerned of all about climate change risk!

So -- I dunno what's going on. Which I don't mind so much; one can't experience the pleasure of seeing a mystery solved if one is never perplexed.

(This is an aside, but treating religion and ideology as independent variables in a model like this is arguably a bad idea, since religion and conservative ideology are probably common indicators of a latent disposition that predicts science distrust and attitudes toward environmental risks more generally. If they are, the regression estimates for each influence controlling for the other will be unreliable. I will likely post something on the vice of "over-controlling" in studies that try to identify latent dispositional influences on risk perceptions sometime! In any case, it is clear from the raw data that Gauchat's finding on conservatism is not by any means an artifact of this modeling strategy.)
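The "over-controlling" worry in that aside can be sketched with a minimal simulation. This is hypothetical data, not a reanalysis of Gauchat's: one latent disposition drives both religiosity and conservatism (each measured with noise) as well as distrust, and the point is that entering both proxies into the same regression splits the latent effect between them, so each coefficient understates the disposition's influence:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# One latent disposition drives both indicators and the outcome.
latent = rng.normal(size=n)
religiosity = latent + 0.5 * rng.normal(size=n)
conservatism = latent + 0.5 * rng.normal(size=n)
distrust = latent + rng.normal(size=n)

def ols_slopes(y, *xs):
    """OLS slope estimates (intercept dropped) via least squares."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

alone, = ols_slopes(distrust, religiosity)               # ~0.80 in theory
joint = ols_slopes(distrust, religiosity, conservatism)  # each ~0.44 in theory

# Each proxy's coefficient shrinks once the other is "controlled for",
# even though the underlying disposition's effect hasn't changed at all.
print(round(alone, 2), np.round(joint, 2))
```

Neither joint coefficient is "wrong" as a partial association, but reading either one as the disposition's total effect would be misleading -- which is the sense in which controlling for one common indicator of a latent trait while estimating the other can produce unreliable inferences.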

* * * 

As I said, thought-provoking study -- one that will make people smarter as they share their reactions to it.

Nice work!



Empirical evidence that liberals misconstrue empirical evidence to suit their ideology

It can be found in all the blog and media reports that construe our CCP studies as empirical proof that "conservatives" are uniquely vulnerable to biased readings of empirical evidence.

I know that some researchers and informed observers hypothesize that motivated reasoning is more strongly associated with conservatism than with liberalism.  I've explained (multiple times) why I am not persuaded -- but noted, too, that the issue is one that admits of empirical study by those who are intellectually curious about it.

I'm not that interested in spending my own scarce research time trying to definitively resolve the "asymmetry" question. For, as I've explained, I think that existing studies, including ours, establish very very convincingly that there is a tendency toward biased assessments of empirical evidence across the ideological spectrum (or cultural spectra), and that that problem is more than big enough to be a concern for everyone. Being persuaded of that, I myself would rather work on trying to figure out how this dynamic --which interferes with enlightened self-government and thus harms us all -- can be mitigated.

I have no quarrel with anyone who, after thoughtful and fair-minded engagement with our studies and our interpretations of them, comes to the conclusion that our findings support inferences different from the ones we make on the basis of our data. In fact, I am eager to learn from any such person. 

But for the record, I very much do resent it when I am misdescribed as having drawn conclusions I have not drawn by people who have not even read our work (much less misread it because of the sort of "team sports" mentality -- & outright contempt for others-- that obviously drives reporting like this and this).

And I resent it just as much when the dumb & intolerant person doing the mischaracterizing is a conservative who is chortling over a simplistic misreading of our work that supposedly shows that people with liberal views are stupid.

But so as not to leave readers of this post with a biased sampling of the evidence about people's capacity to engage in impartial assessment of empirical evidence, there are also many, many thoughtful observers of diverse political orientations who get that the pathology of motivated reasoning doesn't discriminate on the basis of ideology.


Two channel solution to the science communication problem (slide show)

I gave a presentation today at Harvard Business School in connection with a seminar co-taught by Richard Freeman and Vici Sato on economics of science & innovation. Got lots of great questions & reactions.

The talk (particularly toward the end) describes a "two channel communication strategy" as a device for counteracting the distorting effect of cultural cognition.

The idea is that ordinary citizens process information about policy-relevant science along two channels. The first  (Channel 1) transmits the content of such science -- that is, the conclusions it supports about how the world works and how it can be made to work better. The second (Channel 2) conveys the cultural meaning of that information -- and in particular whether assenting to the validity of it coheres with a person's defining group commitments.

Science communication can be effective only if the messages transmitted on both channels mesh with one another. If the information being transmitted along Channel 2-- the meaning channel -- threatens a person's cultural identity, then various mechanisms of cultural cognition will block out receipt of the content being transmitted along Channel 1, no matter how clear that information is. If the meaning signal is culturally congenial, however, then ordinary individuals will give it open-minded consideration even if it is contrary to their culturally grounded prior beliefs.

Our study on message framing and geoengineering supplies empirical support for using the two-channel model to reduce cultural polarization over climate change science.

In the talk, I present evidence from that study, but I also connect the two-channel strategy more systematically to a general model of how cultural cognition interacts with all manner of information processing.  Will likely write up a paper along those lines in near future.

For now-- slides