
Is cultural cognition a bummer? Part 1

Now & again I encounter the claim (often in lecture Q&A, but sometimes in print) that cultural cognition is wrong because it is too pessimistic. Basically, the argument goes like this:

Cultural cognition holds that individuals fit their risk perceptions to their group identities. That implies it is impossible to persuade anybody to change their minds on climate change and other issues—that even trying to reason with people is futile. I refuse to accept such a bleak picture. Instead, I think the real problem is [fill in blank—usually things like “science illiteracy,” “failure of scientists to admit uncertainty,” “bad science journalism,” or “special interests distorting the truth”].

What’s wrong here?

Well, to start, there’s the self-imploding logical fallacy. It is a non sequitur to argue that because one doesn’t like the consequences of some empirical finding it must be wrong. And if what someone doesn’t like—and therefore insists “can’t be right”— is empirical research demonstrating the impact of a species of motivated reasoning, that just helps to prove the truth of exactly what such a person is denying.

Less amusingly and more disappointingly, the “too pessimistic, must be wrong” fallacy suggests that the person responding this way is missing the bigger picture. In fact, he or she is missing two bigger pictures:

  • First, the “too pessimistic, so wrong” fallacy is looking only at half the empirical evidence: studies of cultural cognition show not only which communication strategies fail and why but also which ones avoid the identified mistake and thus work better.
  • Second, the “too pessimistic, so wrong” fallacy doesn’t recognize where cultural cognition fits into a larger debate about risk, rationality, and self-government. In fact, cultural cognition is an alternative—arguably the only psychologically realistic one—to an influential theory of risk perception that explicitly does assert the impossibility of reasoned democratic deliberation about the dangers we face and how to mitigate them.

I’m going to develop these points over the course of two posts.

  1. Cultural cognition theory doesn’t deny the possibility of reasoned engagement with evidence; it identifies how to remove a major impediment to it.

People have a stake in protecting the social status of their cultural groups and their own standing in them. As a result, they defensively resist—close their minds to consideration of—evidence of risk that is presented in a way that threatens their groups’ defining commitments.

But this process can be reversed. When information is presented in a way that affirms rather than threatens their group identities, people will engage open-mindedly with evidence that challenges their existing beliefs on issues associated with their cultural groups.

Not only have I and other cultural cognition researchers made this point (over & over; every time, in fact, we turn to normative implications of our work), we’ve presented empirical evidence to back it up.


Identity-affirmative & narrative framing. The basic idea here is that if you want someone to consider the evidence that there's a problem, show the person that there are solutions that resonate with his or her cultural values.

E.g., individualists value markets, commerce, and private orderings. They are thus motivated to resist information about climate change because they perceive (unconsciously) that such information, if credited, will warrant restrictions on commerce and industry.

But individualists love technology. For example, they are among the tiny fraction of the US population that knows what nanotechnology is, and when they learn about it they instantly think its benefits are high & risks low. (When egalitarian communitarians—who readily credit climate change science—learn about nanotechnology, in contrast, they instantly think its risks outweigh its benefits; they adopt the same posture toward it that they adopt toward nuclear power. An aside, but only someone looking at half the picture could conclude that any position on climate change correlates with being either “pro-” or “anti-science” generally.)

So one way to make individualists react more open-mindedly to climate change science is to make it clear to them that more technology—and not just restrictions on it—is among the potential responses to climate change risks. In one study, e.g., we found that individualists are more likely to credit information of the sort that appeared in the first IPCC report when they are told that greater use of nuclear power is one way to reduce reliance on greenhouse-gas-emitting carbon fuel sources.

More recently, in a study we conducted on both US & UK samples, we found that making people aware of geoengineering as a possible solution to climate change reduced cultural polarization over the validity of scientific evidence on the consequences of climate change. The individuals whose values disposed them to dismiss a study showing that CO2 emissions dissipate much more slowly than previously thought became more willing to credit it when they had been given information about geoengineering & not just emission controls as a solution.

These are identity-affirmation framing experiments. But the idea of narrative is at work in this too. Michael Jones has done research on use of "narrative framing" -- basically, embedding information in culturally congenial narratives -- as a way to ease culturally motivated defensive resistance to climate change science. Great stuff.

Well, one compelling individualist narrative features the use of human ingenuity to help offset environmental limits on growth, wealth production, markets & the like. Only dumb species crash when they hit the top of Malthus's curve; smart humans, history shows, shift the curve.

That's the cultural meaning of both nuclear power and geoengineering. The contribution they might make to mitigating climate change risks makes it possible to embed evidence that climate change is happening and is dangerous in a story that affirms rather than threatens individualists’ values. Hey—if you really want to get them to perk their ears up, how about some really cool nanotechnology geoengineering?

Identity vouching. If you want to get people to give open-minded consideration to evidence that threatens their values, it also helps to find a communicator whom they recognize as sharing their outlook on life.

For evidence, consider a study we did on HPV-vaccine risk perceptions. In it we found that individuals with competing values have opposing cultural predispositions on this issue. When such people are shown scientific information on HPV-vaccine risks and benefits, moreover, they tend to become even more polarized as a result of their biased assessments of it.

But we also found that when the information is attributed to debating experts, the position people take depends heavily on the fit between their own values and the ones they perceive the experts to have.

This dynamic can aggravate polarization when people are bombarded with images that reinforce the view that the position they are predisposed to accept is espoused by experts who share their identities and denied by ones who hold opposing ones (consider climate change).

But it can also mitigate polarization: when individuals see evidence they are predisposed to reject being presented by someone whose values they perceive they share, they listen attentively to that evidence and are more likely to form views that are in accord with it.

Look: people aren’t stupid. They know they can’t resolve difficult empirical issues (on climate change, on HPV-vaccine risks, on nuclear power, on gun control, etc.) on their own, so they do the smart thing: they seek out the views of experts whom they trust to help them figure out what the evidence is. But the experts they are most likely to trust, not surprisingly, are the ones who share their values.

What makes me feel bleak about the prospects of reason isn’t anything we find in our studies; it is how often risk communicators fail to recruit culturally diverse messengers when they are trying to communicate sound science.

I refuse to accept that they can’t do better!

Part 2 here.


Jones, M. D., & McBeth, M. K. (2010). A Narrative Policy Framework: Clear Enough to Be Wrong? Policy Studies Journal, 38, 329-353.

Kahan, D. M. (2010). Fixing the Communications Failure. Nature, 463, 296-297.

Kahan, D. M., Braman, D., Cohen, G. L., Gastil, J., & Slovic, P. (2010). Who Fears the HPV Vaccine, Who Doesn't, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law & Human Behavior, 34, 501-516.

Kahan, D. M., Braman, D., Slovic, P., Gastil, J., & Cohen, G. (2009). Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology, 4, 87-91.

Kahan, D. M., Slovic, P., Braman, D., & Gastil, J. (2006). Fear of Democracy: A Cultural Evaluation of Sunstein on Risk. Harvard Law Review, 119, 1071-1109.

Kahan, D. M. (2012). Cultural Cognition as a Conception of the Cultural Theory of Risk. In R. Hillerbrand, P. Sandin, S. Roeser, & M. Peterson (Eds.), Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. Springer London.

Kahan, D. M., Jenkins-Smith, H., Tarantola, T., Silva, C., & Braman, D. (2012). Geoengineering and the Science Communication Environment: A Cross-Cultural Experiment. CCP Working Paper No. 92, Jan. 9, 2012.

Sherman, D. K., & Cohen, G. L. (2002). Accepting Threatening Information: Self-Affirmation and the Reduction of Defensive Biases. Current Directions in Psychological Science, 11, 119-123.

Sherman, D. K., & Cohen, G. L. (2006). The Psychology of Self-Defense: Self-Affirmation Theory. In M. P. Zanna (Ed.), Advances in Experimental Social Psychology, Vol. 38, 183-242.



Handbook of Risk Theory

Really really great anthology:

Roeser, S., Hillerbrand, R., Sandin, P. & Peterson, M. Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk, (Springer London, Limited, 2012).

It's edited by Sabine Roeser, who has herself done great work to integrate the empirical study of emotion and risk with a sophisticated philosophical appreciation of their significance.

Too bad the set costs so darn much! Guess Springer figures only university libraries will want to buy it (wrong!), but even they aren't made of cash!


Answer to Andy Revkin about Murray Gell-Mann

Andy Revkin did a cool interview of Nobel Prize-winning physicist Murray Gell-Mann, who thinks people are dumb b/c they don't get climate change.

Andy's post asks (in title): Can Better Communication of Climate Science Cut Climate Risks?

My response to Andy's question:

Answer is no & yes.

No, if "better communication of science" means simply improving how the content of sound scientific information is specified & transmitted.

Yes, if "better communication" means creating a deliberative environment that is protected from the culturally partisan cues that have poisoned the discussion of climate change.


Consider:

1. the most science-literate citizens in the U.S. are the most culturally divided on climate change; and

2. a dude who hasn't finished high school is 50% likely to answer "yes" if asked whether antibiotics kill viruses (an NSF science literacy question) but has no problem whatsoever figuring out that he should go to a Dr. when he has strep throat & take the pills that she prescribes for him.

People are really super amazingly good at figuring out who the experts are and following their advice. That skill doesn't depend on their having expert knowledge or having that knowledge "communicated" to them in terms that would allow them to get the science. But it can't work in a toxic communication environment.


1. The climate change problem doesn't have anything to do with how scientists communicate. It has everything to do with how cultural elites talk about science.

2. It doesn't matter that Gell-Mann is innocent of the science of science communication. It is a mistake to think that has anything to do with the problem. It would be nice if he understood the science of science communication, in the same way that it would be nice for citizens to know the science behind antibiotics: it's intrinsically interesting but not essential to what they do—as long as they follow the relevant experts' advice when they are sick, aren't doing quantum physics, etc.


 p.s. Can you please interview Freeman Dyson, too?


Cultural cognitive reality monitoring

My Yale colleague Marcia Johnson in the psych dept has written some really cool papers on "cultural reality monitoring" (abstracts & links below). The basic idea is that institutions perform for members of a group a cognitive certification/validation role with respect to perceptions, beliefs, memories, and like mental phenomena, much akin to the certification/validation role that certain parts of the brain play for an individual. There's an element of analogy here, but also an element of identity: the cognitive processes that individuals use to "monitor reality" are in fact oriented by the functioning of the institutions.

There are a lot of parallels between Johnson's work and Mary Douglas's. But unlike Douglas (see How Institutions Think, in particular), Johnson is trying to cash out the idea of "what we see is who we are" with a set of individual-level psychological mechanisms, not a "functionalist" theory that sees collectives as agents.

By "psychologizing" cultural theory (here I'm scripting Johnson into a role that she doesn't explicitly present herself as filling; but I am pretty sure she wouldn't object!), Johnson does something very helpful for it: she supplies cultural theory with some creditable behavioral mechanisms, ones that hang together conceptually, have points of contact with a wide variety of (to some extent parallel, and to some extent competing) empirical programs in the social sciences, and are suggestive of and amenable to lots of meaningful empirical testing.

At the same time, by "culturizing" psychology, Johnson does something very useful for it: she furnishes it with a plausible (and again testable) account of the source of individual differences, one that explains how the single set of mechanisms known to psychology can generate systematic divergence between members of different social groups. (It's a lot more complicated, I'm afraid, than "slow" & "fast" ....)

Johnson's work thus helps to bridge Douglas's cultural theory of risk and Slovic's psychometric one, the two major theories of risk perception of the 20th & 21st centuries.

Johnson, M.K. Individual and Cultural Reality Monitoring. The ANNALS of the American Academy of Political and Social Science 560, 179-193 (1998).

What is the relationship between our perceptions, memories, knowledge, beliefs, and expectations, on one hand, and reality, on the other? Studies of individual cognition show that distortions may occur as a by-product of normal reality-monitoring processes. Characterizing the conditions that increase and decrease such distortions has implications for understanding, for example, the nature of autobiographical memory, the potential suggestibility of child and adult eyewitnesses, and recent controversies about the recovery of repressed memories. Confabulations and delusions associated with brain damage, along with data from neuroimaging studies, indicate that the frontal regions of the brain are critical in normal reality monitoring. The author argues that reality monitoring is fundamental not only to individual cognition but also to social/cultural cognition. Social/cultural reality monitoring depends on institutions, such as the press and the courts, that function as our cultural frontal lobes. Where does normal social/cultural error in reality monitoring end and social/cultural pathology begin?


Johnson, M.K. Reality monitoring and the media. Applied Cognitive Psychology 21, 981-993 (2007).

The study of reality monitoring is concerned with the factors and processes that influence the veridicality of memories and knowledge, and the reasonableness of beliefs. In thinking about the mass media and reality monitoring, there are intriguing and challenging issues at multiple levels of analysis. At the individual level, we can ask how the media influence individuals' memories, knowledge and beliefs, and what determines whether individuals are able to identify and mitigate or benefit from the media's effects. At the institutional level, we can ask about the factors that determine the veridicality of the information presented, for example, the institutional procedures and criteria used for assessing and controlling the quality of the products produced. At the inter-institutional level we can consider the role that the media play in monitoring the products and actions of other institutions (e.g. government) and, in turn, how other institutions monitor the media. Interaction across these levels is also important, for example, how does individuals' trust in, or cynicism about, the media's institutional reality monitoring mechanisms affect how individuals process the media and, in turn, how the media engages in intra- and inter-institutional reality monitoring. The media are interesting not only as an important source of individuals' cognitions and emotions, but for the key role the media play in a critical web of social/cultural reality monitoring mechanisms.



More on ideological symmetry of motivated reasoning (but is that really what's important?)

I have posted a couple times (here & here) on the "symmetry" question -- whether dynamics of motivated reasoning generate biased information processing uniformly (more or less) across cultural or ideological styles or are instead confined to one (conservativism or hierarchy-individualism), as proponents of the "asymmetry thesis" argue.

Chris Mooney has applied himself to the symmetry question with incredible intensity and has an important book coming out that marshals all the evidence he can find (on both sides) and concludes that the asymmetry thesis is right. But Mooney has now concluded that the latest CCP study on "geoengineering and the science communication environment" is evidence against his position (not a reason to abandon it, of course; that's not how science works—one simply adds what one determines to be valid study findings to the appropriate side of the scale, which continues to weigh the competing considerations in perpetuity).

Mooney's assessment -- and his public announcement of it -- speak well of his own open-mindedness and ability to check the influence of his own ideological commitments on his assessments of evidence. But still, I think he has far less reason than he makes out to be disappointed by our results.

In our study, we tested the hypothesis that exposing subjects (from US & UK) to information on geoengineering would reduce cultural polarization over the validity of a climate change study (one that was in fact based on real studies published in Nature and PNAS).  

We predicted that polarization would be reduced among such subjects relative to ones exposed to a frame that emphasized stricter carbon-emission controls. Restricting emissions accentuates the conflicting cultural resonances of climate change, which gratify the egalitarian communitarian hostility to commerce & industry and threaten hierarchical individualist commitment to the same. Geoengineering, in contrast, offers a solution that affirms the latter's pro-technology sensibilities and thus mitigates defensive pressure on them to resist considering evidence that climate change is happening & is a serious risk.  

The experiment corroborated the hypothesis: in the geoengineering group, cultural polarization was significantly less than in the emission-control group.

The reason that Mooney sees this result as evidence against the "asymmetry" thesis is that assignment to the geoengineering condition in the experiment affected the views of both egalitarian communitarians and hierarchical individualists. The hierarchical individualists viewed the study as more valid, and the egalitarian communitarians as less valid, than their respective counterparts in the emission-control condition. In other words, there was less polarization because both groups moved toward the mean—not because hierarchical individualists alone moderated their views.

Okay. I guess that's right. But for reasons stated in one of my earlier posts, I don't think that the study really adds much weight to either side of the scale being used to evaluate the symmetry question. 

As I explained, to test the asymmetry thesis, studies need to be carefully designed to reflect the various competing theories that give us reason to expect either symmetry or asymmetry in motivated reasoning. Studies of that sort, if designed properly, will yield evidence that is unambiguously consistent with one inference (symmetry) or the other (asymmetry).

Our study wasn't designed to do that; it was designed to test a theory that predicted that appropriately crafting the cultural meaning conveyed by sound science could mitigate cultural polarization over it. The study generated evidence in support of that theory. But because the design didn't reflect competing predictions about how the effect of the experimental treatment would be distributed across the range of our culture measures, the way that the effect happened to be distributed (more or less uniformly) doesn't rule out the possibility that there really is an important asymmetry in motivated reasoning.

I think the same is true, moreover, for the vast majority of studies on ideology and motivated reasoning (maybe all; but Mooney, who has done an exhaustive survey, no doubt knows better than I if this is so): their designs aren’t really geared to generating results that would unambiguously support only one inference in the asymmetry debate.

In the case of our (CCP) studies, at least, there's a reason for this: we don't really see "who is more biased" to be the point of studying these processes. 

Rather, the point is to understand why democratic deliberations over policy-relevant science sometimes (not always!) generate cultural division and what can be done to mitigate this state of affairs, which is clearly inimical, in itself, to the interest of a democratic society in making the best use it can of the best available evidence on how to promote its citizens' wellbeing.

That was the point of the geoengineering study. What it showed -- much more clearly than anything that bears on the ideological symmetry of motivated reasoning -- is that there are ways to improve the quality of the science communication environment so that citizens of diverse values are less likely to end up impelled in opposing directions when they consider common evidence.

For reasons I have stated, I am in fact skeptical about the asymmetry thesis. Of course, I'm open to whatever the evidence might show, and am eager in particular to consider carefully the case Mooney makes in his forthcoming book.

But at the end of the day, I myself am much more interested in the question of how to improve the quality of science communication in democracy.  When there is evidence that appears to speak to that question, then I think it is more important to figure out exactly what answer it is giving, and how much weight we should afford it, than to try to figure out what it might have to say about "who is more biased."



New CCP geoengineering study

New study/paper, hot off the press:


Geoengineering and the Science Communication Environment: A Cross-Cultural Experiment

We conducted a two-nation study (United States, n = 1500; England, n = 1500) to test a novel theory of science communication. The cultural cognition thesis posits that individuals make extensive reliance on cultural meanings in forming perceptions of risk. The logic of the cultural cognition thesis suggests the potential value of a distinctive two-channel science communication strategy that combines information content (“Channel 1”) with cultural meanings (“Channel 2”) selected to promote open-minded assessment of information across diverse communities. In the study, scientific information content on climate change was held constant while the cultural meaning of that information was experimentally manipulated. Consistent with the study hypotheses, we found that making citizens aware of the potential contribution of geoengineering as a supplement to restriction of CO2 emissions helps to offset cultural polarization over the validity of climate-change science. We also tested the hypothesis, derived from competing models of science communication, that exposure to information on geoengineering would provoke discounting of climate-change risks generally. Contrary to this hypothesis, we found that subjects exposed to information about geoengineering were slightly more concerned about climate change risks than those assigned to a control condition.


much scarier than nanotechnology

someone should warn people -- maybe with a contest for an appropriate X-free zone logo.



question on feedback between cultural affinity & credibility

John Timmer writes:

Greetings -
I've read a number of your papers regarding how people's cultural biases influence their perception of expertise.  I was wondering if you were aware of any research on the converse of this process – where people read material from a single expert and, in the absence of any further evidence, infer their cultural affinities. I'm intrigued by the prospect of a self-reinforcing cycle, where readers infer cultural affinity based on objective information (i.e., acceptance of the science of climate change), and then interpret further writing through that inferred affinity.
Any information or thoughts you could provide on this topic would be appreciated.

Am hoping others might have better answer than me-- if so, please post them! -- but here is what I said:

Hi, John. Interesting. Don't know of any.

Some conjectures:
a. I would die of shock if there weren't a good number of studies out there, particularly in political science, looking at how position-taking creates a kind of credibility aura or spillover or persuasiveness capital etc. -- & at how durable it is.
b. There is probably some stuff out there on how citizens simultaneously update their beliefs when they get expert opinions & update their views on experts' knowledge & credibility as they get information from those experts that contradicts their beliefs. Pretty tricky to figure out the right way to do that even from a "rational decisionmaking" point of view! 
I wish I could say, oh, "read this, this & this" -- but I haven't seen these things specifically, or if I have I didn't make note of them. But there's so much stuff on confirmation bias, bayesian updating, & source credibility that it is just inconceivable that these issues haven't been looked at. If I see something (likely now I'll take note), I'll let you know.
c. There's lots of stuff on in-group affinities & credibility & persuasion. Our stuff is like that. But I *doubt* that the interaction of this w/ a & b -- & the contribution of this feedback effect in generating conflict over things like societal risks -- has been examined. That's exactly what you're interested in, of course. But I'd start w/ a & b & see what I found!




Industrial strength risk perception measure

In my last post, I presented some data that displayed how public perceptions of risk vary across putative hazards and how perceptions of each of those risks varies between cultural subgroups.  

 The risk perceptions were measured by asking respondents to indicate on “a scale of 0-10 with 0 being ‘no risk at all’ and 10 meaning ‘extreme risk,’ how much risk [you] would ... say XXX poses to human health, safety, or prosperity.”

I call this the “Industrial Strength Measure” (ISM) of risk. We use it quite frequently in our studies, and quite frequently people ask me (in talks, in particular) to explain the validity of ISM — a perfectly good question given the generality of ISM.

The nub of the answer is that there is very good reason to expect subjects’ responses to this item to correlate very highly with just about any more specific question one might pose to members of the public about a particular risk.

The inset to the right, e.g., shows that responses to ISM as applied to “climate change” correlate at between 0.75 & 0.87 with responses (of participants in the survey featured in the last post) to more specific items that relate to beliefs about whether global temperatures are increasing, whether human activity is responsible for any such temperature rise, and whether there will be “bad consequences for human beings” if “steps are not taken to counteract” global warming. (The ISM is "GWRISK" in the correlation matrix.)

As reflected in the inset, too, the items as a group can be aggregated into a very reliable scale (one that has a “Cronbach’s alpha” of 0.95 — the highest score is 1.0, and usually over 0.70 is considered “good”).
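For readers who want to see the arithmetic, here is a minimal sketch (Python with NumPy; the `cronbach_alpha` helper and the simulated data are mine, not the study's) of how Cronbach's alpha is computed from a matrix of item responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the summed scale
    return (k / (k - 1)) * (1.0 - sum_item_vars / total_var)

# Simulated data: 4 items that are all noisy readings of one latent disposition
rng = np.random.default_rng(1)
latent = rng.normal(size=1000)
items = latent[:, None] + rng.normal(scale=0.5, size=(1000, 4))
alpha = cronbach_alpha(items)  # high, because the items covary strongly
```

With items this strongly related, alpha comes out around 0.9; make the per-item noise bigger and it falls, which is exactly what the statistic is tracking.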

That means, psychometrically, that the responses of the subjects can be viewed as indicators of a single disposition -- here, to credit or discredit climate change risks. One is warranted in treating the individual items as alternative indirect measures of that disposition, which itself is "latent" or unobserved.

None is a perfect measure of that disposition; they are all "noisy"--all subject to some imprecision that is essentially random.  

But when one combines such items into a composite scale, one necessarily gets a more discerning measure of the unobserved or latent variable. What they are measuring in common gets summed (essentially), and their random noise cancels out!
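A quick simulation makes the noise-cancellation point concrete. This is an illustrative sketch, not the study's data: five hypothetical items are each the same latent disposition plus independent random error, and the averaged composite tracks the latent variable more closely than any single item does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
latent = rng.normal(size=n)              # the unobserved risk disposition
noise = rng.normal(size=(n, 5))          # independent error in each item
items = latent[:, None] + noise          # 5 noisy indicators of the same thing

composite = items.mean(axis=1)           # aggregation: random noise partly cancels

r_single = np.corrcoef(latent, items[:, 0])[0, 1]    # ~0.71 in expectation
r_composite = np.corrcoef(latent, composite)[0, 1]   # ~0.91 in expectation
```

Averaging five items cuts the error variance by a factor of five, which is why the composite's correlation with the latent disposition is reliably higher than any single item's.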

What goes for climate change, moreover, tends to go for all manner of risk. At the end of the post is a short annotated bibliography of articles showing that ISM correlates with more specific indicators that can be combined into valid scales for measuring particular risk perceptions.

There are two upshots of this, one theoretical and the other practical.

The theoretical upshot is that one should be wary of treating various items that have the same basic relation or valence toward a risk as being meaningfully different from each other. Risk items like these are all picking up on a general disposition--an affective “yay” or “boo,” almost. If you try to draw inferences based on subtle differences in the responses people are giving to differently worded items that reflect the same pro- or con- attitude, you are likely just parsing noise.

The second, practical upshot is that one can pretty much rely on any member of a composite scale as one's measure of a risk perception. All the members of such a scale are measuring the “same thing.” 

No one of them will measure it as well as a composite scale. So if you can, ask a bunch of related questions and aggregate the responses.

But if you can’t do that — because say, you don’t have the space in your survey or study to do it— then you can go ahead and use the ISM, e.g., which tends to be a very well behaved member of any reliable scale of this sort.

ISM isn't as discerning as a reliable composite scale, one that combines multiple items. It will be noisier than you’d like. But it is valid -- a true reflection of the latent risk disposition -- and unbiased (it will vary in the same direction as the full scale would).

A related point is that about the only thing one can meaningfully do with either a composite scale or a single measure like ISM  is assess variance.

The actual responses to such items don't have much meaning in themselves; it's goofy to get caught up on why the mean on ISM is 5.7 rather than 7.1, or whether people "strongly agree" or only "slightly agree" that the earth is heating up, etc.

But one can examine patterns in the responses that different groups of people give, and in that way test hypotheses or otherwise learn something about how the latent attitude toward the risk or risks in question is being shaped by social influences.

That is, regardless of the mean on ISM, if egalitarian communitarians are 1 standard deviation above & hierarchical individualists 1 standard deviation below that mean, then you can be confident people like that really differ with respect to the latent disposition the ISM is measuring toward climate change risks.
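A small sketch with simulated data (the group labels and numbers are hypothetical, chosen only to mirror the example above) shows how such a comparison is made: what matters is where each group's mean sits in standard-deviation units relative to the sample mean, not the raw scale value.

```python
import numpy as np

# Hypothetical data: two cultural groups' ISM responses on a latent disposition.
rng = np.random.default_rng(2)
ec = rng.normal(loc=6.7, scale=1.0, size=300)   # e.g., egalitarian communitarians
hi = rng.normal(loc=4.7, scale=1.0, size=300)   # e.g., hierarchical individualists

pooled = np.concatenate([ec, hi])
z_ec = (ec.mean() - pooled.mean()) / pooled.std(ddof=1)   # standardized group mean
z_hi = (hi.mean() - pooled.mean()) / pooled.std(ddof=1)

print(f"EC: {z_ec:+.2f} SD, HI: {z_hi:+.2f} SD from the sample mean")
```

Shifting every raw response up or down by a constant would leave these standardized differences unchanged -- which is exactly why the variance, not the mean, carries the inferential weight.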

That’s what I did with the data in my last post: I used ISM to look at variance across risks for the general public, and variance between cultural groups with respect to those same risks. 

See how much fun this can be?!


  • Dohmen, T., et al. Individual risk attitudes: Measurement, determinants, and behavioral consequences. Journal of the European Economic Association 9, 522-550 (2011). Finds that a "general risk question" (the industrial grade 0-10) reliably predicts more specific risk appraisals, & behavior, in a variety of domains & is a valid & economical way to test for individual differences.
  • Ganzach, Y., Ellis, S., Pazy, A. & Ricci-Siag, T. On the perception and operationalization of risk perception. Judgment and Decision Making 3, 317-324 (2008). Finding that the "single item measure of risk perception" as used in the risk perception literature (the industrial grade "how risky" Likert item) better captures perceived risk of financial prospects & linking the finding to Slovic et al.'s "affect heuristic" in risk perception studies.
  • Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004). Reports various study findings that support the conclusion that members of the public tend to conform more specific beliefs about putative risk sources to a global affective appraisal.
  • Weber, E.U., Blais, A.-R. & Betz, N.E. A Domain-specific Risk-attitude Scale: Measuring Risk Perceptions and Risk Behaviors. Journal of Behavioral Decision Making 15, 263-290 (2002). Reporting findings that validate industrial grade measure ("how risky you perceive each situation" on 5-pt "Not at all" to "Extremely risky" Likert item) for health/safety risks & finding that it predicts both perceived benefit & risk-taking behavior with respect to particular putative risks; also links finding to Slovic et al.'s "affect heuristic."

U.S. risk-perception/polarization snapshot

The graphic below & to the right (click for bigger view) reports perceptions of risk as measured in a U.S. general population survey last summer.  The panel on the left reports sample-wide means; the one on the right, means by subpopulation identified by its cultural worldview. 

By comparing, one can see how culturally polarized the U.S. population is (or isn’t) on various risks ranked (in descending order) in terms of their population-wide level of importance.

Some things to note:

  • Climate change (GWRISK) and private handgun possession (GUNRISK) seem relatively low in overall importance but are highly polarized. This helps to illustrate that the political controversy surrounding a risk issue is determined much more by polarization than by overall importance.
  • Emerging technologies: Synthetic biology (SYNBIO) and nanotechnology (NANO) are relatively low in importance and, even more critically, free of cultural polarization. This means they are pretty inert, conflict-wise. For now.
  • Vaccines, schmaccines. Childhood vaccination risk (VACCINES) is lowest in perceived importance and has essentially zero cultural variance. This issue gets a lot of media hype in relation to its seeming importance.
  • Holy s*** on distribution of illegal drugs (DRUGS)! Scarier than terrorism (!) and not even that polarized. (This nation won’t elect Ron Paul President.)
  • Look at speech advocating racial violence (HATESPEECH). Huh!
  • Marijuana distribution (MARYJRISK) and teen pregnancies (TEENPREG) feature hierarch-communitarian vs. egalitarian-individualist conflict. Not surprising.

Coming soon: cross-cultural cultural cognition! A comparison of US & UK.


Sood & Darley's "plasticity of harm"

Last semester I taught a seminar at Harvard Law School on “law and cognition.”  Readings consisted of about 50 or so papers, most of which featured empirical studies of legal decisionmaking.  I will now & again describe some of them. 

One of the most interesting was “The Plasticity of Harm in the Service of Punishment Goals: Legal Implications of Outcome-Driven Reasoning, ” 100 Calif. L. Rev. (forthcoming 2012), by Avani Sood—whom I convinced to attend the seminar session in which we discussed it—and John Darley (a legendary social psychologist who now does a lot of empirical legal studies).

The paper contains a set of experiments in which subjects are shown to impute “harm” more readily to behavior when it offends their moral values than when it doesn’t.  This dynamic, which reflects a form of motivated reasoning, subverts legal doctrines rooted in the liberal “harm principle”—which prohibits punishment of behavior that people find offensive but that doesn’t cause harm.

 I liked this paper a lot the first time I read it—as an early draft presented at the 2010 Conference on Empirical Legal Studies—but was all the more impressed this time by a new study S&D had added. In that study, S&D examined whether subjects’ perceptions of harm were sensitive to the message of a  political protestor who was alleged to have “harmed” bystanders by demonstrating in the nude.

S&D first conducted a “between subjects” version of the design in which one half the subjects were told that the protestor was expressing an “anti-abortion” message and the other half that the protestor was expressing a “pro-abortion” one. S&D found that subjects more readily perceived harm, and favored a more severe sanction, when the protestor’s message defied the subjects’ own positions on abortion.

That was in itself a nice result (it extended other studies in the paper by showing that diverse moral or ideological attitudes could generate systematic disagreements in perceptions of harm) but the best part was a follow-up, within-subject version of the same design, in which all subjects assessed both pro- and anti-abortion protestors. Subjects now rated the behavior of both protestors--the one whose message matched their own position and the one whose message didn’t--equally harmful, and deserving of equally severe punishments.

The result was valuable for S&D because it addressed a potential objection to the paper: that subjects in their various studies didn’t understand that offense to their (or others’) moral sensibilities doesn’t count as a “harm” for purposes of the law. If that had been so, then the results in the within-subject design presumably would have reflected the same correspondence between protestor message and subject ideology as the results in the between-subjects design. The difference suggested that the subjects who had evaluated only one protestor at a time had been unconsciously influenced by their own ideology to see harm conditional on their opposition to the protestor’s message.

This result in fact made me feel better about some of the cultural cognition studies that I and my collaborators have done. In a number of papers, we have been exploring the phenomenon of “cognitive illiberalism,” which for us refers exactly to the vulnerability of citizens to a form of motivated reasoning that subverts their commitment to liberal principles of neutrality in the law.

One of the possible objections to our studies was that we were assuming such a commitment—when in fact our subjects could have been consciously indulging partisan sensibilities in assessing “facts” like whether a fleeing driver had exposed pedestrians to a “substantial risk of death” or a political demonstrator had “shoved” or “blocked” onlookers. I think we had reason to discount this possibility before. But based on S&D’s result, we now have a lot more!

I also really like the S&D result because of what it suggests about the prospects & even the mechanics of “debiasing” in this setting. The disparity between their between- and within-subject designs demonstrated not only that their subjects’ conscious commitments to liberal principles were being betrayed by the sensitivity of their perceptions to their ideologies. It suggested, too, that making subjects conscious of the risk of this sort of defeat could equip them to overcome it.

One might be tempted to think that all one has to do is tell citizens to “consider the opposite” if one wants to counteract culturally or ideologically motivated reasoning.  Sadly, I don’t think things are that simple, at least outside the lab. But that’s a story for another time.



Cultural vs. ideological cognition, part 3

This is last of 3 posts addressing the question  “Why cultural worldviews rather than liberal-conservative or Democrat-Republican?” in our studies of risk perception & science communication.

The first & second posts identified the explanatory, predictive, and prescriptive advantages of using the two-dimensional culture measures instead of a one-dimensional left-right one.

Part 3: The measurement conception of dispositional constructs

This post backs off the “culture dominates ideology” trope — one that could be read into the last two posts but that I actually strongly disavow.

Indeed, my third point—which is actually the most important—is that the question “why cultural worldviews & not left-right?” often is ill-posed. The motivation for it, it often turns out, is an understanding of the point of identifying dispositional sources of conflict over societal risk that is, if not mistaken, then at least unappealing (to me).

I’ll call the position I have in mind the “metaphysical” conception of cognitive dispositions. I’ll contrast it with another understanding—the one I endorse—that I’ll call the “measurement” conception.

From the point of view of the metaphysical conception, systems of ideas like “liberalism” and “conservativism,” “individualism” and “collectivism,” and even more elaborate constructs are thought to be actual, worldly entities. They are things that are really out there—like trees and lampposts and atoms (in fact, it is a related mistake to think of atoms as worldly phenomena).

Not all of them, but certain of them. Indeed, the primary goal of studying the contribution that these systems make to cognition of politically consequential facts is to identify the “real” one or ones and to expose the nonexistence (or at least inconsequence) of the others. One does this by constructing empirical study designs (or, more likely, multivariate statistical tests) that are asserted to “show” that only the “real” one or ones “really” “explain” the relevant state of affairs—or in any case explain “more” of it than does any competing dispositional entity.

The measurement conception sees ideological and cultural constructs as merely tools. Their mission in the scholarly study of perceptions of risk and like facts isn’t to enable demonstration of what “entity” is “really” causing them. Rather, it is to equip us for making sense of what we already know, albeit imprecisely, and even more important for enhancing our ability to manage and control the state of affairs we live in.

We already know the broad outlines of conflict over risk and related facts. It is plain to any socially competent observer that groups whose members display opposing outlooks or styles disagree, often intensely, over diverse packages of risk claims—ones relating to what sorts of behavior or other contingencies threaten society. But we don’t understand this phenomenon well enough to be able to explain, predict, and most importantly of all manage how it affects our collective lives.

The measurement conception says that the key to acquiring that sort of insight isn’t to identify (much less argue about) what “really” causes that sort of conflict but rather to perfect our ability to measure the dispositions associated with the competing sets of risk perceptions with which we are familiar. With reliable and valid measures in hand, we can satisfy (or at least go about trying to satisfy, in the only way that has a chance of succeeding) our interests in explanation, prediction, and prescription through appropriately designed scientific tests.

The methods of latent variable modeling are the ones best suited for fashioning such measures.  Simply put, these methods aim to enable indirect measurement of some unobservable, or at least unobserved thing on the basis of observable, directly measurable “indicators” or correlates of it. They include the various techniques that psychologists and other social scientists use for measuring diverse sorts of aptitudes and propensities (including attitudes and cognitive styles) that are hypothesized to be the sources of individual differences in one or another behavior, ability, belief, or whathaveyou.
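A minimal sketch of the latent-variable idea, with synthetic data (nothing here comes from the actual CCP scales): several observed "indicators" are each generated from one unobserved disposition plus noise, and the leading principal component -- used here as a rough stand-in for a full factor-analytic model -- recovers the disposition they share.

```python
import numpy as np

# Synthetic data: five observed indicators of one unobserved disposition.
rng = np.random.default_rng(3)
latent = rng.normal(size=1000)                                       # the latent variable
indicators = latent[:, None] + rng.normal(scale=0.8, size=(1000, 5)) # observed correlates

# Center the indicators, then take the leading principal component
# as the estimate of the latent disposition.
X = indicators - indicators.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
estimate = X @ vt[0]

# Sign of a principal component is arbitrary, so compare in absolute value.
r = abs(np.corrcoef(latent, estimate)[0, 1])
print(f"correlation with true latent variable: {r:.2f}")
```

Even though no single indicator is a clean measure, the component they share correlates strongly with the true latent variable -- which is the whole point of indirect measurement.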

In the study of the cultural cognition of risk (at least as I understand it), the items that make up the “hierarchy-egalitarian” and “individualism-collectivism” scales are nothing more than latent-variable indicators. The responses that study subjects give to them generate patterns, which can then be assessed to confirm that they are indeed measuring some unobserved common disposition in those people, and to assess how discerningly they are measuring it.

We hypothesize—and then try to corroborate or disprove through empirical studies—that variance in the latent disposition measured in this way generates the distinctive (and very peculiar!) patterns of risk perception that animate debates over issues as seemingly unrelated (at least in any causal sense) as the reality and sources of climate change, the impact of gun control on crime rates, the risks and benefits of the HPV vaccine, etc.

 But on this account, our cultural “worldviews” are merely indicators of this latent dispositional propensity. They are not themselves the “thing” that causes conflict over risk or anything else. Nor are they exclusive of other possible measures of the propensities that do.

Indeed, ideologies like “liberalism” and “conservativism” are also indicators of those very propensities.  We all know already that they are—we can see that just by looking around us. We can also see that many other characteristics—region of residence and religious affiliation, for example—are also bound up with the outlooks and styles that animate these conflicts.

Indeed, it might well be feasible to combine diverse indicators such as these with each other and with our cultural worldview scales, and thereby generate an even more discerning measure of the latent dispositions or propensities at work in risk conflicts. (It is, in fact, statistically mindless to identify their “independent” influence through multivariate regression, since the covariance that such models “partial out” is exactly what one wants to exploit if one has reason to think they are common indicators of a latent variable.)  We have done some work like this, including studies that show how characteristics like gender and race interact with cultural worldviews and others (here & here, e.g.) that try to simulate how collections of attributes treated as cultural profiles or “styles” can influence perceptions.

Indeed, the only justification for preferring a measurement strategy that makes use of fewer rather than more types of indicators is that doing so is, at least for some purpose, more efficient or useful. These are the usual points made in favor of “parsimony,” although stripped of any dogmatic preference for simplicity; the goal is to find the optimal tradeoff between methodological tractability, measurement precision, and ultimately explanatory, descriptive, and prescriptive power.

And it is only on that basis that I would justify using our culture measures over “left-right” ideology ones. That is how my previous two posts should be understood. I emphatically disavow any intention to defend “culture” over “ideology” in the way that is envisioned by the metaphysical conception of cognitive dispositions.

Indeed, the decisive appeal of the measurement conception, for me, is that it avoids all the baggage of a metaphysical style of engagement with social phenomena.

part 1

part 2


Cultural vs. ideological cognition, part 2

This is part 2 of the (or an) answer to the question: “Why cultural worldviews rather than liberal-conservative or Democrat-Republican?” in our studies of risk perception & science communication.

In the last post, I connected our work to Aaron Wildavsky’s surmise that Mary Douglas’s two-dimensional worldview scheme would explain more mass beliefs more coherently than any one-dimensional right-left measure. (BTW, I don’t think our work has “proven” Wildavsky was “right”; in fact, I think that way of talking reflects a mistaken, or in any case an unappealing understanding of the point of identifying the sources of public contestation over risk, something I’ll address in the final installment of this series of posts.)

Part 2: Motivated system 2 reasoning

I ended that post with the observation that the cultural cognition worldview scales tend to do a better job in explaining conflict among individuals who are low in political sophistication. In this post, I want to suggest that cultural worldviews are also likely to shed more light on conflict among individuals who are high in technical-reasoning proficiency—or what Kahneman refers to as “system 2” reasoning.

In Kahneman’s version of the dual process theory, “System 2” is the label for deliberate, methodical, algorithmic types of thinking, and “System 1” the label for largely rapid, unconscious, heuristic-driven types. (Before Kahneman, a prominent view in social psychology called these “systematic” and “heuristic” processing, respectively.) Kahneman implies that cognitive biases are associated with system 1, and are constrained by system 2—or not, depending on how disposed and able people are to think in a rigorous, analytical manner.

Our work (consistent with—indeed, guided and informed by—the earlier dual process work) suggests otherwise.  We have examined how cultural cognition interacts with numeracy, a form of technical reasoning associated with system 2. What we have found (so far; work is ongoing) is that individuals who are high in numeracy are more culturally polarized than those who are low in numeracy. 

To us, this shows that those who are more adept at System 2 reasoning have a unique ability— if not a unique disposition—to search out and construe technical information in biased patterns that are congenial to their values. In effect, this is “motivated system 2 reasoning.” It is as much a form of “bias” as any mechanism of cultural cognition that operates through system 1 processes (although whether it makes sense to think of either system 1 or system 2 mechanisms of cultural cognition as “biases” is itself a complicated matter that depends on what we understand people to be trying to maximize and on how we ourselves feel about that).

It’s not clear to me that political-party identity or liberal-conservative ideology can account for motivated system 2 reasoning. Indeed, as I discussed in connection with John Bullock’s interesting work, the juxtaposition of partisan identity with measures of reasoning style like “need for cognition” seems to produce results that are simply unclear (although intriguingly so).

“Need for cognition” & other quality-of-reasoning measures that rely on self-reporting might be less helpful here than ones that rely on objective or performance-based assessments. Numeracy is one of those.

Another is Frederick’s Cognitive Reflection Test (CRT), which is quickly coming to be recognized as the best indicator of system-2 disposition & ability.

In some new analyses of data collected by the Cultural Cognition Project, I looked at how CRT measures (a subcomponent of our numeracy scale) relates to the cultural worldview measures.  I found that Hierarchy and Individualism were both correlated with CRT— but that they had opposite signs— positive in the case of Hierarchy, negative in the case of Individualism.

I also found that a scale that reliably combined Republican party affiliation/conservative ideology (α = 0.75) was correlated with CRT in the positive direction.  This is probably not the association one would expect, btw, if one subscribes to the “asymmetry” thesis, which sees political conflict over risk and related facts as linked to reasoning deficiencies unique to conservative thought.

And the package of correlations doesn’t bode well for any one-dimension left-right measure as a foundation for explaining risk perception & science communication.  For if System 2 reasoning does have special significance for the sort of conflict that we see over climate change, nuclear power, etc., then a one-dimensional measure that merges Hierarchy & Individualism into a generic “conservativism” will be insensitive to the potentially divergent relationships these dispositions have with the system 2 reasoning style.

Enough! (for now anyway)

part 1

part 3


Cultural vs. ideological cognition, part 1

In our study of cultural cognition, we use a two-dimensional scheme to measure the group values that we hypothesize influence individuals’ perceptions of risk and related facts. The dimensions, Hierarchy-Egalitarianism (“Hierarchy”) and Individualism-Communitarianism (“Individualism”), are patterned on the framework of the “cultural theory of risk” associated with the work of Mary Douglas and Aaron Wildavsky. Because they are cross-cutting or orthogonal, they can be viewed as defining four cultural worldview quadrants: Hierarchy-individualism (HI); Hierarchy-communitarianism (HC); Egalitarian-individualism (EI); and Egalitarian-communitarianism (EC).

Often we are asked why we don’t just use the more familiar political measures like “liberal-conservative” ideology or Democratic-Republican party affiliation.  I am going to give a three-part answer to this question in a sequence (likely continuous) of posts.

Part 1: Two dimensions dominate one

We started this project as an effort to cash out the cultural theory of risk, so not surprisingly the first part of the answer is just an elaboration of the argument that Aaron Wildavsky made for using Douglas’s scheme rather than liberal-conservative ideology as a measure of individual differences in political psychology. Wildavsky conjectured that Douglas’s two dimensions would explain more controversies, more coherently, than a one-dimensional left-right measure.

Our work and that of others seems to bear that out. It’s true that Hierarchy and Individualism are both modestly correlated (in the vicinity of 0.4 for the former and 0.25 for the latter) with political conservatism. But the cross-cutting Hierarchy and Individualism dimensions can often capture divisions of belief that evade the simple one-dimensional spectrum of liberal-conservative ideology (or of Republican-Democrat party identity), particularly where conflicts pit the EI quadrant against the HC one:

  • In one study, e.g., we found that the cultural worldviews, but not liberal-conservative ideology or political party, predicted disagreement over facts relating to the costs and benefits of “outpatient commitment laws,” which mandate that mentally ill persons submit to psychiatric treatment, including anti-psychotic medication, as a condition of avoiding involuntary commitment.
  • We’ve also found that the HC-EI division better explains divisions of opinion, particularly among women, on abortion-procedure health risks.         
  • In an experimental study of perceptions of a videotaped political protest, we also found that the cultural worldview measures painted a more discerning and dramatic picture of group disagreements than did party affiliation or ideology.

In addition, the explanatory power of political party affiliation and ideology tends to be very sensitive to individuals’ level of political knowledge or sophistication. They work fine for those who are high in knowledge or sophistication (as political scientists measure it) but not for those who are moderate or low.

Wildavsky was aware of this and surmised that the culture measures would do a better job, because cultural cues are more readily accessible to the mass of ordinary citizens than are argumentative inferences drawn from the abstract concepts that pervade ideological theories.

Our work seems to bear out this part of Wildavsky’s argument, too. The culture measures, we have found, explain divisions even among individuals who are relatively low in sophistication when ideology and party can’t.

The goal is to generate a reasonably tractable scheme that explains and predicts risk (and related facts), and generates policy prescriptions and other interventions that improve people’s ability to make sense of risk. A one-dimensional scheme — like liberal-conservative ideology — is very tractable, very parsimonious, but we agree with Wildavsky that the greater explanatory, predictive, and prescriptive power associated with a two-dimensional cultural scheme is well worth the manageable level of complexity that it introduces.

part 2

part 3


Do experts use cultural cognition?

In our studies, we examine how ordinary persons -- that is, non-experts -- form perceptions of risk & related facts. But I get asked all the time whether I think the same dynamics affect how experts form their perceptions. I dunno -- we haven't studied that.

But of course I have conjectures.

BTW, "conjecture" is a great word when used in the manner Popper had in mind: to describe a position for which one doesn't have the sort of direct evidence one would like and could get from a properly designed study, but which one believes in provisionally on the basis of evidence that supports related matters & subject to even better proof of a direct or indirect kind. Of course, every belief should be provisional & subject to more & better proof. But it organizes one's own thoughts & attention to be able to separate the beliefs one feels really do need to be shored up from ones that seem sufficiently grounded that one needn't spend lots of time on them. Also, if people know which of their beliefs to regard as conjectures & habituate themselves to acknowledge that status in discussion with others who do the same, then they can all speak more freely and expansively, in ways that might help them (maybe by creating excitement or motivation) to obtain better evidence, & without worry that they will mislead or confuse one another.

So -- is expert decisionmaking subject to cultural cognition? 

Yes. And No.

Yes, because to start, experts use processes akin to cultural cognition to reason about the matters on which they are experts. Those processes reflect sensitivity to cues that individuals use to orient themselves within groups they depend on for access to reliable information; they are built into the capacity to figure out whom to trust about what.  

What is different about experts and lay people in this regard -- what makes the former experts  -- is only the domain-specificity of the sensibilities that the expert has acquired in his or her area of expertise, which allow the expert to form an even more reliable apprehension of the content of shared knowledge within his or her group of experts.

The basis of this conjecture is an account of how professionalization works -- as a process that endows practitioners with bridges of meaning across which they transmit shared prototypes to one another that help them to recognize what is true, appropriate & so forth. My favorite account of this is Margolis's in Patterns, Thinking, and Cognition. Llewellyn called the kind of professional insight enjoyed by lawyers & judges "situation sense."

Maybe, then, we should think of this a kind of professional cultural cognition. Obviously, when experts use it,  they are not likely to make mistakes or to fall into conflict. On the contrary, it is by virtue of being able to use this professional cultural cognition -- professional habits of mind, in Margolis's words --that they are able reliably to converge on expert understanding.

Now a bit of No: Experts when they are making expert judgments in this way are not using cultural cognition of the sort that nonexpert lay people are using in our studies. Cultural cognition in this sense is a recognition capacity -- made up of prototypes and bridges of meaning -- that ordinary people who share a way of life use to access and transmit common knowledge. One of things they use it for is to apprehend the state of expert knowledge in one or another domain; lay people have to use their "cultural situation sense" for that precisely b/c they don't have the experts' professional cultural cognition.

Still, laypersons' cultural situation sense doesn't usually lead to error or conflict either. Ordinary people are experts at figuring out who the experts are and what it is that they know; if ordinary people weren't good at that, they would lead miserable lives, as would the experts.

When lay people do end up in persistent disagreement with experts, though, the reason might well be incommensurabilities in their respective systems of cultural cognition. In that case, the two of them -- experts and lay people -- both lack access to the common bridges of meaning that would allow what experts or professionals see w/ their prototypes to assume a form recognizable in the public's eye as a marker of expert insight. This is another Margolis-based conjecture, one I take from his classic Dealing with Risk: Why the Public and Experts Disagree on Environmental Issues.

Lay people can also fall into conflict as a result of cultural cognition. This happens when the diverse groups that are the sources of cultural cognition assign antagonistic meanings (or prototypes) to matters that admit of expert investigation. When that happens, the sensibilities that ordinarily enable lay people to know whom to trust about what become unreliable; the signals they pick up about who the experts are & what they know are masked and distorted by a sort of interference. This sort of problem is the main thing that I understand our studies of cultural cognition to be about.

More generally, the science of science communication, of which the study of cultural cognition is just one part, refers to the self-conscious development of the specialized habits of mind -- shared prototypes and bridges of meaning -- that will enable expert understanding of layperson/expert misunderstandings & public conflicts over expert knowledge. The kind of professional cultural cognition we want here will allow those who acquire it not only to understand why these pathologies occur, but also to identify what steps should be taken to treat them, and better yet prevent them from happening in the first place.

Now for some more conjectures. Yes -- scientists do use cultural cognition of the same sort as lay people.

They obviously use it in all the domains in which they aren't experts. What else could they possibly do in those situations? They might not appreciate that they are figuring out what's true by tuning in to the beliefs of those who share their values. Not only is that invisible to most of us, but it is especially likely to evade the notice of those who are intimately familiar with the contribution that their distinctive professional habits of mind make to their powers of understanding in their own domain.

We should thus expect experts -- scientists and other professionals -- to be subject to error and conflict in the same way, and to the same extent, as lay people are when they use cultural cognition to participate in knowledge (including scientific knowledge) about which they are not themselves experts.

The work of Rachlinski, Wistrich & Guthrie, e.g., suggests this: they find that judges show admirable resistance to familiar cognitive errors, but only when they are performing tasks akin to judging -- which is to say, only when they are using their domain-specific situation sense for what it is meant for.

But Rachlinski, Wistrich & Guthrie have also shown that judges can be expected systematically to err in judging tasks, too, when something in their decisionmaking environment distorts or turns off their professional habits of mind.

So on that basis, I would conjecture that experts -- scientific & professional ones -- will sometimes err, and likely fall into conflict, in making judgments in their own domains when some influence interferes with their professional cultural cognition, & they lapse, no doubt unconsciously, into reliance on their nonexpert cultural cognition.

In that situation, too, we might see experts divided along cultural lines about matters in their own fields. This is how I would explain work by Slovic & some of his collaborators (discussed, e.g., here) & by Silva & some of hers (e.g., here & here) on the power of differing worldviews and related values to explain some forms of expert disagreement. But it is notable that they always find that culture explains much less conflict among experts on matters on which they are experts than they & others have found in cases of persistent public disagreement about policy-relevant science.

So these are my conjectures. Am open to others'. And am especially interested in evidence.



The Ideological Symmetry of Motivated Reasoning

On the heels of  the John Bullock article & his amplification of it below,  the ideological neutrality of motivated reasoning came up again in an informative exchange with Howie Lavine during my recent presentation at the University of Minnesota. So I've found myself continuing to ponder the matter.

In our work, we test the hypothesis that cultural cognition -- a species of motivated reasoning that reflects the impact of group values on perceptions of fact -- is responsible for conflicts over scientific evidence on issues like climate change, the HPV vaccine, & gun control (and for conflicts over non-scientific evidence on many legal issues, too). The hypotheses assume that those on both sides of such debates are being affected by cultural cognition, and our data seem to reflect that.

But at least some social scientists have been advancing the claim that motivated reasoning in politics is more characteristic of (or maybe even unique to) conservative ideology. Essentially, these researchers are reviving the "authoritarian personality" position associated with Adorno. The most prominent of these neo-Adorno-ists is John Jost (see here, here & here, e.g.).

I tend to doubt that motivated reasoning is ideologically lopsided. What's more, I tend to believe that even if the effects are not perfectly uniform across the ideological continuum (or cultural continua; we use two dimensions of value in our work as opposed to the single "liberal-conservative" one that Jost and others use), the impact of motivated reasoning is more than large enough at both ends to be a concern for all.

But I acknowledge the issue of "motivated reasoning asymmetry" is an open one, and agree it is worth investigating.

Obviously, the investigation should consist in empirical testing. But there must also be attention to theory, which is necessary to tell us what sort of evidence is relevant, and hence how tests should be constructed and interpreted.

To that end, I offer some thoughts on a couple of the theories that might result in contrary predictions on the asymmetry thesis & what they suggest about empirical testing of that claim.

As I read Jost and others, the asymmetry position grounds motivated reasoning in a general propensity (a personality trait, essentially) toward dogmatism that tends toward a conservative (or "authoritarian") political orientation. On this account, we shouldn't expect to see motivated reasoning among liberals, whose ideology is itself a reflection of their propensity toward open-mindedness.

In contrast, the symmetry position (as reflected in cultural cognition and related theories) sees ideologically motivated reasoning as simply one species of identity-protective cognition. As developed by Sherman & Cohen, identity-protective cognition refers to the dismissive reaction that individuals form toward information that threatens the status of (or their connection to) a group that is important to their identity. "Democrat" and "Republican" (along with hierarchy and egalitarianism, communitarianism and individualism, in cultural cognition) are all group affinities of that sort, and so all create vulnerability to motivated cognition.

Simple correlations of the extent of motivated reasoning with partisan identity or ideology (or cultural worldviews) furnish the most obvious way to test the asymmetry thesis but are unlikely to be conclusive because of their modest magnitudes and their variability across studies (such asymmetries in lab studies will also raise tougher-than-usual external validity questions). One nice thing about specifying the theories in this way is that we can expand the search for evidence that gives us more or less reason to accept or reject the asymmetry thesis.

E.g., if personal self-affirmation works to reduce resistance to ideologically noncongruent information among both liberals & conservatives, Republicans & Democrats--that, in my mind, counts as reason to be skeptical of asymmetry. The effect of self-affirmation is evidence that the source of the motivated reasoning at work is identity-protective cognition; there's no reason to expect self-affirmation to have any effect in mitigating motivated reasoning that arises from a generalized disposition toward dogmatism.  And, btw, we already know self-affirmation reduces the resistance of liberal Democrats as well as conservative Republicans to ideologically noncongruent information. See here & here, for example.

Also: If we see ideologically motivated reasoning operating through sensory perception, that's a reason to be skeptical of asymmetry too. The neo-Adorno-ist dogmatic personality theory addresses responses to arguments and evidence that bear argumentatively on political positions; it is about closed-mindedness, not sensory blindness. Identity-protective cognition doesn't make any claim that self-defensiveness will be limited only to assessments of arguments, and so can fit motivated reasoning effects in sight & other senses. Research using cultural cognition has shown that motivated reasoning can generate polarization among individuals of all values when they observe video of politically charged events (e.g., abortion-clinic vs. military-recruitment center protests or high-speed police car chases).

Lastly, if we can parsimoniously assimilate motivated reasoning in politics to a larger theory of motivated reasoning, then we should prefer that account to one that posits a patchwork of local motivated reasoning dynamics of which ideologically motivated reasoning is one. Identity-protective cognition offers us that sort of parsimony: individuals are known to react defensively against information that challenges diverse group identities -- like being a fan of a particular sports team or a student at a particular university -- and not only against information that challenges partisan or ideological identities. The neo-Adorno-ist dogmatic personality theory doesn't explain that (although it does seem to me that Yankees fans are very closed-minded & authoritarian). Thus, more evidence, I think, for the symmetry position.

More but not conclusive evidence. For me, the question is, as I said, very much an open one.  Also, I don't mean to say that identity-protective cognition & the dogmatic-personality theories are the only ones to consider here.

The only point I am trying to make is that we are likely to get further in answering the question if we think about it in conjunction with theories of motivated cognition that offer competing predictions about symmetry and other things than if we just gather up studies & ponder correlations.

Or to put it more concisely, and on the basis of a (profound) truism from the philosophy of science: No theory, no meaningful observations.



Political psychology of misinformation at University of Minnesota

Did talk at this event, which was sponsored by the University of Minnesota political science department. Here are the slides (see below for a summary of what I was planning to & then did end up saying). My fellow panelists, Brendan Nyhan and Dhavan Shah, gave great talks, as did U of M's faculty commenter Paul Goren, who previewed some work he has been doing on the basic policy-choice competence of citizens who are low in political knowledge, as that concept is understood & measured in political science. It was clear that the political psychology program there, which consists of scholars from political science, communication & psychology, is radiating insight and passion.


Polluting the Science Communication Environment

I've been invited by the University of Minnesota political science department to make a presentation on the "political psychology of misinformation." Am mulling over what to say (have till 2:00 pm tomorrow, so no rush) & was thinking something along the lines of

  1. misinformation isn't really much of a problem unless antagonistic cultural meanings have become attached to an empirical claim about some fact that admits of scientific investigation;
  2. when such meanings have taken root, accurate information won't by itself do much good; and
  3. therefore the kind of misinformation to worry about is public advocacy that needlessly ties policy-relevant factual issues to antagonistic cultural meanings. 

Climate change is the obvious example of 3: hierarchical-individualist activists warn that concerns over it are a smoke screen to conceal a plot to overthrow capitalism, while egalitarian-communitarian ones proffer climate change as evidence of the destructiveness of capitalist greed that necessitates severe restrictions on technology & markets. The positions are reciprocal -- by supplying vivid examples of exactly the mindset the other fears, each one actually advances the other's cause at the same time that it advances its own.

But nanotechnology risk concern furnishes an even nicer example, I think. It is, of course, sensible to investigate whether nanotechnology is hazardous, but at this point at least there's no meaningful scientific evidence that it is. Yet that hasn't stopped some advocacy groups from noisily clanging the alarm bells. Indeed, one sponsored a contest for the "best nano-free zone" symbol, with the winner to be emblazoned on t-shirts, bumper stickers, etc. The contest drew some 482 entrants.

Eighty percent of the public hasn't even heard of nanotechnology yet. This is a great way to make sure that their first exposure connects nanotechnology up with politicized issues like climate change and nuclear power. This strategy for creating cultural polarization, CCP found in an experimental study, has an excellent chance of succeeding. Good to think ahead, too, since eventually climate change, like nuclear power, might lose its power to divide -- and then who would need the "public interest" groups dedicated to protecting us from the prospect that our cultural enemies will erect their worldview into a political orthodoxy?!

This might not be "misinformation" in the sense that the symposium sponsors have in mind -- but it is the sort of behavior that makes the public receptive to misinformation and impervious to sound science.  It is a toxin, really, in the communication environment that democracies depend on for reliable transmission of scientific knowledge to their citizens.


Democratic v. Republican Cognition

Had a chance to look closely at the fascinating paper Elite Influence on Public Opinion in an Informed Electorate, American Political Science Review 105, 496-515 (2011), by my colleague John Bullock over in the Yale political science dep't.

The principal finding of the studies reported in the article is that members of the public who identify themselves as Democrats and Republicans (it is important to recognize that 30% or so do not; they are independents or others) are guided less by partisan cues (in the form of the positions of elites with recognizable partisan identities) than by policy substance when considering new policy proposals. This is contrary to the usual account of mass opinion found in political science.

But to me, at least, the most interesting finding was one relating to "need for cognition" (NFC), a measure of the individual disposition to engage in open-minded and effortful engagement with information. The idea that partisan cues guide opinion predicts that cues will be even more important for low NFC individuals, who tend to use heuristic reasoning (System 1, in Kahneman's terms), than for high NFC ones, who can be expected to use systematic reasoning (Kahneman's System 2). Bullock found this pattern in Democrats -- that is, the ones who were high in NFC paid even more attention to policy content and less to cues than Democrats who were low in NFC. But he found the opposite for Republicans: ones who were high in NFC paid more attention to cues and less to policy content. This was totally unexpected by Bullock, who, in line with his hypothesis that reliance on cues was overstated, expected NFC not to matter very much (it appeared not to matter at all -- but only if one ignored the interaction with party).
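This "opposite effects cancel out" pattern is a general statistical phenomenon, and a toy simulation makes it concrete. The sketch below uses made-up data, not Bullock's (the effect sizes and variable names are my assumptions purely for illustration): if NFC cuts cue reliance among Democrats and boosts it among Republicans by the same amount, a regression that pools the two parties shows roughly no NFC effect at all.

```python
# Hypothetical illustration (NOT Bullock's data): a null overall NFC effect
# can mask opposite-signed NFC effects within each party.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

party = rng.integers(0, 2, n)      # 0 = Democrat, 1 = Republican (simulated)
nfc = rng.normal(0, 1, n)          # standardized need-for-cognition score

# Assumed effects: higher NFC -> less cue reliance for Democrats (-0.5),
# more cue reliance for Republicans (+0.5), plus noise.
cue_reliance = np.where(party == 0, -0.5, 0.5) * nfc + rng.normal(0, 1, n)

def slope(x, y):
    """OLS slope of y on x (first coefficient of a degree-1 fit)."""
    return np.polyfit(x, y, 1)[0]

overall = slope(nfc, cue_reliance)                      # pooled: ~0
dem = slope(nfc[party == 0], cue_reliance[party == 0])  # negative
rep = slope(nfc[party == 1], cue_reliance[party == 1])  # positive

print(f"overall: {overall:+.2f}, Democrats: {dem:+.2f}, Republicans: {rep:+.2f}")
```

The pooled slope lands near zero even though each party's slope is substantial, which is why testing the NFC × party interaction (rather than the NFC main effect alone) is what reveals the pattern.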

What sort of (admittedly post hoc) interpretation might we place on this finding? Some might see it as supporting the position that ideologically motivated reasoning is more characteristic of conservatives than liberals. John Jost advances this argument in many papers, and Chris Mooney apparently argues for it in his forthcoming book, which I'm eager to read. Democrats, on this view, are thinking things through, while Republicans reflexively adhere to ideological cues.

I don't find the "motivated reasoning asymmetry thesis" convincing. It seems to me that the balance of the evidence on politically motivated reasoning (including our own work on cultural cognition; see, e.g. "Saw a Protest") suggests that the tendency to fit perceptions of fact to one's ideological predispositions is pretty much uniform across the political spectrum (or in our work, cultural spectra). 

Bullock's finding -- as truly fascinating as it is -- is in fact ambiguous in this regard. It does seem that high NFC Democrats are paying more attention to information content than high NFC Republicans, who are focusing instead on cues. But it is question begging (or in the case of the asymmetry thesis, conclusion assuming) to think that Republicans are thus displaying motivated reasoning. Indeed, since the ones in question are high in NFC, why imagine that the Republican study subjects are processing information heuristically -- or unconsciously fitting their positions to cues or anything else -- when they go with the partisan elite's position? It is possible that the high NFC Democrats and the high NFC Republicans are both using systematic (conscious, high-effort) information processing -- but for different ends. Democrats might be more interested in trying to figure out which position fits their values best, in which case those with high NFC would turn their attention to information content rather than being guided (consciously or unconsciously) by partisan cues. Republicans, in contrast, might place more value on taking the position that expresses their identity or advances their group's ends, in which case those high in NFC would consciously view the position of party elites as the more important piece of information.

It is true that Republicans would be "more partisan" on this account (one could also say Democrats are more "ideological" in some sense -- that is, more focused on advancing their values than on promoting the cause of their party). Maybe some would think that is an unattractive thing (I'm not sure; I think ideological zealotry can also be worrying in many contexts). 

But the point is that one could not, on this account, say Republicans are more prone to motivated reasoning.  We can't say because we don't know what they (or the Democrats) are trying to get out of the information here.

This point generalizes: it is impossible to say anything about the quality of cognition that individuals display unless one knows what they are trying to accomplish. Too often in psychology, individuals who are using heuristic processing or even motivated systematic reasoning are viewed as irrational when in fact those forms of information processing are reliably advancing their interest in adopting stances that express their group identities. This is the main point of our paper on the "tragedy of the risk-perception commons" and political conflict over climate change.

In any case, I hope Bullock is motivated (consciously or otherwise) to investigate further.



Talk at AGU 2011 Conference

Did my talk on “Cultural Cognition, Climate Change, and the Science Communication Problem”  at AGU annual meeting in SF today. Slides here.

The panel was lots of fun & the other panelists —    including USA Today’s excellent science reporter Dan Vergano, ocean scientist and marine sexologist Ellen Prager, and Molly Bentley of the Big Picture Science show — gave great talks & were really interesting to talk to. It was also an amazing honor to be involved in an AGU-sponsored event.