
The impact of "science consensus" surveys -- a graphic presentation

I am really really tired of this topic & am guessing everyone else is too. And for reasons stated in last couple of posts, I think a "market consensus" measure of belief in global warming would be a much more helpful way to measure and communicate the weight & practical importance of scientific evidence on climate change than any number of social science surveys of scientists or of scientific papers (I think we are up to 7 now).

But since I had occasion to construct this graphic to help a group of professional science communicators assess whether the failure to communicate scientific consensus can plausibly be viewed as the source of persistent cultural polarization over climate change in the US, I thought I'd post it.  I've included some "stills," but watch it in slide show mode if you want to get the nature of the empirical proof it embodies.

And here are the answers to the predictable questions:

1. Does that mean "scientific consensus" is irrelevant?


No. People of all cultural outlooks support policies they believe are consistent with scientific consensus.

But they have to figure out what scientific consensus is, which means they have to assess any evidence that is presented to them on that.

In the current climate of polarization, members of opposing cultural groups predictably credit and discredit such evidence in patterns that reinforce their belief that the scientific consensus is in fact consistent with the position that predominates in their cultural group.

Until the antagonistic cultural meanings that motivate this selective crediting and discrediting of evidence are dispelled, just flooding the information market with more and more studies of "scientific consensus" won't do any good.

Indeed, it will only amplify the signal of cultural contestation that sustains polarization. 

Meanings first, then facts.

2. Does this mean we should ignore people who are misinforming the public?


No. But it means that just "correcting" misinformation won't work unless you convey affirming meanings.  

Indeed, in a state of polarized meanings, rapid-response "truth squads" also amplify polarization because they reliably convey the meaning "this is what your side believes -- and we think you are stupid!"

Meanings first, then facts!

3. Does this mean we should just give up?


No. The only thing anyone should give up is a style of communicating "facts" -- or anything else -- that amplifies the message that positions on climate are part of an "us-them" cultural struggle.   

The reason the US and many other liberal democracies are polarized on climate change is not that people are science illiterate or over-rely on heuristic-driven reasoning processes. It isn't that they haven't been told that human CO2 emissions increase global temperatures. It isn't that they are being exposed to biased news reports or misled by misinformation campaigns. And it certainly isn't that no one has advised them yet about the numerous studies finding "97% of scientists ..." agree that human activity is causing climate change.

The reason is that we inhabit a science communication environment polluted with toxic partisan meanings on climate change.

Conveying to people -- a large segment of the population in the US & in other countries too-- that accepting evidence on climate change means accepting that members of their cultural community are stupid or corrupt is itself a form of science-communication pollution.  

If you don't think that many ways of communicating "facts" (including the extent of scientific consensus on climate change) convey that meaning, then you just aren't paying attention.

If you think there's no way to communicate facts that avoids conveying this meaning, and in fact affirms the identity of culturally diverse people, you aren't thinking hard enough.


Now, getting back to disgust: we've done guns & drones; what about *vaccines*?

In a temporary triumph over entropy, I happened upon this really interesting paper -- actually, it's a book chapter -- by philosopher Mark Navin.

Navin uses an interpretive, conjectural style of analysis, mining the expression of anti-vaccine themes in popular discourse.  

I think he is likely overestimating the extent of public concern about vaccines. As Seth Mnookin has chronicled, there is definitely an "anti-vaccine" subculture, and it is definitely a menace--particularly when adherents of it end up concentrated in local communities. But they are a tiny, tiny minority of the population. Childhood vaccination rates have been 90-95% (depending on the vaccine), & exemptions from vaccination under 1%, for many, many years without any meaningful change.

But I don't think this feature of the paper is particularly significant or casts doubt on Navin's extraction of the dominant moral/emotional themes that pervade anti-vaccine discourse.  Disgust--toward puncturing of the body with needles and the introduction of foreign agents into the blood; toward the aspiration to substitute fabricated and self-consciously managed processes for the ones that "nature" has created for governing human health (including nurturing and protection by mothers)--unmistakably animates the sentiments of the vaccine opponents, historical and contemporary, whom Navin surveys.

There are two cool links between Navin's account & the themes explored in my previous posts.  One is the degree to which the evaluative orientation in these disgust sensibilities cannot be reduced in a satisfactory way to a "conservative" ideology or "moral" outlook.

Navin cites some popular works that suggest that anti-vaccine sentiment is correlated with a "left wing" or "liberal" political view. I've never seen any good evidence of this & the idea that something as peculiar -- as boutiquey -- as being anti-vaccine correlates w/ any widespread cultural style strikes me as implausible. But it is clear enough from Navin's account that the distinctive melange of evaluative themes that inform "disgust" with vaccines are not the sorts of things we'd expect to come out of the mouth of a typical political conservative (or typical anything, really).

This feature of the analysis is in tension with the now-popular claim in moral psychology-- associated most conspicuously with Jonathan Haidt and to a lesser degree with Martha Nussbaum -- that "disgust" is a peculiarly or at least disproportionately "conservative" moral sentiment as opposed to a "liberal" one  (frankly, I think it is odd to classify people in these ways, given how manifestly non-ideological the average member of the public is!). That was a point I was stressing in my account of the role of disgust in aversion to guns (and maybe drones, too!).

The second interesting element of Navin's account is the relationship between disgust and perceptions of harm.  Navin notes that in fact those disgusted by vaccines inevitably do put primary emphasis on the argument that vaccines are inimical to human health.  They rely on "evidence" to make out their claim. But almost certainly what makes them see harm in vaccines -- what guides them selectively to credit and discredit evidence that vaccines poison humans and weaken rather than bolster immunity -- is their disgust with the cultural meaning of vaccines.

This point, too, I think is in tension with the contemporary moral psychology view that sees "liberals" as concerned with "harm" as opposed to "purity," "sanctity" etc.  

The alternative position -- the one I argued for in my previous posts -- is that the moral sensibilities of "liberals" are guided by disgust every bit as much as those of "conservatives," who are every bit as focused, consciously speaking, on "harm" as "liberals" are.  Both see harm in what disgusts them -- and then seek regulation of such behavior or such activities as a form of harm prevention.  What distinguishes "liberals" and "conservatives" is only what they find disgusting, a matter that reflects their adherence to opposing cultural norms.

Although the people Navin is describing aren't really either "liberals" or "conservatives" -- and in fact don't subscribe to cultural norms that are very widespread at all in contemporary American society -- his account supports the claim that disgust is in fact a universal moral sentiment, and one that universally informs perceptions of harm.

In this respect, he is aligned with William Miller and Mary Douglas, both of whom he draws on.

Cool paper -- or book chapter!  Indeed, I'm eager to find & read the rest of the manuscript.


Money talks, & without the bias of cultural cognition: so why not listen?

Logic of prediction markets explained by professional science communicators

Great ongoing conversation following the last post, on how market behavior furnishes an alternative to social science surveys of scientist opinion or of the scientific literature as measures of the weight & practical importance of science relating to climate change.  Urge others to join in, & those participating to continue.

Basically the point is this: 

1. A reflective person could understandably be uncertain how to assess the weight of scientific evidence on climate change and its practical impact (indeed, anyone who professes not to understand this proves only that he or she is not reflective).

2. Such a person can't reasonably be expected to see a social scientist's opinion survey of natural scientists or literature survey of peer-reviewed articles as settling the matter. In constructing the sample for such a survey, the social scientist has to make a judgment about which scientists or which scientific papers to include in the sample. Evaluating the adequacy of the sample-inclusion criteria used for that purpose will confront a reasonable person with issues as open to dispute as the ones that he or she would have had to resolve to assess the weight and practical significance of scientific evidence on climate change. Indeed, many of the issues will be exactly the same.

3. However, a reasonable person would see an index of securities (and like instruments) whose value depends on global warming actually occurring as helpful evidence in such circumstances. Market actors are economically, not ideologically, motivated. Moreover, cognitive biases are likely to cancel out, leaving only the signal associated with informed assessments, by multiple rational and self-interested actors, of the weight and practical importance of the best available evidence on climate. Indeed, such a person could treat movement in the value of such instruments in relation to the publication of scientific papers or the issuance of IPCC reports, etc., as a measure of the soundness of those scientific assessments.

Here's another thing:

If reasonable people see that other reasonable people, including ones whose priors are different from theirs, are also willing to treat such an index as a relevant source of evidence that gives them reason to adjust their priors in one way or another (& who don't make the science-illiterate mistake of thinking that "evidence" "proves" things as opposed to supplying reason for treating a hypothesis as more or less likely to be true than one otherwise would have estimated), they'll be able to observe evidence of how many people are willing to proceed in this open-minded way. 

That evidence not only allows them to adjust their priors about how many people are like that; it also supplies them, as emotional and moral reciprocators, w/ reason to contribute to the common good of being a person of exactly that sort, modeling for the rest of humanity how sensible people w/ different perceptions about a matter subject to empirical investigation should proceed.
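The conception of evidence at work here -- something that supplies reason for treating a hypothesis as more or less likely, rather than "proving" it -- is just Bayes' rule in odds form. A minimal sketch in Python; the priors and the likelihood ratio are invented for illustration, and no actual climate-linked index is assumed to exist:

```python
# Bayesian updating in odds form: evidence never "proves" a hypothesis;
# it shifts how likely the hypothesis is relative to one's prior estimate.

def update(prior_prob: float, likelihood_ratio: float) -> float:
    """Posterior probability after observing evidence with likelihood
    ratio P(evidence | H) / P(evidence | not-H)."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Two observers with different priors about serious warming impacts, both
# treating a rise in a (hypothetical) climate-linked securities index as
# evidence with a likelihood ratio of 3 in favor of that hypothesis:
print(round(update(0.2, 3.0), 2))  # skeptic:  0.2 -> 0.43
print(round(update(0.8, 3.0), 2))  # believer: 0.8 -> 0.92
```

Both observers move in the same direction; their differing priors survive, but neither treats the evidence as "proof."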

Maybe this would catch on?

So let's listen to the money people and let them lead us into a love-filled, harmonious world.

BTW, if such an index already exists, I wouldn't be surprised. I'd be surprised if it didn't.  So anyone who knows where to find it, please speak up.  

The index, btw, has to consist in securities (and the like) that reflect economic opportunities created by global warming.

It cannot include economic opportunities created by government policies to promote carbon-reduction.  That market will reflect expectations about political forces, not natural ones (a matter that might be interesting but that isn't probative of beliefs in whether climate change will occur--only in what sorts of things will occur in democratic politics, which is governed by its own peculiar laws).

Please join the discussion -- in the comment thread for the "97% of insurance companies -- & hedge funds-- agree!" post.


More market consensus on climate change: 97% of insurance companies agree (& hedge funds too!)

This is by no means the only example of "market consensus" on climate change.  

At the same time that members of the insurance industry are taking action to mitigate their losses (by promoting adaptation; the "mitigation"/"adaptation" distinction is one of the many infelicities of climate-change speak), other commercial actors are eagerly leaping at the chance to profit from new economic opportunities, including, ironically, exploitation of oil reserves that can be accessed more readily as polar ice caps melt.

Why isn't this activity exploited more aggressively for communication by those trying to promote public engagement with climate change? Those who doubt the scientific consensus--either because they think it is being calculated incorrectly by social scientists who use one or another method to measure it or because they think climate scientists are biased by ideology, group think, or research-funding blandishments--presumably ought to find the opinion of market actors, who are putting their money where their mouth is (actually, they don't talk much; they are too busy investing), more probative?

The answer, I conjecture, tells us something about the motivations--mainly unconscious, of the cultural cognition sort--of those on both sides of the debate.

Too many climate-change advocates have a hard time seeing/using evidence of this sort because it involves mining insight (as it were; new mining opportunities are also being created by melting permafrost) from the rationality of market behavior, not to mention recognizing that climate change does in fact involve a balance of positive and negative effects, even if on balance it is negative.  

At the same time, too many climate skeptics are unwilling to acknowledge evidence of any sort--even the truth-corroborating price signal of self-interested market behavior!--that lends credence to the scientific underpinnings of those who are making the case for effective collective action to avoid the myriad welfare-threatening upshots of a warming earth. So this evidence doesn't register on them either.
Might this be it?

If so, I suppose we should look on the bright side: the two sides are agreeing on something, even if it is simply to ignore one and the same piece of evidence on account of it not fitting their respective worldviews.

On the science communication value of communicating "scientific consensus": an exchange

So either (1) I am a genius in communication after all (P = 0.03), having provoked John Cook and Scott Johnson to offer thoughtful reflections by strategically feigning a haughty outburst (I acknowledge that I expressed my frustration in a manner that I am not proud of). Or (2) Cook & Johnson are sufficiently motivated by virtuous commitment to intellectual exchange to create one notwithstanding my bad manners (P = 0.97).  

I don’t propose we conduct any sort of experiment to test these competing hypotheses but instead just avail ourselves of our good fortune.

To enable them to have an expression of my position that admits of and is worthy of reasoned response, I’ve reduced the source of my exasperation/frustration with the Cook et al. study to 4 points.  John and Scott’s replies (reflecting their points of view as a scholar of science communication and a science journalist, respectively), follow. 

What should follow that, I hope, are additional reflections and insights from others in the “comments” thread.


1. Scholarly knowledge. The Cook et al. study, which in my view is an elegantly designed and executed empirical assessment, doesn’t meaningfully enlarge knowledge of the state of scientific opinion on climate change. The authors find that 97% of the papers published in peer-reviewed journals between 1991 and 2011 “endorsed” the “scientific consensus” view that human activity is a source of global warming. They report further that a comparable percentage of scientists who authored such papers took that position....



Many thanks to Dan Kahan for the opportunity to discuss this important (and fascinating) issue of communicating the scientific consensus. I fully concur with Dan’s assertion that we need to be evidence-based in how we approach science communication. Indeed, my PhD research is focused on the very issue of attitude polarization and the psychology of consensus. The Cultural Cognition project, particularly the paper Cultural Cognition of Scientific Consensus, has influenced my experiment design. I’m in the process of analysing data that I hope will guide us towards effective climate communication.... 



Let me preface this by laying out my biases. I’m thinking about more than just this study/story, though I did cover it. (So there’s that.) I like to cover new studies, and I’d rather not hear that the hard work I put in to that end is pointless, so I’m reacting to Dan’s opinion as it relates to media coverage of studies like this. As an educator with a science background, I also have deficit model motivations—even as I understand that buckets aren’t lining up to be filled and that many are equipped with strainers and sometimes check valves. I am still, in essence, a pourer of what I judge to be useful knowledge. If I didn’t think that was the case, I’m not sure why I’d be trying to communicate (unless it somehow made for lucrative reality television, I guess)....



Cultural resistance to the science of science communication

I’m in Norway. Just stepped off the plane in fact.

Am going to be giving an address at a conference sponsored by the Center for International Climate and Environmental Research in Oslo. The conference is for professional science communicators (mainly ones associated with universities), and the topic is how to promote effective public dissemination of and engagement with the IPCC's 5th Assessment Report, which will be released officially in October.

Obviously, I will stress that it all comes down to making sure the public gets the message that  the IPCC report reflects “scientific consensus.”

Actually, I will try to communicate something that is very hard to make clear.

When I have the opportunity (and privilege) to address climate scientists and professional science communicators, I often feel that I’m deflating them a bit by advising them that I don’t believe that what scientists say—independently of what they do—is of particular consequence in the formation of public opinion. The average American can’t name a Supreme Court Justice. Say “James Hansen” and he or she is more likely to select “creator of the Muppets” than “climate scientist” on a multiple choice quiz.  Anyone who thinks things could or should be otherwise, moreover, doesn’t have a clue what it is like to be a normal, average, busy person.

There are some genuinely inspired citizen scientist communicators in our society. But to expect them to bear the burden of fixing the science communication problem betrays a naïve—and pernicious—model of how science is communicated.

What’s known to science becomes known to ordinary people—ones to whom what science knows can in fact be quite vital—through a dense network of cultural intermediaries. Moreover, in pluralistic liberal democracies (which are in fact the only types of society in which science can flourish), there will necessarily be a plurality of such networks operating to inform a diverse array of groups whose members share distinctive cultural commitments.

These networks by and large all do a great job. Any that didn’t—any that consistently misled its members about what’s known to science—wouldn’t last long, given the indispensable contribution scientific knowledge makes to human welfare.

The spectacle of cultural conflict over what’s known to science is a pathology—both in the sense of being inimical to human well-being and in the sense of being rare. The number of health- and policy-relevant scientific insights on which there is conflict akin to that over climate science is miniscule relative to the vast number on which there isn’t.

Something has to happen—something unusual—to invest a particular belief about some otherwise mundane issue of fact with cultural meanings that express one’s membership in and loyalty to a particular group.

But once that happens, the value that an ordinary member of the public gets from persisting in a belief that signifies his or her group commitments will likely far outweigh any personal cost from being mistaken. Clearly this is so for climate change: nothing an ordinary person believes about the science of climate change will have any impact on the climate—or any impact on policies to offset any adverse impact human activity might be having on it—because he or she just doesn’t matter enough (as consumer, as voter, as “public deliberator”) to have any impact. But if he or she takes the “wrong” position relative to the one that signifies loyalty to his or her cultural group, the amount of suffering that person has to endure can be immense.

The pathology of cultural conflict over a societal risk like climate change can’t be effectively treated, then, by radiating the patient with a bombardment of “facts.”

It can be treated only with the creation of pluralistic meanings. What needs to be communicated is that the facts on climate change, whatever they might be, are perfectly consistent with the cultural commitments of all the diverse groups that inhabit a pluralistic liberal democracy.  No one has to choose between believing them (or believing anything whatsoever about them) and being who one is as a person with a particular cultural identity.

As I said, communicating this point about science communication is difficult.  Not so much because the ideas or the concepts—or the evidence that shows they are more than a just-so story—are all that hard to explain.

The problem has to do with a kind of cultural resistance to the message that communicating science is about protecting the conditions in which the natural, spontaneous social certification of truth can be expected to happen.

The culture that resists this message, moreover, is not that of “hierarchical individualists” or “egalitarian communitarians.”

It’s the culture of the Liberal Republic of Science, of which we are all citizens.

Nullius in verba.  It’s so absurd! Yet so compelling. So much who we are.


What is to be done? 

A thoughtful commentator sent me this email:

I was reading through the sublinks [in Andy Revkin's "The Other Science Gap" column] with interest tonight, but also growing frustration-- as in I can understand and agree with you and others focusing on the role partisanship and social cognitive barriers play, but I am a guy who lives in the trenches and wants to know--are there any solutions? I urged my climate law students this month to be advocates and not give up despite all the pessimistic news, and I keep speaking out at conferences and in articles on steps to get more clean energy more quickly--but it often seems like way too little and increasingly too late. What do you say to students and other young people about how to work to change climate change's momentum and trends?

By way of background (just a tiny bit), the occasion for the query is the "here we go again" exasperated response to the new study that corroborates years and years of previous studies finding that there is a scientific consensus -- consistently calculated by a variety of methods as 97% of scientists, peer-reviewed articles, etc. -- that human activity is the cause of climate change.

The exasperation, of course,  is not over the content of the study; it is over the fallacious inference that communicating the "97% of scientists believe ..." message is an effective way to dispel public controversy over climate change.  

If it were, then the controversy would have been solved by now.  "Scientific consensus" has been the dominant theme of climate communication for the better part of a decade.  And cultural polarization over that time has not abated--it has only intensified.

Empirical studies aimed at trying to make sense of this phenomenon have concluded that the reason the public remains divided on “scientific consensus” isn’t that they haven’t been exposed to evidence on the matter but rather that, when they are exposed to evidence of what experts believe, they selectively credit or discredit it in patterns that reflect and reinforce their perception that scientific consensus is consistent with the position that predominates in their cultural or ideological group.

The exuberance with which the latest "97%" study has been greeted by many of those who want to promote constructive engagement with climate science reflects a distressing resistance to taking in the more general "scientific consensus" that exists among science of science communication researchers: neither a deficit in knowledge of facts -- ones relating to the science of climate as well as ones relating to the extent of scientific consensus -- nor a deficit in the ability to make sense of scientific information is the source of continuing conflict over climate change.  Indeed, members of the public who are the most science literate and numerate are the most polarized.
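That pattern -- polarization growing with science literacy and numeracy rather than shrinking -- is an interaction effect, not a main effect. Here is a toy sketch of what such an interaction looks like; every coefficient is invented for illustration and is not an estimate from any actual study:

```python
# Toy linear model with an interaction term: perceived climate risk as a
# function of science literacy and cultural group membership. All
# coefficients below are invented placeholders, not real estimates.

def perceived_risk(literacy: float, egalitarian: bool) -> float:
    """literacy on a 0-1 scale; returns risk perception on a 0-10 scale."""
    group = 1.0 if egalitarian else -1.0
    baseline, literacy_main, interaction = 5.0, 0.5, 3.0
    # The group-x-literacy interaction term makes the gap GROW with literacy:
    return baseline + literacy_main * literacy + interaction * group * literacy

for lit in (0.0, 0.5, 1.0):
    gap = perceived_risk(lit, True) - perceived_risk(lit, False)
    print(f"literacy={lit:.1f}  polarization gap={gap:.1f}")
```

A "knowledge deficit" account would predict the opposite sign on the interaction term -- the gap between groups shrinking as literacy rises.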

But for those who are willing to open their eyes and unblock their ears to the real-world and social-scientific evidence that a public knowledge/rationality deficit is not the problem, the question is then put, as it is by the commentator: so what is to be done?

The answer is all kinds of things. Or in any case, the same research that supports the conclusion that "fact bombardment" doesn't work is filled with findings of alternatives that work better in promoting constructive open-minded engagement with scientific information. By adroitly combining valid information with culturally affirming meanings, these communications succeed in getting people to reflectively assess evidence that they might otherwise dismiss out of hand (btw, if your goal is not simply to get people to open-mindedly consider evidence using their own powers of reason -- if you just want to make them believe something, who cares how-- you are not a science communicator; you are a propagandist).

That some think that continuing to hammer skeptics over the head with "scientific consensus" -- a style of advocacy that is more likely to intensify opposition, research shows, than ameliorate it -- because there is no alternative is part and parcel of the same puzzling evidence-resistance that explains the continuing allure of the "knowledge/rationality deficit" theory of science communication.

Actually, there are plenty of science communicators who are aware of this research and who make skillful use of it.  Katharine Hayhoe, Geoffrey Haines-Stiles, and George Marshall are among them. So one piece of advice: check out what they are doing and try to figure out how to adapt and extend it.

But here's another piece of advice: use scientific methods to test and refine communication strategies.

It's ironic that it's necessary to say this.  But it is.  It really really really is.

Not only do too many science communicators ignore evidence about what does and doesn't work.  Way way too many also shoot from the hip in a completely fact-free, imagination-run-wild way in formulating communication strategies.

If they don't rely entirely on their own personal experience mixed with introspection, they simply reach into the grab bag of decision science mechanisms (it's vast), picking and choosing, mixing and matching, and in the end presenting what is really just an elaborate just-so story on what the "problem" is and how to "solve" it.  

That's not science. It's pseudo-science.

As with most complicated matters in human affairs, there are more plausible conjectures about what the problem is than can possibly be true.  The only way to winnow them is the use of disciplined methods of observation and inference to test rival hypotheses (such as the "knowledge deficit" theory vs. "motivated reasoning," of which "cultural cognition" is a form).

But once one has used evidence-based methods to identify mechanisms that plausibly can be understood to be generating the problem, there will still be more plausible conjectures than can be true about what sort of communication strategies can be used to neutralize or turn those mechanisms around in a way that promotes constructive engagement.

The only way to extricate the latter from the vast sea of the former is through more evidence-based methods, ones aimed at reproducing in the field effects observed in the lab.  Unless we use science to identify how to communicate science, we will drown in an ocean of just-so story-telling.
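At its simplest, "reproducing in the field effects observed in the lab" means comparing outcomes across message conditions with an actual statistical test rather than introspection. A hedged sketch -- the condition labels, the counts, and the outcome measure are all invented placeholders, not data from any real trial:

```python
# Two-proportion z-test comparing two hypothetical message conditions in
# a field trial; the outcome is whether a respondent engaged open-mindedly
# with the communicated evidence. All counts are invented placeholders.
from math import erf, sqrt

def two_prop_ztest(success_a: int, n_a: int, success_b: int, n_b: int):
    """Return (z, two-sided p) for H0: the two proportions are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# "Culturally affirming framing" vs. "consensus-only messaging":
z, p = two_prop_ztest(132, 300, 104, 300)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The point is not this particular test but the discipline it enforces: pre-specify the outcome, run the comparison, and let the result -- not a just-so story -- decide which communication strategy survives.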

Those who are willing to consider real evidence on what works and what doesn't will find many answers to the "what is to be done?" question in the science of science communication.

But it is important for them to recognize that the most important thing that that science has to tell them is not what to do (indeed, be wary of cartoonish "how to" communication "manuals").

It's how to do it: by the formulation, testing, analysis, and revision of evidence-informed hypotheses.

Or simply put, by being scientific about communicating science.




More conversation -- & an announcement of my commitment to the same

There are a lot of interesting conversations going on in the comments section following my post on the new study on the extent of scientific consensus on climate change.

Indeed, it's all much more interesting than anything I "said" in the post, which I think was deficient (particularly in the material before the "update" field) in the quantity of reasoned reflection, and the quality of constructive engagement, that usually are necessary to get a worthwhile exchange of views going. So thanks to the commentators for supplying those materials.

I will have more to say, in the comments & in a follow-up post.  But there's one point that I do want to make now & to "elevate" in effect.

It's that I regard the authors of the scientific consensus study as serious scholars whose work is motivated by a very appropriate synthesis of scholarly and public aims. I think it's likely they and I disagree about certain issues relating to science communication. But if so, those are the sorts of disagreements that people with a shared commitment to understanding complicated matters are bound to have; indeed, they are the sorts of disagreements that are the occasion for reasoned exchange among those who recognize that their common interest in gaining knowledge is best advanced by the dialectic of conjecture and refutation that is the signature of scientific inquiry.

No one should take the manner in which I expressed myself to imply that I regard the authors as unworthy of being engaged in exactly that way.  If anyone did get that impression from how I expressed myself, then what he or she should infer is that I am not always as discriminating as I ought to be in judging the counsel of my passions.


Annual "new study" finds 97% of climate scientists believe in man-made climate change; public consensus sure to follow once news gets out

Hey! Did you hear? A new study shows that 97% of scientists believe that human activity is responsible for climate change!

We all need to be sure this new information gets reported far and wide -- not only because it is genuinely newsworthy, a true addition to what's known about the state of scientific opinion -- but also because public unawareness of this degree of consensus surely explains cultural polarization over climate change.

The ugly, demeaning, public-welfare-enervating debate will be over soon!

Why didn't anyone think of telling the public about this before now?!




Motivated reasoning & its cognates

The following is an excerpt from Kahan, D.M. Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law, Harv. L. Rev. 126, 1-77 (2011). I thought it might be useful to reproduce it here, both for its own sake and for reference (via hyperlink) in future blog entries, since many of the concepts it describes are recurring ones in my posts. This entry contains a modest number of hyperlinks; the printed version (accessible via SSRN) is amply footnoted!

1.  Generally. Motivated reasoning refers to the unconscious tendency of individuals to process information in a manner that suits some end or goal extrinsic to the formation of accurate beliefs.  They Saw a Game, a classic psychology article from the 1950s, illustrates the dynamic.  Experimental subjects, students from two Ivy League colleges, were instructed to watch a film that featured a set of controversial officiating calls made during a football game between teams from their respective schools.  What best predicted the students’ agreement or disagreement with a disputed call, the researchers found, was whether it favored or disfavored their schools’ team.  The researchers attributed this result to motivated reasoning: the students’ emotional stake in affirming their commitments to their respective institutions shaped what they saw on the tape.

The end or goal motivates cognition in the sense that it directs mental operations — in this case, sensory perceptions; in others, assessments of the weight and credibility of empirical evidence, or performance of mathematical or logical computation — that we expect to function independently of that goal or end.  Indeed, the normal connotation of “motive” as a conscious goal or reason for acting is actually out of place here.  The students wanted to experience solidarity with their institutions, but they didn’t treat that as a conscious reason for seeing what they saw.  They had no idea (or so we are to believe; one needs a good experimental design to be sure this is so) that their perceptions were being bent in this way.

Although the students in this study probably would not have been distressed to learn that their perceptions had been covertly recruited by their desire to experience solidarity, there can be other contexts in which motivated cognition subverts an actor’s conscious ends.  This might be so, for example, when a person who genuinely desires to make a fair or accurate judgment is unwittingly impelled to make a determination that favors some personal interest, pecuniary or social.

2.  Identity-Protective Cognition. The goals or needs that can motivate cognition are diverse.  They include fairly straightforward things, like a person’s financial or related interests.  But they reach more intangible stakes, too, such as one’s need to sustain a positive self-image or the desire to promote states of affairs or other goods that reflect one’s moral values.

Affirming one’s membership in an important reference group — the unconscious influence that operated on the students in the They Saw A Game experiment — can encompass all of these ends simultaneously.  Individuals depend on select others — from families to university faculties, from religious denominations to political parties — for all manner of material and emotional support.  Propositions that impugn the character or competence of such groups, or that contradict the groups’ shared commitments, can thus jeopardize their individual members’ well-being.  Assenting to such a proposition him- or herself can sever an individual’s bonds with such a group.  The prospect that people outside the group might credit this proposition can also harm an individual by reducing the social standing or the self-esteem that person enjoys by virtue of his or her group’s reputation.  Individuals thus face psychic pressure to resist propositions of that sort, generating a species of motivated reasoning known as identity-protective cognition.

Identity-protective cognition, like other forms of motivated reasoning, operates through a variety of discrete psychological mechanisms.  Individuals are more likely to seek out information that supports than information that challenges positions associated with their group identity (biased search).  They are also likely selectively to credit or dismiss a form of evidence or argument based on its congeniality to their identity (biased assimilation).  They will tend to impute greater knowledge and trustworthiness and hence assign more credibility to individuals from within their group than from without.

These processes might take the form of rapid, heuristic-driven, even visceral judgments or perceptions, but they can influence more deliberate and reflective forms of judgment as well.  Indeed, far from being immune from identity-protective cognition, individuals who display a greater disposition to use reflective and deliberative (so-called “System 2”) forms of reasoning rather than intuitive, affective ones (“System 1”) can be expected to be even more adept at using technical information and complex analysis to bolster group-congenial beliefs.

3.  Naïve Realism. Identity-protective cognition predictably impedes deliberations, negotiations, and like forms of collective decisionmaking.  When collective decisionmaking turns on facts or other propositions that are understood to bear special significance for the interests, standing, or commitments of opposing groups (for example, those who identify with the respective sides in the Israel-Palestine conflict), identity-protective cognition will predictably exaggerate differences in their understandings of the evidence.  But even more importantly, as a result of a dynamic known as “naïve realism,” each side’s susceptibility to motivated reasoning will interact with and reinforce the other’s.

Naïve realism refers to an asymmetry in the ability of individuals to perceive the impact of identity-protective cognition.  Individuals tend to attribute the beliefs of those who disagree with them to the biasing impact of their opponents’ values.  Often they are right.  In this respect, then, people are psychological “realists.”  Nevertheless, in such situations individuals usually understand their own factual beliefs to reflect nothing more than “objective fact,” plain for anyone to see.  In this regard, they are psychologically naïve about the contribution that group commitments make to their own perceptions.

Naïve realism makes exchanges between groups experiencing identity-protective cognition even more divisive.  The (accurate) perception that a rival group’s members are reacting in a closed-minded fashion naturally spurs a group’s members to express resentment — the seeming baselessness of which provokes members of the former to experience and express the same.  The intensity, and the evident polarization, of the disagreement magnifies the stake that individuals feel in defending their respective groups’ positions.  Indeed, at that point, the debate is likely to take on meaning as a contest over the integrity and intelligence of those groups, fueling the participants’ incentives, conscious and unconscious, to deny the merits of any evidence that undercuts their respective views.

4.  “Objectivity.” As naïve realism presupposes, motivated reasoning is an instance of what we commonly recognize as rationalization.  We exhort others, and even ourselves, to overcome such lapses — to adopt an appropriate stance of detachment — in settings in which we believe impartial judgment is important, including deliberations or negotiations in which vulnerability to self-serving appraisals can interfere with reaching consensus.  What most people don’t know, however, is that such admonitions can actually have a perverse effect because of their interaction with identity-protective cognition.

This is the conclusion of studies that examine whether motivated reasoning can be counteracted by urging individuals to be “objective,” “unbiased,” “rational,” “open-minded,” and the like.  Such studies find that individuals who’ve been issued this type of directive exhibit greater resistance to information that challenges a belief predominant within their defining groups.  The reason is that objectivity injunctions accentuate identity threat.  Individuals naturally assume that beliefs they share with others in their defining group are “objective.”  Accordingly, those are the beliefs they are most likely to see as correct when prompted to be “rational” and “open-minded.”  Indeed, for them to change their minds in such a circumstance would require them to discern irrationality or bias within their group, an inference fraught with dissonance.

For the same reason, emphasizing the importance of engaging the issues “objectively” can magnify naïve realism.  As they grow even more adamant about the correctness of their own group’s perspective, individuals directed to carefully attend to their own impartiality become increasingly convinced that only unreasoning, blind partisanship can explain the intransigence of the opposing group.  This view triggers the reciprocal and self-reinforcing forms of recrimination and retrenchment that are the signature of naïve realism.

5.  Cultural Cognition. Disputes set in motion by identity-protective cognition and fueled by naïve realism occupy a prominent place in our political life.  Such conflicts are the focus of the study of cultural cognition.

Cultural cognition refers to the tendency of individuals to conform their perceptions of risk and other policy-consequential facts to their cultural worldviews.  Cultural worldviews consist of systematic clusters of values relating to how society should be organized.  Arrayed along two cross-cutting dimensions — hierarchy/egalitarianism and individualism/communitarianism — these values supply the bonds of affinity groups, membership in which motivates identity-protective cognition.  People who subscribe to a relatively hierarchical and individualistic worldview, for example, tend to be dismissive of environmental risk claims, acceptance of which would justify restrictions on commerce and industry, activities they value on material and symbolic grounds.  Individuals who hold egalitarian and communitarian values, in contrast, are morally suspicious of commerce and industry, which they see as sources of social disparity and objects of noxious self-seeking.  They therefore find it congenial to believe that commerce and industry pose harms worthy of constraining regulations.  Experimental work has documented the contribution of cultural-cognition worldviews to various discrete mechanisms of motivated cognition, including biased search and assimilation, perceptions of expertise and credibility, and brute sense impressions.

Methods of cultural cognition have also been used to measure controversy over legally consequential facts.  Thus, mock jury studies have linked identity-protective cognition, motivated by the cultural worldviews, to conflicting perceptions of the risk posed by a motorist fleeing the police in a high-speed chase; of the consent of a date rape victim who said “no” but did not physically resist her assailant; of the volition of battered women who kill in self-defense; and of the use of intimidation by political protestors.  To date, however, no studies have directly tested the impact of cultural cognition on judges.

6.  Cognitive Illiberalism. Finally, cognitive illiberalism refers to the distinctive threat that cultural cognition poses to ideals of cultural pluralism and individual self-determination.  Americans are indeed fighting a “culture war,” but one over facts, not values.

The United States has a genuinely liberal civic and political culture — born not of reflective commitment to cosmopolitan ideals but of bourgeois docility.  Media spectacles notwithstanding, citizens generally don’t have an appetite to impose their worldviews on one another; they have an appetite for SUVs, big houses, and vacations to Disneyland (or Las Vegas).  Manifested in the absence of the sectarian violence that has filled human history and still rages outside the democratic capitalist world, there is effective consensus that the state should refrain from imposing a moral orthodoxy and confine policymaking to attainment of secular goods — safety, health, security, and prosperity — of value to all citizens regardless of their cultural persuasion.

As much as they agree about the ends of law, however, citizens are conspicuously — even spectacularly — factionalized over the means of attaining them.  Is the climate heating up as a result of human activity, and if so will it pose any dangers to us?  Will permitting citizens to carry concealed handguns in public increase violent crime — or reduce it?  Would a program of mandatory vaccination of schoolgirls against HPV promote their health by protecting them from cervical cancer — or undermine it by lulling them into unprotected sex, increasing their risk of contracting HIV?  Answers to questions like these tend to sharply polarize people of opposing cultural outlooks.

Divisions along these lines are not due to chance, of course; they are a consequence of identity-protective cognition.  The varying emotional resonance of risk claims across distinct cultural communities predisposes their members to find some of these claims more plausible than others, a process reinforced by the tendency of individuals to seek out and credit information from those who share their values.

Far from counteracting this effect, deliberation among diverse groups is likely to accentuate polarization.  By revealing the correlation between one or another position and one or another cultural style, public debate intensifies identity-protective pressure on individuals to conform to the views dominant within their group.

Liberal discourse norms constrain open appeals to sectarian values in debates over the content of law and policy.  But our political culture lacks any similar set of conventions for constraining the tendency of policy debates to build into rivalries among groups whose members subscribe to competing visions of the best life.  On the contrary, one of the central discourse norms employed to steer law and policymaking away from illiberal conflicts of value plays a vital role in converting secular policy debates into forms of symbolic status competition.

The injunction of liberal public reason makes empirical, welfarist arguments the preferred currency of argumentative exchange.  The expectation that participants in public deliberations will use empirical arguments tends to confine their advocacy to secular ends; it also furnishes observable proof to the advocate and her audience that her position is not founded on an ambition to use the law to impose her own partisan view of the good.

Psychologically, however, the injunction to present culturally neutral empirical grounds for one’s position has the same effect as an “objectivity” admonition.  The prospect that one’s empirical arguments will be shown to be false creates the identity-threatening risk for her that she or others will come to form the belief that her group is deluded and, in fact, committed to propositions inimical to the public welfare.  In addition, the certitude that empirical arguments convey — “it’s simply a fact that . . . ”; “how can they deny the scientific evidence on . . . ?” — arouses suspicions of bad faith or blind partisanship on the part of the groups advancing them.  Yet when members of opposing groups attempt to rebut such arguments, they are likely to respond with the same certitude, and with the same lack of awareness that they are being impelled to credit empirical arguments to protect their identities.  This form of exchange — the signature of naïve realism — predictably generates cycles of recrimination and resentment.

When policy debates take this turn, both sides know that the answers to the questions they are debating convey cultural meanings.  The positions that individuals take on whether the death penalty deters, whether deep geologic isolation of nuclear wastes is safe, whether immigration reform will boost the economy or put people out of work, and the like express their defining commitments and not just their beliefs about how the world works.  Whose answer the state credits — by adopting one or another policy — elevates one cultural group and degrades the other.  Very few citizens are moral zealots.  But to protect the status of their group and their own standing within it, moderate citizens are conscripted, against their conscious will, into a divisive struggle to control the expressive capital of law.


Bolsen, Druckman & Cook working paper addresses critical issue in Science of #Scicom: What triggers public conflict over policy-relevant science?

Here's something people interested in the science of science communication should check out:

Bolsen, T., Druckman, J. & Cook, F.L. The Effects of the Politicization of Science on Public Support for Emergent Technologies. Institute for Policy Research Northwestern University Working Paper Series, WP-13-11 (May 1, 2013). 

The paper presents an interesting study on how exposure to information on the existence of political conflict affects public attitudes toward policy-relevant science, including the interaction of such exposure with information on "scientific consensus."

I think this is exactly the sort of research that's needed to address the "science communication problem." That's the term I use to refer to the failure of valid and widely accessible science to quiet public controversy over policy-relevant facts (including risks) to which that evidence directly speaks.

Most of the research in this area examines how to dispel such conflict.  Likely this is a consequence of the salience of the climate change controversy and the impact it has had in focusing attention on the "science communication problem" and the need to integrate science-informed policymaking with the science of science communication.

But as I've emphasized before, the focus on resolving such conflict risks diverting attention from what I'd say is the even more important question of how the "science communication problem" takes root. 

The number of issues that display the science communication problem's signature form of cultural (or political) polarization is very small relative to the number of issues that could. Something explains which issues end up afflicted with this pernicious pathology and which don't. 

If we can figure out what triggers the problem, then we can examine how to avoid it. That's a smart thing to do, because it might well be easier to avoid cultural polarization than to vanquish it once it sets in.

For an illustration, consider the HPV vaccine.  As I've explained previously, the conditions that triggered the science communication problem there could easily have been anticipated and avoided. The disaster that occurred in the introduction of the vaccine stunningly illustrates the cost of failing to systematically acquire and use the insight that the science of science communication can afford.

The BDC paper is thus really heartening, because it focuses exactly on the "anticipation/avoidance" objective. It's the sort of research that we need to devise an effective science communication environment protection policy.

I'll say more about the substance of the study on another occasion, likely in connection with a recap of my Science of Science Communication course's sessions on emerging technology (which featured another excellent Druckman/Bolsen study).

But if others want to say what they think of the study -- have at it!


Is disgust "conservative"? Not in a Liberal society (or likely anywhere else)

This is a popular theme.

It is associated most prominently with the very interesting work of Jonathan Haidt, who concludes that "disgust" is characteristic of a "conservative" psychological outlook that morally evaluates behavior as intrinsically appropriate or inappropriate as opposed to a liberal one that focuses on "harm" to others.

Martha Nussbaum offers a similar, and similarly interesting account, portraying "disgust" as a sensibility that ranks people (or ways of living associated with them) in a manner that is intrinsically hierarchical.  Disgust has no role to play in the moral life of a modern democratic citizen, she concludes. 

But I can't help thinking that things are slightly more complicated -- and as a result, possibly much more interesting! -- than this.

Of course, I'm thinking about this issue because I'm at least momentarily obsessed with the role that disgust is playing in public reactions to the death of a 2-year-old girl in Kentucky, who was shot by her 5-year-old brother who was "playing" with his "Crickett," a miniaturized but authentic and fully operational .22 caliber rifle marketed under the slogan "my first gun!"

The Crickett disgusts people. Or so they say-- over & over. And I believe them. I believe not only that they are experiencing a "negative affective reaction" but that what they are feeling is disgust.  Because I am experiencing that feeling, too, and the sensibility really does bear the signature elements of disgust.

I am sickened by the images featured in the manufacturer's advertising: the beaming, gap-toothed boy discovering a Crickett when he tears open a gift-wrapped box (likely it is his birthday; "the first gun" ritual is the "bar mitzvah of the rural Southern WASP," although he is at least 3 yrs south of 13); the determined elementary school girl taking aim with the model that has the pink faux-wood stock; the envious neighbor boy ("I wish I had one!"), whose reaction is geared to fill parents with shame for putting their son at risk of being treated as an outcast (yes, their son; go ahead & buy your tomboy the pink-stock Crickett, but if she prefers, say, to make drawings or to read about history, surely she won't be mocked and derided).

These images frighten me. They make me mad.  And they also truly—literally—turn my stomach.

I want to bury the Crickett, to burn it, destroy it. I want it out of my sight, out of anyone's, because I know that it--and what it represents--can contaminate the character, corrupt it.

I'm no "conservative" and neither is anyone else whom I observe (they are all over the place) expressing disgust toward the Crickett.

But of course, this doesn’t mean "liberals" (am I one? I suppose, though what passes for “liberal” in contemporary political discourse & a lot of scholarly discourse too is so philosophically thin and so historically disconnected that it demeans a real Liberal to see the inspired moral outlook he or she has inherited made to bear the same label. More on that presently) have forgotten the harm principle.

The harm guns cause to others -- just look at the dead 2-yr-old girl in Kentucky, for crying out loud! -- not the "disgust" they feel toward them, is the reason they want to ban (or at least restrict) them!

Yes, and it's why they have historically advocated strict regulation (outright banning, if possible) of swimming pools, which are orders of magnitude more lethal for children . . . .

And why President Obama is trying so hard to get legislation passed that would get America out of the "war on drugs," the collateral damage of which includes many, many times more kids gunned down in public than died in Newtown. . . .

Look:  “liberals” want to enact background checks, ban assault rifles, prohibit carrying concealed handguns because they truly, honestly believe that these measures will reduce harm.

But they truly, honestly believe these things--despite the abundant evidence that such measures will have no meaningful impact on homicide, and are certain to do less than many many other things they ignore -- because they are disgusted by guns. 

We impute harm to what disgusts us; and we are disgusted by behavior that violates the moral norms that we hold in common with others and that define our understanding of the best way to live.

The "we" here, moreover, is not confined to "liberals."  

"Conservatives" are in the same motivated-reasoning boat. They are "disgusted" by all kinds of things--drugs, homosexuality, rap music (maybe even drones!).  But they say we should "ban"/"control" etc. such things because of the harms they cause.  

It's not characteristic of ordinary people who call themselves "conservatives"  that they see violation of "sacred" norms as a ground for punishing people independently of harm. Rather it's characteristic of them to see harm in what disgusts them. Just as "liberals" do! 

The difference between "liberals" and "conservatives" is in what they find disgusting, and hence what they see as harmful and thus worthy of legal restriction.

Or at least that is what many thoughtful scholars -- Mary Douglas, William Miller, and Roger Giner-Sorrolla, among others -- have concluded.

Our study of cultural cognition is, of course, inspired by this basic account, and although we haven't (so far) attempted to include observation and measurement of disgust or other identifiable moral sensibilities in our studies, I think our results are more in keeping with this position than with any that sees "conservativism" as uniquely bound up with "disgust" -- or with any that tries to explain the difference in the perceptions of risk of ordinary people with reference to moral styles that consciously place varying degrees of importance on "harm."

I wouldn't say, of course, that the Haidt-Nussbaum position (let's call it) has been "disproven" etc.  This work is formidable, to say the least! Whether there are differences in the cognitive and emotional processes of "liberals" and "conservatives" (as opposed to differences in the norms that orient those processes) is an important, difficult question that merits continued thoughtful investigation.

Still, it is interesting to reflect on why accounts that treat "liberals" as concerned with "harm" and "conservatives," alone, as concerned with or motivated by "disgust" are as popular as they are—not among psychologists or others who are able and who have made the effort to understand the nature of the evidence here but among popular consumers of such work who take the “take away” of it uncritically, without reflection on the strength of the evidence or cogency of the inferences to be drawn from it (this is sad; it is a reflection of a deficit in ordinary science intelligence).

Here's a conjecture: because we are all Liberals.  

I’m not using the term “Liberal” in this sense to refer to points to the left of center on the 1-dimensional right-left spectrum that contemporary political scientists and psychologists use to characterize popular policy preferences.

The Liberalism I have in mind refers to a distinctive understanding of the relationship between the individual and the state. What’s distinctive about it, in fact, is that the individual comes first. The apparatus of the state exists to secure the greatest degree of equal liberty for individuals, who aside from their obligation to abide by laws that serve that end must be respected as free to pursue happiness on terms of their own choosing.

The great mass of ordinary people who call themselves “conservatives” in the US (and in Australia, in the UK, in France, Germany, Canada . . .) are as committed to Liberalism in this sense as are those who call themselves “liberals” (although in fact, the great mass of people either don’t call themselves “conservative” or “liberal” or, if they do, don’t really have any particular coherent idea of what doing so entails). They are so perfectly and completely committed to Liberalism that they can barely conceive of what it would look like to live in a political regime with a different animating principle.

The currency of disgust is officially valueless in the Liberal state’s economy of political justification. Under the constitution of the Liberal State, the offense one group of citizens experience in observing or knowing that another finds satisfaction in a way of life the first finds repulsive is not a cognizable harm.

We all know this—better, just are this, whether or not we “know” it; it’s in the nature of a political regime to make its animating principle felt even more than “understood.” And we all honestly believe that we are abiding by this fundamental principle when we demand that behavior that truly disgusts us—the practice of same-sex or polygamous marriage, the consumption of drugs, the furnishing of a child with a “Crickett,” and the like—be prohibited not because we find it revolting but because it is causing harm.

As a result, the idea that we are unconsciously imputing “harm” selectively to what disgusts us (or otherwise offends sensibilities rooted not in our commitment to avoiding harm to others but in our commitment to more culturally partisan goods) is unsettling, and like many unsettling things a matter we tend to discount.

At the same time, the remarkable, and everywhere perfectly obvious congruence of the disgust sensibilities and perceptions of harm formed by those who hold cultural and political commitments different from our own naturally suggests to us that those others are either attempting to deceive us or are in fact deceiving themselves via a process of unconscious rationalization.

This is in fact a process well known to social psychology, which calls it “naïve realism.”  People are good at recognizing the tendency of those who disagree with them to fit their perceptions of risk and other facts related to contested policy issues to their values and group commitments. Ordinary people are realists in this sense. At the same time, they don’t readily perceive their own vulnerability to the very same phenomenon. This is the naïve part!

Here, then, people with “liberal” political outlooks can be expected to credit work that tells them that “conservatives” are uniquely ilLiberal—that “conservatives,” as opposed to “liberals,” are consciously or unconsciously evaluating behavior with a morality that is guided by disgust rather than harm.

All of this is separate, of course, from whether the work in question is valid or not. My point is simply that we can expect findings of that sort to be accepted uncritically by those whose cultural and political predispositions it gratifies.

Would this be so surprising?  The work in question, after all, is itself applying the theory of “motivated cognition,” which predicts this sort of ideologically selective assessment of the strength of empirical evidence.

Still, that motivated reasoning would generate, on the part of the public, an ideological slant in the disposition to credit evidence that ilLiberal sensibilities disproportionately guide the moral judgments of those whose ideology one finds abhorrent (disgusting, even) is, as I indicated, only a conjecture. 

In fact, I view the experiment that I performed on cognitive reflection, ideology and motivated reasoning as effectively modeling this sort of process. 

But like all matters that admit of empirical assessment, the proposition that ideologically motivated reasoning will create support for the claim that aspects of it—including the cognitive force of “disgust” in orienting perceptions of harm—are ideologically or culturally asymmetric is not something that can be conclusively established by a single empirical study. Indeed, it is not something that can ever be “conclusively” settled at all; it is a matter on which beliefs must always be regarded as provisional and revisable in light of whatever the evidence might show.

In the meantime, we can enjoy the excellent work of scholars like Haidt and Nussbaum, and the competing positions of theorists and empiricists like Miller, Douglas, and Giner-Sorolla, as compensation for having to endure the depressing spectacle of cultural polarization over matters like guns, climate change, nuclear power, the HPV vaccine, drugs, unorthodox sex practices. . . etc. etc.

(Some) references:

Douglas, M. Purity and Danger: An Analysis of Concepts of Pollution and Taboo. (Praeger, New York; 1966).

Giner-Sorolla, R. & Chaiken, S. Selective Use of Heuristic and Systematic Processing Under Defense Motivation. Pers Soc Psychol B 23, 84-97 (1997).

Giner-Sorolla, R., Chaiken, S. & Lutz, S. Validity beliefs and ideology can influence legal case judgments differently. Law Human Behav 26, 507-526 (2002).

Graham, J., Haidt, J. & Nosek, B.A. Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology 96, 1029-1046 (2009).

Gutierrez, R. & Giner-Sorolla, R. Anger, disgust, and presumption of harm as reactions to taboo-breaking Behaviors. Emotion 7, 853-868 (2007).

Haidt, J. & Graham, J. When Morality Opposes Justice: Conservatives Have Moral Intuitions that Liberals may not Recognize. Social Justice Research 20, 98-116 (2007). 

Haidt, J. & Hersh, M.A. Sexual morality: The cultures and emotions of conservatives and liberals. J Appl Soc Psychol 31, 191-221 (2001). 

Horvath, M.A.H. & Giner-Sorolla, R. Below the age of consent: Influences on moral and legal judgments of adult-adolescent sexual relationships. J Appl Soc Psychol 37, 2980-3009 (2007).

Kahan, D. Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study. CCP Working Paper No. 107 (2012).  

Kahan, D.M. The Cognitively Illiberal State. Stan. L. Rev. 60, 115-154 (2007). 

Kahan, D.M. The Progressive Appropriation of Disgust, in Critical America. (ed. S. Bandes) 63-79 (New York University Press, New York; 1999). 

Miller, W.I. The Anatomy of Disgust. (1997).

Nussbaum, M.C. Hiding from humanity: Disgust, Shame, and the Law. (Princeton University Press, Princeton, N.J.; 2004).

Robinson, R.J., Keltner, D., Ward, A. & Ross, L. Actual Versus Assumed Differences in Construal: "Naive Realism" in Intergroup Perception and Conflict. J. Personality & Soc. Psych. 68, 404-417 (1995).

Sherman, D.K., Nelson, L.D. & Ross, L.D. Naïve Realism and Affirmative Action: Adversaries are More Similar Than They Think. Basic & Applied Social Psychology 25, 275-289 (2003).


p.s. check out the great bibliography of writings by the talented and prolific psychologist Yoel Inbar.



More on "cultural availability" & the Crickett... Ignored stories of "defensive use" (by children wielding the "Crickett" no less!)

I posted something a few days ago on the "cultural availability" effect & gun accidents involving children. The "effect" consists in the impact that cultural predispositions have in "selecting" for attention events or stories that gratify rather than disappoint one's cultural predispositions on risk.

On guns, then, the individuals predisposed to see guns as risky -- egalitarian communitarians, or "ECs," for the most part -- are much more likely to take note of, assign significance to, and recall instances in which guns result in a horrific accident involving a child -- like the recent, and genuinely horrific (also heartbreakingly sad), story of the 2-year-old girl shot by her 5-year-old brother with the boy's "Crickett," a miniaturized but fully authentic and functional .22 marketed under the motto "My first gun!"

Because such stories gratify the predisposition of ECs to see guns as dangerous, they fixate on such reports. Indeed, because commercial news providers anticipate the demand of ECs to be supplied with culturally gratifying proof that behavior they find disgusting (like the significance of "the first gun" ritual for people for whom the gun is rich with positive cultural meanings) causes harm, such stories become the occasion for a media feeding frenzy.

The disproportionate attention such incidents get relative to fatal accidents that do not gratify EC risk predispositions causes ECs to overestimate the risk of guns relative to other, less culturally evocative but more actuarially significant sources of risk to children -- like swimming pools.

BTW, I'm picking on ECs only because I'm talking about gun risks here; "cultural availability" applies just as much to individuals with hierarchical individualistic -- "HI" -- & other competing cultural predispositions, and is part of what drives cultural polarization over what scientific consensus is on issues like climate change and nuclear power as well as guns.

But in any case, the same dynamics also result in ECs ignoring stories that disappoint their expectations about the risks that guns pose.  As HIs emphasize, guns also sometimes are used defensively to ward off a violent attack, and in this sense can be expected to reduce the risk of violence to vulnerable people (children, but also women and minorities, who are disproportionately victimized). 

The actual prevalence of so-called "defensive use" of guns is (unsurprisingly) a matter that is subject to considerable debate, both among gun activists & among empirical researchers.

Nevertheless, there are lots of stories out there, in the media and in social media, that fit this account.  But ECs are (the cultural availability effect predicts) much less likely to take note of, assign significance to, and recall stories that support the conclusion that guns are sometimes used to protect life, and thus likely systematically to underestimate defensive uses. They will then dismiss as specious the argument that there is this offsetting effect to take into consideration when assessing the impact of gun regulations. Of course, HIs can be expected to fixate on such stories -- with the help of an obliging media (like, say, Fox News or Fox network local affiliates) -- and thus overestimate both the frequency of defensive uses and the burden that gun regulations would place on use of guns for lawful self-defense.

Example ... This video of a news story reporting an 11-year-old girl's brave confrontation with household intruders, whom she scared off with -- you guessed it -- a Crickett (or an equivalent; it's not the only product of this sort).  One with a fetching pink rifle stock designed to appeal to girls (or to HI parents of girls eager to fight "sexism" by making roles featuring honor norms available to their daughters as well as their sons).

Brave girl defends home against intruder with Crickett! (Don't worry: it's "soft fire," the mfr tells us in its own video, meaning minimal recoil, reducing risk of shoulder separation.)

The story aired on a Fox affiliate local news program, of course! (Check out the icon for jbranstter04, who uploaded it; what do you think his -- or her? -- cultural orientation might be?)

So, ECs are unlikely to see it. If they do, they will roll their eyes and dismiss it as absurd.

But if they can get to the end-- and can force themselves to pay attention! -- they'll find (I'm sure) the bit of information that they need, too, to reconcile what they've been forced to observe with what they already know is the truth about how the world works.

It turns out the "intruders" were people who knew the family -- and who broke in to steal their cache of guns. 

Seriously, one can't invent material this good.


Who is disgusted by kids' "toy" guns & drones, and why?

I was reflecting on the "disgust and revulsion" occasioned by "the Crickett"--a (slightly) miniaturized but fully authentic, functional .22 rifle that is marketed for children ("my first rifle!"), one of which figured in the widely reported fatal shooting of a 2-year-old by her 5-year-old brother (the Crickett "owner") in Kentucky.

That got me to thinking about the links between cultural styles, the role of technological objects in expressing and propagating them, and the way in which emotions figure both in the value (or disvalue) we attach to such objects and the risks (or benefits) we see those objects as posing (or conferring)....

I thought maybe I'd write about this, but I was not sure exactly how to put things or exactly what I think anyway. Actually, those problems rarely stop me, but still, I thought I'd try something else that might both communicate my apprehension of the phenomenon and motivate others to try to help me make sense of it.


I admit that I am disgusted by the Crickett (I admit, too, that I'm slightly concerned about why, and about the challenge this reaction creates for me in trying to see things in a fair and impartial light and to deal with others in a respectful and tolerant way).  

But the Bumblebee "first drone" strikes me (so to speak) as wondrous and beautiful--and a brilliant child's toy! Indeed, I'd very much like one myself.

One of the reasons I can't get one is that it doesn't exist--yet. But I'm sure someone-- someone else who followed this week's less widely heralded reporting on the progress of Harvard University's "Robobee" project-- is working on it. (You can get the Crickett, or at least could until a couple days ago; the "newsletter" for it is real & was captured from the internet before the recent Kentucky shooting, after which the company shut down its internet site.)

At the same time, I know that the Bumblebee-- and the anticipated companion "first drones" that its manufacturer has in the works--will fill many with horror, revulsion, disgust. As a result, it will fill them with fear of all the harms--to public safety, to privacy, and to other goods--that private drones pose.

Is that part of what I like about the Bumblebee? I don't think so; I sure hope not, in fact.

But knowing they feel this way almost fills me with resolve to buy one for myself, and another two or three for holiday gifts and birthday presents for children whose families I know will want them to grow up sharing their fascination and wonder for science, technology, and human ingenuity . . . .

A while back, I posted a 2-part series "Who are these guys?," which responded to Jen Brisseli's request for a more vivid picture of the sorts of people who subscribe to the cultural styles defined by the "cultural cognition worldview" framework.  

This post is in the spirit of that, I think. Indeed, I think it is in the spirit of how Jen Brisseli wants to promote reflection on science generally with her "designing science" conception of science communication--this way of proceeding likely occurred to me b/c I have had the benefit of reflecting on what she is up to.

But now my question is this: who would be filled with appreciation and passion, and who with revulsion & disgust, by these "toys"?  And why?  Who are these guys?

In this regard -- and getting back to the form of inquiry and communication that I usually use to address such matters -- it's interesting to consider perceptions of technology risks.

In one CCP study of nanotechnology risk perceptions, we found that there was no cultural division over its risks and benefits generally. Not surprising, since 80% of the subjects had no idea what it was.

But when we exposed another group of subjects to a small amount of scientifically accurate, balanced information on nanotechnology risks and benefits, those individuals polarized along lines consistent with cultural predispositions associated with pro- and anti-technology outlooks.

The cultural group that credited the information about nanotechnology benefits and discounted the information about risks, moreover, was generally hierarchical and individualistic in orientation.  People with these outlooks are generally skeptical of environmental risks--ones relating to nuclear power and climate change, e.g.

But they also are the ones most predisposed to see gun risks as low--and see the risks associated with excessive control, including impairment of lawful self-defense, as high.  They believe, too, that empirical evidence compiled by scientists backs them up on this, and that their views on both climate change and nuclear power are also consistent with scientific consensus.

Egalitarian communitarian subjects are generally very sensitive to technology risks -- they worry a lot about both climate change and nuclear power.  

They also are sickened by guns. They find them disgusting.  And consistent with cultural cognition they see guns as extremely risky, and gun control as extremely effective--and believe that empirical evidence compiled by scientists backs them up on this, just as such evidence backs up their views about environmental and technological risks.

I bet people who buy "the Crickett" for their young children are mainly hierarchical and individualist. Does that mean they would also like the Bumblebee?

Kids having a blast with their Cricketts!

Would egalitarian communitarians, who I'm sure tend to be very disturbed by the Crickett, think the Bumblebee is also an abomination? And of course a tremendous risk to public safety and various other elements of well-being?

I sort of think that this conclusion isn't really right. That it's too simple....

"Group-grid," as my collaborators and I conceive of it at least, is a model.  All models are simplifications. Simplifying models are useful. But they also are necessarily false. 

If the insight that is enabled by simplifying complicated true things outweighs the distortion associated with what is necessarily false about simplifying them, then a model advances understanding.

But even a model that advances understanding in this way with respect to some issues or for a period of time can become one that doesn't advance understanding -- because what is false about it obscures insight into complicated things that are true -- with respect to some other set of issues, or with respect even to the same ones at a later time .... 

Anyway, I plan to keep my eye on drones.  I think they are or can be beautiful.  But I know that they also sicken and disgust others.  Who? and Why?



Who sees accidental shootings of children as evidence in support of gun control & why? The "cultural availability" effect

I don’t really like guns much.  I also hate to get wet, so rarely go swimming.

But what I do like to do -- because it is an instance of the sort of thing I study -- is think about why accidental shootings of young children (a) get so much media coverage relative to the other things that kill children; and (b) are—or, more likely, are thought to be—potent occasions for drawing public attention to the need for greater regulation of firearms.

Consider guns vs. (what else?!) swimming pools (if the comparison is trite, don’t blame me; blame the dynamics that make people keep resisting what the comparison illustrates about culture and cognition). 

  • Typically there are < 1,000 (more like 600-800) accidental gun homicides in US per yr. About 30 of those are children age 5 or under. 

I think background checks of the sort “defeated” in the US Senate (because passed by a majority that wasn’t big enough; I need a civics refresher course on how Congress works...) would be a good idea.  I also would support a ban on “assault rifles.”

But it’s obvious, to anyone who reflects on the matter if not to those who don't, that the incidence of the accidental shootings of children adds zero weight to the arguments that can be made in support of those policies.

Also obvious that neither of these policies—or any of the other even more ambitious ones that gun control advocates would like to enact (like bans on carrying of concealed weapons)-- would reduce the deaths of young kids by nearly as much as many many many other things. I’m not thinking of banning swimming pools, actually; but how about, say, ending the “war on drugs,” which indisputably fuels deadly forms of competition to reap the super-competitive profits that a black market affords?

The pool comparison, though, does show how the “culture war” over guns creates not only a very sad deformation of political discourse but also a weird selectivity of attention to empirical evidence, and a susceptibility to drawing unconvincing inferences from it.

Like I said, I like to think about these things.

One way to understand cultural cognition is that it shows how cultural values interact with more general psychological dynamics that shape perceptions of risk. 

One of these is the “availability effect,” which refers to the tendency of people to overestimate the incidence of risks involving highly salient or emotionally gripping events relative to less salient, less sensational ones.  It might explain why people seem so much more concerned about the risk of an accidental shooting of a child than about the accidental drowning of one.

But the explanation is not satisfying because it begs the question of what accounts for the selective salience of various risks—what makes some but not others gripping enough to get our attention, or to get the attention of those who make a living showing us attention-grabbing things?  Cultural cognition theory supplies an answer: the cultural congeniality of seeing instances of harm that gratify one’s cultural predispositions. 

Moreover, because predispositions are heterogeneous, we should expect the “cultural availability effect” to generate systematic differences in perceptions of risk among people with different values.  In this case, it is the people whose values predispose them to feel “revulsion and disgust” (see the news story in my graphic) that have their attention drawn to accidental shootings of children and who treat them as evidence that the failure to enact background checks, assault rifle bans, etc., is increasing homicide.

On that note, a footnote from a paper that discusses this aspect of the theory of cultural cognition:

In one scene of Michael Moore’s movie Bowling for Columbine, the “documentary” team rushes to get footage from the scene of a reported accidental shooting only to discover when they arrive that television news crews are packing up their gear. “What’s going on? Did we miss it?” Moore asks, to which one of the departing TV reporters answers, “No, it was a false alarm—just a kid who drowned in a pool.” One would suspect Moore of trying to make a point—that the media’s responsiveness to the public obsession with gun accidents contributes to the public’s inattention to the greater risk for children posed by swimming pools—if the movie itself were not such an obvious example of exactly this puzzling, self-reinforcing distortion. Apparently, it was just one of those rare moments when 1,000 monkeys mindlessly banging on typewriters (or editing film) surprise us with genuine literature.


Even *more* Q & A on "cultural cognition scales" -- measuring "latent dispositions" & the Dake alternative

Given how interesting the conversations were in the last two “Q&A” posts (here & here), I thought—heck, why not another. 

Here are a set of reflections in response to an email inquiry from a thoughtful person who wanted to understand what it means to treat the cultural worldview scales as “latent” measures of cultural dispositions, and why we—my collaborators & I in the Cultural Cognition Project—thought it necessary to come up with alternatives to the scales that Karl Dake initially formulated to test hypotheses relating to Douglas & Wildavsky’s “cultural theory of risk.” For elaboration, see Kahan, Dan M. "Cultural Cognition as a Conception of the Cultural Theory of Risk." Chap. 28 In Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk, edited by R. Hillerbrand, P. Sandin, S. Roeser and M. Peterson. 725-60: Springer London, Limited, 2012.

Question: What do you mean when you say the "cultural cognition worldview scales" measure a "latent variable"? And that they "work better" than Dake's scales in this regard?

My answer:

(A) Let's hypothesize that there is inside each member of a group an unobserved & unobservable thing -- which we'll call that group's cultural predisposition -- that interacts with the mental faculties and processes by which that person processes information in a way that tends to bring his or her perceptions of risk into alignment with those of every other member of the group. This would be an explanation (or part of one, at least) for "the science communication problem"-- the failure of valid, compelling, widely available scientific evidence to resolve political conflict over risks and other facts to which that evidence speaks.

(B) Although we can't observe cultural dispositions directly, we might still be able to make valid inferences about their existence & nature by identifying observable things that we would expect to correlate with them if the predispositions exist and if they have the nature that we might hypothesize they do. We had reason to believe that atoms existed long before they were "seen" under a scanning tunneling microscope because Einstein demonstrated that their existence would very precisely explain the observable (and until then very mysterious!) phenomenon of Brownian motion (in fact, we only "see" atoms with an ST microscope b/c we accept that the observable images they produce are best explained by atoms, which of course remain unobservable no matter what apparatus we use to "look" at them). Similarly, we might treat certain patterns of responses among a group's members as evidence that the predispositions exist and behave a certain way if such conclusions furnish a more likely explanation for those patterns than other potential causes and if we would not expect to see the patterns otherwise.  Within psychology, this is known as a "latent variable" measurement strategy, in which "manifest" or observable "indicators"--here the patterns of responses -- are used to measure a posited "latent" or unobserved variable --"cultural predispositions" in our case.

(C) That's what the items in our cultural worldview scales are -- indicators of the latent cultural predispositions that we hypothesize explain the science communication problem. The scales reflect a theory that people would not be expected to respond to the statements the items comprise in patterns that sort individuals out along two continuous, cross-cutting dimensions unless people had "inside" of them group predispositions that correspond to "hierarchy individualism," "hierarchy communitarianism," "egalitarian individualism," and "egalitarian communitarianism."  On this view, responses are understood to be "caused" by the predispositions. The causal influence is only crudely understood and thus only imprecisely measured by each item; the whole point of having multiple ones is to aggregate responses to them, a process that will make the "noise" associated with their imprecision balance or cancel out & thus magnify the "signal" associated with them.  The resulting scales can be viewed as "measuring" the intensity of the unobserved predispositions.

(D) For this strategy for "observing" or "measuring" cultural predispositions to be valid, various things must be true.  The most basic one is that the items assigned to the scales must "perform" as the underlying theory posits.  The responses to them must correlate with each other in ways that generate the pattern one would expect if they are indeed "measuring" the cultural predispositions.  If the items correlate in some other pattern, the scales are not a 'valid" measure of the posited dispositions.  If they correlate in the expected pattern, but the correlations are very weak, then the scales can be viewed as "unreliable," which refers to the degree of precision by which an instrument measures whatever quantity it is supposed to be measuring (imagine that your bathroom scale had some sort of defect and as a result gave readings that erratically over- or underestimated people's weight; it wouldn't be very reliable in that case).
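The noise-canceling logic of aggregation described in (C), and the notion of reliability in (D), can be seen in a toy simulation (a hypothetical sketch, not our actual scale construction: the item count, the noise level, and the use of Cronbach's alpha as the reliability statistic are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 6   # simulated respondents; items per scale

# unobserved predisposition that each item noisily "indicates"
latent = rng.normal(size=n)
# each item response = latent signal + idiosyncratic noise
items = latent[:, None] + rng.normal(scale=2.0, size=(n, k))

# aggregating responses lets the item-level noise cancel out
scale = items.mean(axis=1)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def cronbach_alpha(x):
    """Conventional reliability statistic for a set of scale items."""
    m = x.shape[1]
    return m / (m - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

print(corr(items[:, 0], latent))  # a single noisy item tracks the disposition weakly
print(corr(scale, latent))        # the aggregate scale tracks it much better
print(cronbach_alpha(items))      # reliability of the item set
```

With these made-up numbers the aggregate scale correlates with the latent disposition far more strongly than any single item does; weaken the inter-item correlations and alpha drops, which is the sense in which an item set can fail to "perform."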

(E) The Dake scales did not perform well.   They were not reliable; the items didn't correlate with *one another* as one would expect if ones placed in the same scale were measuring the same thing. Moreover, to the extent that they seemed to be measuring things "inside" people, those things did not fit the expectations one would form about their relationship under the theory posited by the "cultural theory of risk." 

(F) Once one has valid & reliable scales, one does not yet have evidence that cultural predispositions explain the science communication problem.  Rather one has measures of what one is prepared to regard as cultural predispositions.  At that point, one must devise studies geared to generating correlations between the predispositions, as measured by the valid and reliable scales, and risk perceptions, as measured in some appropriate way.  Those correlations must be of a sort that one would expect to see if the predispositions are causing risk perceptions in the way one hypothesizes but would not expect to see otherwise. 



Deja voodoo: the puzzling reemergence of invalid neuroscience methods in the study of "Democrat" & "Republican Brains"

I promised to answer someone who asked me what I think of Schreiber, D., Fonzo, G., Simmons, A.N., Dawes, C.T., Flagan, T., Fowler, J.H. & Paulus, M.P. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans, PLoS ONE 8, e52970 (2013).

The paper reports the results of an fMRI—“functional magnetic resonance imaging”— study that the authors describe as showing that “liberals and conservatives use different regions of the brain when they think about risk.” 

They claim this finding is interesting, first, because, it “supports recent evidence that conservatives show greater sensitivity to threatening stimuli,” and, second, because it furnishes a predictive model of partisan self-identification that “significantly out-performs the longstanding parental model”—i.e., use of the partisan identification of individuals’ parents.

So what do I think?  Not much, frankly.

Actually, I think less than that: the paper supplies zero reason to adjust any view I have—or anyone else does, in my opinion—on any matter relating to individual differences in cognition & ideology.

To explain why, some background is necessary.

About 4 years ago the burgeoning field of neuroimaging experienced a major crisis. Put bluntly, scores of researchers employing fMRI for psychological research were using patently invalid methods—ones the defects in which had nothing to do with the technology of fMRIs but rather with really simple, basic errors relating to causal inference.

The difficulties were exposed—and shown to have been present in literally dozens of published studies—in two high profile papers: 

1.   Vul, E., Harris, C., Winkielman, P. & Pashler, H. Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition, Perspectives on Psychological Science 4, 274-290 (2009); and

2.   Kriegeskorte, N., Simmons, W.K., Bellgowan, P.S.F. & Baker, C.I. Circular analysis in systems neuroscience: the dangers of double dipping, Nature Neuroscience 12, 535-540 (2009).

The invalidity of the studies that used the offending procedures (ones identified by these authors through painstaking detective work, actually; the errors were hidden by the uninformative and opaque language then typically used to describe fMRI research methods) is at this point beyond any dispute.

Not all fMRI studies produced up to that time displayed these errors. For great ones, see any done (before and after the crisis) by Joshua Greene and his collaborators.

Today, moreover, authors of “neuroimaging” papers typically take pain to explain—very clearly—how the procedures they’ve used avoid the problems that were exposed by the Vul et al. and Kriegeskorte et al. critiques. 

And again, to be super clear about this: these problems are not intrinsic to the use of fMRI imaging as a technique for testing hypotheses about mechanisms of cognition. They are a consequence of basic mistakes about when valid inferences can be drawn from empirical observation.

So it’s really downright weird to see these flaws in a manifestly uncorrected form in Schreiber et al.

I’ll go through the problems that Vul et al. & Kriegeskorte et al. (Vul & Kriegeskorte team up here) describe, each of which is present in Schreiber et al.

1.  Opportunistic observation. In an fMRI, brain activation (in the form of blood flow) is measured within brain regions identified by little three-dimensional cubes known as “voxels.” There are literally hundreds of thousands of voxels in a fully imaged brain.

That means there are literally hundreds of thousands of potential “observations” in the brain of each study subject. Because activation levels are constantly varying throughout the brain at all times, one can always find “statistically significant” correlations between stimuli and brain activation by chance. 
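The scale of the problem is easy to demonstrate with a simulation (my own illustrative numbers -- 30 subjects and 100,000 pure-noise voxels -- not Schreiber et al.'s data): correlate every voxel of a "brain" that responds to nothing with an equally random trait, and thousands of voxels still come out "significant" at p < .05.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 30, 100_000

# pure-noise "brain": by construction, no voxel truly tracks the trait
activation = rng.normal(size=(n_subjects, n_voxels))
ideology = rng.normal(size=n_subjects)  # e.g., a partisanship score

# correlation of every voxel's activation with the trait, all at once
z = (activation - activation.mean(axis=0)) / activation.std(axis=0)
w = (ideology - ideology.mean()) / ideology.std()
r = z.T @ w / n_subjects

# rough large-sample threshold for |r| at p < .05, two-tailed
threshold = 1.96 / np.sqrt(n_subjects)
hits = int((np.abs(r) > threshold).sum())
print(hits)  # thousands of "significant" voxels from pure chance
```

Uncorrected search across voxels therefore "finds" activated regions almost by construction; that is why pre-specified ROIs and multiple-comparison corrections are non-negotiable.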

This was amusingly illustrated by one researcher who, using then-existing fMRI methodological protocols, found the region that a salmon cleverly uses for interpreting human emotions.  The salmon was dead. And the region it was using wasn’t even in its brain.

Accordingly, if one is going to use an fMRI to test hypotheses about the “region” of the brain involved in some cognitive function, one has to specify in advance the “region of interest” (ROI) in the brain that is relevant to the study hypotheses. What’s more, one has to carefully constrain one’s collection of observations even from within that region—brain regions like the “amygdala” and “anterior cingulate cortex” themselves contain lots of voxels that will vary in activation level—and refrain from “fishing around” within ROIs for “significant effects.”

Schreiber et al. didn’t discipline their evidence-gathering in this way.

They did initially offer hypotheses based on four precisely defined brain ROIs in "the right amygdala, left insula, right entorhinal cortex, and anterior cingulate."

They picked these, they said, based on a 2011 paper (Kanai, R., Feilden, T., Firth, C. & Rees, G. Political Orientations Are Correlated with Brain Structure in Young Adults. Current Biology 21, 677-680 (2011)) that reported structural differences—ones, basically, in size and shape, as opposed to activation—in these regions of the brains of Republicans and Democrats.

Schreiber et al. predicted that when Democrats and Republicans were exposed to risky stimuli, these regions of the brain would display varying functional levels of activation consistent with the inference that Republicans respond with greater emotional resistance, Democrats with greater reflection. Such differences, moreover, could also then be used, Schreiber et al. wrote, to "dependably differentiate liberals and conservatives" with fMRI scans.

But contrary to their hypotheses, Schreiber et al. didn’t find any significant differences in the activation levels within the portions of either the amygdala or the anterior cingulate cortex singled out in the 2011 Kanai et al. paper. Nor did Schreiber et al. find any such differences in the other precisely defined areas—the right entorhinal cortex and the left insula—that Kanai et al. identified as differing structurally between Democrats and Republicans in ways that could suggest the hypothesized differences in cognition.

In response, Schreiber et al. simply widened the lens, as it were, of their observational camera to take in a wider expanse of the brain. “The analysis of the specific spheres [from Kanai et al.] did not appear statistically significant,” they explain, “so larger ROIs based on the anatomy were used next.”

Using this technique (which involves creating an “anatomical mask” of larger regions of the brain) to compensate for not finding significant results within more constrained ROI regions specified in advance amounts to a straightforward “fishing” expedition for “activated” voxels.

This is clearly, indisputably, undeniably not valid.  Commenting on the inappropriateness of this technique, one commentator recently wrote that “this sounds like a remedial lesson in basic statistics but unfortunately it seems to be regularly forgotten by researchers in the field.”

Even after resorting to this device, Schreiber et al. found “no significant differences . . .  in the anterior cingulate cortex,” but they did manage to find some "significant" differences among Democrats' and Republicans' brain activation levels in portions of the “right amygdala” and "insula."

2.  “Double dipping.” Compounding the error of opportunistic observation, fMRI researchers—prior to 2009 at least—routinely engaged in a practice known as “double dipping.” After searching for and zeroing in on a set of “activated” voxels, the researchers would then use those voxels, and only those, to perform the statistical tests reported in their analyses.

This is obviously, manifestly unsound.  It is akin to running an experiment, identifying the subjects who respond most intensely to the manipulation, and then reporting the effect of the manipulation only for them—ignoring subjects who didn’t respond or didn’t respond intensely. 

Obviously, this approach grossly overstates the observed effect.
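How grossly? The toy simulation below (my own illustration of the general point, not a reanalysis of any paper) gives 1,000 “subjects” a true effect of exactly zero and then reports the effect using only the 100 strongest responders—the double-dipped “effect” comes out large even though nothing is there.

```python
import random

random.seed(2)
# 1,000 "subjects" whose true response to the manipulation is zero:
# every measured effect is pure noise
effects = [random.gauss(0, 1) for _ in range(1000)]

# honest analysis: use everyone
overall_mean = sum(effects) / len(effects)

# "double dipping": select the strongest responders, then report the
# effect size using only those same observations
top = sorted(effects, reverse=True)[:100]
selected_mean = sum(top) / len(top)

print(round(overall_mean, 2), round(selected_mean, 2))
# honest mean is near zero; the double-dipped mean is large
```

Selection and testing on the same data manufactures an effect out of noise; only an independent dataset can tell you whether anything real is going on.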

Despite this being understood since at least 2009 as unacceptable (actually, I have no idea why something this patently invalid appeared okay to fMRI researchers before then), Schreiber et al. did it. The “[o]nly activations within the areas of interest”—i.e., the expanded brain regions selected precisely because they contained voxel activations differing among Democrats and Republicans—that were “extracted and used for further analysis,” Schreiber et al. write, were the ones that “also satisfied the volume and voxel connection criteria” used to confirm the significance of those differences.

Vul called this technique “voodoo correlations” in a working paper version of his paper that got (deservedly) huge play in the press. He changed the title—but none of the analysis or conclusions in the final published version, which, as I said, now is understood to be 100% correct.

3.  Retrodictive “predictive” models. Another abuse of statistics—one that clearly results in invalid inferences—is to deliberately fit a regression model to voxels selected for observation because they display the hypothesized relationship to some stimulus and then describe the model as a “predictive” one without in fact validating the model by using it to predict results on a different set of observations.

Vul et al. furnish a really great hypothetical illustration of this point, in which a stock market analyst correlates changes in the daily reported morning temperature of a specified weather station with daily changes in value for all the stocks listed on the NYSE, identifies the set of stocks whose daily price changes are highly correlated with the station's daily temperature changes, and then sells this “predictive model” to investors. 

This is, of course, bogus: there will be some set of stocks from the vast number listed on the exchange that highly (and "significantly," of course) correlate with temperature changes through sheer chance. There’s no reason to expect the correlations to hold going forward—unless (at a minimum!) the analyst, after deriving the correlations in this completely ad hoc way, validates the model by showing that it continued to successfully predict stock performance thereafter.
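Vul et al.'s analogy can be simulated directly. The sketch below (a toy version of their hypothetical, with made-up parameters) screens 2,000 pure-noise “stocks” for high correlation with a noise “temperature” series, then checks how the selected stocks do on fresh data: the in-sample correlations are high by construction, and the out-of-sample ones collapse toward zero.

```python
import math
import random

random.seed(3)
days, n_stocks = 30, 2000
r_crit = 0.45  # ad hoc cutoff for a "highly correlated" stock

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

temp_train = [random.gauss(0, 1) for _ in range(days)]
temp_test = [random.gauss(0, 1) for _ in range(days)]

picked_in, picked_out = [], []
for _ in range(n_stocks):
    # each "stock" is an independent noise series in both periods
    train = [random.gauss(0, 1) for _ in range(days)]
    test = [random.gauss(0, 1) for _ in range(days)]
    r_in = abs(pearson(temp_train, train))
    if r_in > r_crit:  # "discovered" by searching the whole exchange
        picked_in.append(r_in)
        picked_out.append(abs(pearson(temp_test, test)))

print(len(picked_in),
      round(sum(picked_in) / len(picked_in), 2),    # high, by selection
      round(sum(picked_out) / len(picked_out), 2))  # near zero out of sample
```

The “model” looks spectacular on the data it was fished out of and predicts nothing afterward—which is the whole point of requiring validation on an independent dataset.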

Before 2009, many fMRI researchers engaged in analyses equivalent to what Vul describes. That is, they searched around within unconstrained regions of the brain for correlations with their outcome measures, formed tight “fitting” regressions to the observations, and then sold the results as proof of the mind-blowingly high “predictive” power of their models—without ever testing the models to see if they could in fact predict anything.

Schreiber et al. did this, too.  As explained, they selected observations of activating “voxels” in the amygdala of Republican subjects precisely because those voxels—as opposed to others that Schreiber et al. then ignored in “further analysis”—were “activating” in the manner that they were searching for in a large expanse of the brain.  They then reported the resulting high correlation between these observed voxel activations and Republican party self-identification as a test for “predicting” subjects’ party affiliations—one that “significantly out-performs the longstanding parental model, correctly predicting 82.9% of the observed choices of party.”

This is bogus.  Unless one “use[s] an independent dataset” to validate the predictive power of “the selected . . . voxels” detected in this way, Kriegeskorte et al. explain in their Nature Neuroscience paper, no valid inferences can be drawn. None.

BTW, this isn’t a simple “multiple comparisons problem,” as some fMRI researchers seem to think.  Pushing a button in one’s computer program to tighten one’s “alpha” (the p-value threshold, essentially, used to avoid “type 1” errors) only means one has to search a bit harder; it still doesn’t make it any more valid to base inferences on “significant correlations” found only after deliberately searching for them within a collection of hundreds of thousands of observations.

The 2011 Kanai et al. structural imaging paper that Schreiber et al. claim to be furnishing “support” for didn’t make this elementary error. I’d say “to their credit,” except that such a comment would imply that researchers who use valid methods deserve “special” recognition. Of course, using valid methods isn’t something that makes a paper worthy of some special commendation—it’s normal, and indeed essential.

* * *

One more thing:

I did happen to notice that the Schreiber et al. paper seems pretty similar to a 2009 working paper they put out.  The main difference appears to be an increase in the sample size from 54 to 82 subjects.

Also some differences in the reported findings: in their 2009 working paper, Schreiber et al. report greater “bilateral amygdala” activation in Republicans, not “right amygdala” only.  The 2011 Kanai paper that Schreiber et al. describe their study as “supporting”—which, of course, was published after Schreiber et al. collected the data reported in their 2009 working paper—found no significant anatomical differences in the “left amygdala” of Democrats and Republicans.

So, like I said, I really don’t think much of the paper.

What do others think?



Look, everybody: more Time-Sharing Experiments for the Social Sciences (TESS)!

Below a very welcome announcement from Jeremy Freese and Jamie Druckman -- & forwarded to me by Kevin Levay -- on the continued funding of TESS, which administers accepted study designs free of charge to a stratified on-line sample.

Actually, I'm going to do a post soon -- very soon! -- on use of on-line samples (& in particular on growing use of Mechanical Turk). Suffice it to say that if you can get a study conducted by TESS, you've got yourself an A1 sample -- for free!!

We are pleased to announce that Time-Sharing Experiments for the Social Sciences (TESS) was renewed for another round of funding by NSF starting last Fall. TESS allows researchers to submit proposals for experiments to be conducted on a nationally-representative, probability-based Internet platform, and successful proposals are fielded at no cost to investigators.  More information about how TESS works and how to submit proposals is available at 

Additionally, we are pleased to announce the development of two new proposal mechanisms. TESS’s Short Studies Program (SSP) is accepting proposals for fielding very brief population-based survey experiments on a general population of at least 2000 adults. SSP recruits participants from within the U.S. using the same Internet-based platform as other TESS studies. More information about SSP and proposal requirements is available at 

TESS’s Special Competition for Young Investigators is accepting proposals from June 15th-September 15th. The competition is meant to enable younger scholars to field large-scale studies and is limited to graduate students and individuals who are no more than 3 years post-Ph.D. More information about the Special Competition and proposal requirements is available at 

For the current grant, the principal investigators of TESS are Jeremy Freese and James Druckman of Northwestern University, who are assisted by a new team of over 65 Associate PIs and peer reviewers across the social sciences. More information about our APIs is available at

James Druckman and Jeremy Freese

Principal Investigators, TESS


"Yes we can--with more technology!" A more hopeful narrative on climate?

Andy Revkin (the Haile Gebrselassie of environmental science journalism) has posted a guest-post on his blog by Peter B. Kelemen, the Arthur D. Storke Professor and vice chair in the Department of Earth and Environmental Sciences at Columbia University.

The essay combines two themes, basically.

One is the "greatest-thing-to-fear-is-fear-itself" claim: apocalyptic warnings are paralyzing and hence counterproductive; what's needed to motivate people is "hope."

That point isn't developed much in the essay but is a familiar one in the risk communication literature -- and is often part of the goldilocks dialectic that prescribes "use of emotionally compelling images" but "avoidance of excessive reliance on emotional images" (I've railed against goldilocks many times; it is a pseudoscience story-telling alternative to the real science of science communication).

But the other theme, which is the predominant focus and which strikes me as really engaging and intriguing, is that in fact "apocalypse" is exceedingly unlikely given the technological resourcefulness of human beings.

We should try to figure out which human behaviors generate adverse climate impacts and modify them with feasible technological alternatives that themselves avoid economic and like hardships, Kelemen argues. Plus, to the extent that we decide to continue engaging in behavior that has adverse impacts, we should anticipate that we will also figure out technological means of offsetting or dealing with those impacts.

Kelemen focuses on carbon capture, gas-fired power plants, etc.

The policy/science issues here are interesting and certainly bear discussion.

But what captures my interest, of course, is the "science communication" significance of the "yes we can--with more technology" theme.  Here are a couple of points about it:

1. This theme is indeed likely to be effective in promoting constructive engagement with the best evidence on climate change.  The reason isn't that it is "hopeful" per se but that it avoids antagonistic meanings that trigger reflexive closed-mindedness on the part of individuals--a large segment of the population, in fact-- who attach high cultural value to human beings' technological resourcefulness and resilience.

from Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

CCP has done two studies on how making technological responses to climate change -- such as greater reliance on nuclear power and exploration of geoengineering -- more salient helps to neutralize dismissive engagement with and thus reduce polarization over climate science.

These studies, by the way, are not about how to make people believe particular propositions or support particular policies (I don't regard that as "science communication" at all, frankly).  The outcome measures involve how reflectively and open-mindedly subjects assess scientific evidence.

2. Nevertheless, the "yes we can--with technology" theme is also likely to generate a push-back effect. The fact is that "apocalyptic" messaging doesn't breed either skepticism or disengagement with that segment of the population that holds egalitarian and communitarian values. On the contrary, it engages and stimulates them, precisely because (as Douglas & Wildavsky argue) it is suffused with cultural meanings that fit the moral resentment of markets, commerce, and industry.

For exactly this reason, individuals with these cultural dispositions predictably experience a certain measure of dissonance when technological "fixes" for climate impacts are proposed: "yes we can--with technology" implies that the solution to the harms associated with too much commerce, too great a commitment to markets, too much industrialization etc is not "game over" but rather "more of the same."  

Geoengineering and the like are "liposuction" when what we need is to go on a "diet."

How do these dynamics play out?

Well, of course, the answer is, I'm not really sure. 

But my conjecture is that the positive contribution that the "yes we can--with technology" narrative can make to promoting engagement with climate science will offset any push-back effect. Most egalitarian communitarians are already plenty engaged with the issue of climate and are unlikely to "tune out" if technological responses other than carbon limits become an important part of the conversation.  There will be many commentators who conspicuously rail against this narrative, but their reactions are not a good indicator of how the "egalitarian communitarian" rank and file are likely to react. Indeed, pushing back too hard, in a breathless, panicked way, will likely make such commentators appear weirdly zealous and thus undermine their credibility with the largely nonpartisan mass of citizens who are culturally disposed to take climate change seriously.

Or maybe not. As I said, this is a conjecture, a hypothesis.  The right way to figure the question out isn't to tell stories but rather to collect evidence that can help furnish an answer.


How many times do I have to explain?! "Facts" aren't enough, but that doesn't mean anyone is "lying"!

Receiving email like this is always extremely gratifying, of course, because it confirms for me that our "cultural cognition" research is indeed connecting with a large number of culturally diverse people. At the same time, it is frustrating to see how these readers fundamentally misunderstand our studies. I guess when you are so deeply caught up in a culturally contested question like this one, it is just really hard to get that screaming "the facts! the facts! Stop lying!!!" isn't going to promote constructive public engagement with the best available scientific evidence.