
Motivated reasoning & its cognates

The following is an excerpt from Kahan, D.M. Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law, Harv. L. Rev. 126, 1-77 (2011). I thought it might be useful to reproduce it here, both for its own sake and for reference (via hyperlink) in future blog entries, since many of the concepts it describes are recurring ones in my posts. This entry contains a modest number of hyperlinks; the printed version (accessible via SSRN) is amply footnoted!

1.  Generally. Motivated reasoning refers to the unconscious tendency of individuals to process information in a manner that suits some end or goal extrinsic to the formation of accurate beliefs.  They Saw a Game, a classic psychology article from the 1950s, illustrates the dynamic.  Experimental subjects, students from two Ivy League colleges, were instructed to watch a film that featured a set of controversial officiating calls made during a football game between teams from their respective schools.  What best predicted the students’ agreement or disagreement with a disputed call, the researchers found, was whether it favored or disfavored their schools’ team.  The researchers attributed this result to motivated reasoning: the students’ emotional stake in affirming their commitments to their respective institutions shaped what they saw on the tape.

The end or goal motivates cognition in the sense that it directs mental operations — in this case, sensory perceptions; in others, assessments of the weight and credibility of empirical evidence, or performance of mathematical or logical computation — that we expect to function independently of that goal or end.  Indeed, the normal connotation of “motive” as a conscious goal or reason for acting is actually out of place here.  The students wanted to experience solidarity with their institutions, but they didn’t treat that as a conscious reason for seeing what they saw.  They had no idea (or so we are to believe; one needs a good experimental design to be sure this is so) that their perceptions were being bent in this way.

Although the students in this study probably would not have been distressed to learn that their perceptions had been covertly recruited by their desire to experience solidarity, there can be other contexts in which motivated cognition subverts an actor’s conscious ends.  This might be so, for example, when a person who genuinely desires to make a fair or accurate judgment is unwittingly impelled to make a determination that favors some personal interest, pecuniary or social.

2.  Identity-Protective Cognition. The goals or needs that can motivate cognition are diverse.  They include fairly straightforward things, like a person’s financial or related interests.  But they reach more intangible stakes, too, such as one’s need to sustain a positive self-image or the desire to promote states of affairs or other goods that reflect one’s moral values.

Affirming one’s membership in an important reference group — the unconscious influence that operated on the students in the They Saw A Game experiment — can encompass all of these ends simultaneously.  Individuals depend on select others — from families to university faculties, from religious denominations to political parties — for all manner of material and emotional support.  Propositions that impugn the character or competence of such groups, or that contradict the groups’ shared commitments, can thus jeopardize their individual members’ well-being.  An individual’s own assent to such a proposition can sever his or her bonds with the group.  The prospect that people outside the group might credit this proposition can also harm an individual by reducing the social standing or the self-esteem that person enjoys by virtue of his or her group’s reputation.  Individuals thus face psychic pressure to resist propositions of that sort, generating a species of motivated reasoning known as identity-protective cognition.

Identity-protective cognition, like other forms of motivated reasoning, operates through a variety of discrete psychological mechanisms.  Individuals are more likely to seek out information that supports than information that challenges positions associated with their group identity (biased search).  They are also likely selectively to credit or dismiss a form of evidence or argument based on its congeniality to their identity (biased assimilation).  They will tend to impute greater knowledge and trustworthiness and hence assign more credibility to individuals from within their group than from without.

These processes might take the form of rapid, heuristic-driven, even visceral judgments or perceptions, but they can influence more deliberate and reflective forms of judgment as well.  Indeed, far from being immune from identity-protective cognition, individuals who display a greater disposition to use reflective and deliberative (so-called “System 2”) forms of reasoning rather than intuitive, affective ones (“System 1”) can be expected to be even more adept at using technical information and complex analysis to bolster group-congenial beliefs.

3.  Naïve Realism. Identity-protective cognition predictably impedes deliberations, negotiations, and like forms of collective decisionmaking.  When collective decisionmaking turns on facts or other propositions that are understood to bear special significance for the interests, standing, or commitments of opposing groups (for example, those who identify with the respective sides in the Israel-Palestine conflict), identity-protective cognition will predictably exaggerate differences in their understandings of the evidence.  But even more importantly, as a result of a dynamic known as “naïve realism,” each side’s susceptibility to motivated reasoning will interact with and reinforce the other’s.

Naïve realism refers to an asymmetry in the ability of individuals to perceive the impact of identity-protective cognition.  Individuals tend to attribute the beliefs of those who disagree with them to the biasing impact of their opponents’ values.  Often they are right.  In this respect, then, people are psychological “realists.”  Nevertheless, in such situations individuals usually understand their own factual beliefs to reflect nothing more than “objective fact,” plain for anyone to see.  In this regard, they are psychologically naïve about the contribution that group commitments make to their own perceptions.

Naïve realism makes exchanges between groups experiencing identity-protective cognition even more divisive.  The (accurate) perception that a rival group’s members are reacting in a closed-minded fashion naturally spurs a group’s members to express resentment — the seeming baselessness of which provokes members of the former to experience and express the same.  The intensity, and the evident polarization, of the disagreement magnifies the stake that individuals feel in defending their respective groups’ positions.  Indeed, at that point, the debate is likely to take on meaning as a contest over the integrity and intelligence of those groups, fueling the participants’ incentives, conscious and unconscious, to deny the merits of any evidence that undercuts their respective views.

4.  “Objectivity.” As naïve realism presupposes, motivated reasoning is an instance of what we commonly recognize as rationalization.  We exhort others, and even ourselves, to overcome such lapses — to adopt an appropriate stance of detachment — in settings in which we believe impartial judgment is important, including deliberations or negotiations in which vulnerability to self-serving appraisals can interfere with reaching consensus.  What most people don’t know, however, is that such admonitions can actually have a perverse effect because of their interaction with identity-protective cognition.

This is the conclusion of studies that examine whether motivated reasoning can be counteracted by urging individuals to be “objective,” “unbiased,” “rational,” “open-minded,” and the like.  Such studies find that individuals who’ve been issued this type of directive exhibit greater resistance to information that challenges a belief predominant within their defining groups.  The reason is that objectivity injunctions accentuate identity threat.  Individuals naturally assume that beliefs they share with others in their defining group are “objective.”  Accordingly, those are the beliefs they are most likely to see as correct when prompted to be “rational” and “open-minded.”  Indeed, for them to change their minds in such a circumstance would require them to discern irrationality or bias within their group, an inference fraught with dissonance.

For the same reason, emphasizing the importance of engaging the issues “objectively” can magnify naïve realism.  As they grow even more adamant about the correctness of their own group’s perspective, individuals directed to carefully attend to their own impartiality become increasingly convinced that only unreasoning, blind partisanship can explain the intransigence of the opposing group.  This view triggers the reciprocal and self-reinforcing forms of recrimination and retrenchment that are the signature of naïve realism.

5.  Cultural Cognition. Disputes set in motion by identity-protective cognition and fueled by naïve realism occupy a prominent place in our political life.  Such conflicts are the focus of the study of cultural cognition.

Cultural cognition refers to the tendency of individuals to conform their perceptions of risk and other policy-consequential facts to their cultural worldviews.  Cultural worldviews consist of systematic clusters of values relating to how society should be organized.  Arrayed along two cross-cutting dimensions — hierarchy/egalitarianism and individualism/communitarianism — these values supply the bonds of affinity groups, membership in which motivates identity-protective cognition.  People who subscribe to a relatively hierarchical and individualistic worldview, for example, tend to be dismissive of environmental risk claims, acceptance of which would justify restrictions on commerce and industry, activities they value on material and symbolic grounds.  Individuals who hold egalitarian and communitarian values, in contrast, are morally suspicious of commerce and industry, which they see as sources of social disparity and objects of noxious self-seeking.  They therefore find it congenial to believe that commerce and industry pose harms worthy of constraining regulations.  Experimental work has documented the contribution of cultural-cognition worldviews to various discrete mechanisms of motivated cognition, including biased search and assimilation, perceptions of expertise and credibility, and brute sense impressions.
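The two-dimensional worldview scheme described above can be sketched in code. This is a toy illustration only: the function name, score ranges, and midpoint cutoff are hypothetical conveniences, not the Cultural Cognition Project's actual survey scales or scoring procedure.

```python
# Toy sketch of the two cross-cutting worldview dimensions.
# Scores are assumed (hypothetically) to run from -1 to +1:
#   hierarchy axis:      -1 = egalitarian,    +1 = hierarchical
#   individualism axis:  -1 = communitarian,  +1 = individualistic

def classify_worldview(hierarchy_score, individualism_score):
    """Place a respondent in one of four worldview quadrants."""
    h = "Hierarchical" if hierarchy_score >= 0 else "Egalitarian"
    i = "Individualist" if individualism_score >= 0 else "Communitarian"
    return f"{h} {i}"

print(classify_worldview(0.6, 0.7))    # Hierarchical Individualist
print(classify_worldview(-0.4, -0.2))  # Egalitarian Communitarian
```

The point of the quadrant structure is that the two dimensions cut across each other: a hierarchical individualist and an egalitarian communitarian occupy diagonally opposed cells, which is where risk-perception polarization (e.g., over environmental regulation) is predicted to be sharpest.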

Methods of cultural cognition have also been used to measure controversy over legally consequential facts.  Thus, mock jury studies have linked identity-protective cognition, motivated by the cultural worldviews, to conflicting perceptions of the risk posed by a motorist fleeing the police in a high-speed chase; of the consent of a date rape victim who said “no” but did not physically resist her assailant; of the volition of battered women who kill in self-defense; and of the use of intimidation by political protestors.  To date, however, no studies have directly tested the impact of cultural cognition on judges.

6.  Cognitive Illiberalism. Finally, cognitive illiberalism refers to the distinctive threat that cultural cognition poses to ideals of cultural pluralism and individual self-determination.  Americans are indeed fighting a “culture war,” but one over facts, not values.

The United States has a genuinely liberal civic and political culture — born not of reflective commitment to cosmopolitan ideals but of bourgeois docility.  Media spectacles notwithstanding, citizens generally don’t have an appetite to impose their worldviews on one another; they have an appetite for SUVs, big houses, and vacations to Disneyland (or Las Vegas).  Manifested in the absence of the sectarian violence that has filled human history and still rages outside the democratic capitalist world, there is effective consensus that the state should refrain from imposing a moral orthodoxy and confine policymaking to attainment of secular goods — safety, health, security, and prosperity — of value to all citizens regardless of their cultural persuasion.

As much as they agree about the ends of law, however, citizens are conspicuously — even spectacularly — factionalized over the means of attaining them.  Is the climate heating up as a result of human activity, and if so will it pose any dangers to us?  Will permitting citizens to carry concealed handguns in public increase violent crime — or reduce it?  Would a program of mandatory vaccination of schoolgirls against HPV promote their health by protecting them from cervical cancer — or undermine it by lulling them into unprotected sex, increasing their risk of contracting HIV?  Answers to questions like these tend to sharply polarize people of opposing cultural outlooks.

Divisions along these lines are not due to chance, of course; they are a consequence of identity-protective cognition.  The varying emotional resonance of risk claims across distinct cultural communities predisposes their members to find some of these claims more plausible than others, a process reinforced by the tendency of individuals to seek out and credit information from those who share their values.

Far from counteracting this effect, deliberation among diverse groups is likely to accentuate polarization.  By revealing the correlation between one or another position and one or another cultural style, public debate intensifies identity-protective pressure on individuals to conform to the views dominant within their group.

Liberal discourse norms constrain open appeals to sectarian values in debates over the content of law and policy.  But our political culture lacks any similar set of conventions for constraining the tendency of policy debates to build into rivalries among groups whose members subscribe to competing visions of the best life.  On the contrary, one of the central discourse norms employed to steer law and policymaking away from illiberal conflicts of value plays a vital role in converting secular policy debates into forms of symbolic status competition.

The injunction of liberal public reason makes empirical, welfarist arguments the preferred currency of argumentative exchange.  The expectation that participants in public deliberations will use empirical arguments tends to confine their advocacy to secular ends; it also furnishes observable proof to the advocate and her audience that her position is not founded on an ambition to use the law to impose her own partisan view of the good.

Psychologically, however, the injunction to present culturally neutral empirical grounds for one’s position has the same effect as an “objectivity” admonition.  The prospect that one’s empirical arguments will be shown to be false creates the identity-threatening risk for her that she or others will come to form the belief that her group is deluded and, in fact, committed to propositions inimical to the public welfare.  In addition, the certitude that empirical arguments convey — “it’s simply a fact that . . . ”; “how can they deny the scientific evidence on . . . ?” — arouses suspicions of bad faith or blind partisanship on the part of the groups advancing them.  Yet when members of opposing groups attempt to rebut such arguments, they are likely to respond with the same certitude, and with the same lack of awareness that they are being impelled to credit empirical arguments to protect their identities.  This form of exchange — the signature of naïve realism — predictably generates cycles of recrimination and resentment.

When policy debates take this turn, both sides know that the answers to the questions they are debating convey cultural meanings.  The positions that individuals take on whether the death penalty deters, whether deep geologic isolation of nuclear wastes is safe, whether immigration reform will boost the economy or put people out of work, and the like express their defining commitments and not just their beliefs about how the world works.  Whose answer the state credits — by adopting one or another policy — elevates one cultural group and degrades the other.  Very few citizens are moral zealots.  But to protect the status of their group and their own standing within it, moderate citizens are conscripted, against their conscious will, into a divisive struggle to control the expressive capital of law.


Bolsen, Druckman & Cook working paper addresses critical issue in Science of #Scicom: What triggers public conflict over policy-relevant science?

Here's something people interested in the science of science communication should check out:

Bolsen, T., Druckman, J. & Cook, F.L. The Effects of the Politicization of Science on Public Support for Emergent Technologies. Institute for Policy Research Northwestern University Working Paper Series, WP-13-11 (May 1, 2013). 

The paper presents an interesting study on how exposure to information on the existence of political conflict affects public attitudes toward policy-relevant science, including the interaction of such exposure with information on "scientific consensus."

I think this is exactly the sort of research that's needed to address the "science communication problem." That's the term I use to refer to the failure of valid and widely accessible science to quiet public controversy over policy-relevant facts (including risks) to which that evidence directly speaks.

Most of the research in this area examines how to dispel such conflict.  Likely this is a consequence of the salience of the climate change controversy and the impact it has had in focusing attention on the "science communication problem" and the need to integrate science-informed policymaking with the science of science communication.

But as I've emphasized before, the focus on resolving such conflict risks diverting attention from what I'd say is the even more important question of how the "science communication problem" takes root. 

The number of issues that display the science communication problem's signature form of cultural (or political) polarization is very small relative to the number of issues that could. Something explains which issues end up afflicted with this pernicious pathology and which don't. 

If we can figure out what triggers the problem, then we can examine how to avoid it. That's a smart thing to do, because it might well be easier to avoid cultural polarization than to vanquish it once it sets in.

For an illustration, consider the HPV vaccine.  As I've explained previously, the conditions that triggered the science communication problem there could easily have been anticipated and avoided. The disaster that occurred in the introduction of the vaccine stunningly illustrates the cost of failing systematically to acquire and use the insight that the science of science communication can afford.

The BDC paper is thus really heartening, because it focuses exactly on the "anticipation/avoidance" objective. It's the sort of research that we need to devise an effective science communication environment protection policy.

I'll say more about the substance of the study on another occasion, likely in connection with a recap of my Science of Science Communication course's sessions on emerging technology (which featured another excellent Druckman/Bolsen study).

But if others want to say what they think of the study -- have at it!


Is disgust "conservative"? Not in a Liberal society (or likely anywhere else)

This is a popular theme.

It is associated most prominently with the very interesting work of Jonathan Haidt, who concludes that "disgust" is characteristic of a "conservative" psychological outlook that morally evaluates behavior as intrinsically appropriate or inappropriate as opposed to a liberal one that focuses on "harm" to others.

Martha Nussbaum offers a similar, and similarly interesting, account, portraying "disgust" as a sensibility that ranks people (or ways of living associated with them) in a manner that is intrinsically hierarchical.  Disgust has no role to play in the moral life of a modern democratic citizen, she concludes.

But I can't help but think that things are slightly more complicated -- and as a result, possibly much more interesting! -- than this.

Of course, I'm thinking about this issue because I'm at least momentarily obsessed with the role that disgust is playing in public reactions to the death of a 2-year-old girl in Kentucky, who was shot by her 5-year-old brother who was "playing" with his "Crickett," a miniaturized but authentic and fully operational .22 caliber rifle marketed under the slogan "my first gun!"

The Crickett disgusts people. Or so they say-- over & over. And I believe them. I believe not only that they are experiencing a "negative affective reaction" but that what they are feeling is disgust.  Because I am experiencing that feeling, too, and the sensibility really does bear the signature elements of disgust.

I am sickened by the images featured in the manufacturer's advertising: the beaming, gap-toothed boy discovering a Crickett when he tears open a gift-wrapped box (likely it is his birthday; "the first gun" ritual is the "bar mitzvah of the rural Southern WASP," although he is at least 3 yrs south of 13); the determined elementary school girl taking aim with the model that has the pink faux-wood stock; the envious neighbor boy ("I wish I had one!"), whose reaction is geared to fill parents with shame for putting their son at risk of being treated as an outcast (yes, their son; go ahead & buy your tomboy the pink-stock Crickett, but if she prefers, say, to make drawings or to read about history, surely she won't be mocked and derided).

These images frighten me. They make me mad.  And they also truly—literally—turn my stomach.

I want to bury the Crickett, to burn it, destroy it. I want it out of my sight, out of anyone's, because I know that it--and what it represents--can contaminate the character, corrupt it.

I'm no "conservative" and neither is anyone else whom I observe (they are all over the place) expressing disgust toward the Crickett.

But of course, this doesn’t mean "liberals" (am I one? I suppose, though what passes for “liberal” in contemporary political discourse & a lot of scholarly discourse too is so philosophically thin and so historically disconnected that it demeans a real Liberal to see the inspired moral outlook he or she has inherited made to bear the same label. More on that presently) have forgotten the harm principle.

The harm guns cause to others -- just look at the dead 2-year-old girl in Kentucky, for crying out loud! -- not the "disgust" they feel toward them, is the reason they want to ban (or at least restrict) them!

Yes, and it's why they have historically advocated strict regulation (outright banning, if possible) of swimming pools, which are orders of magnitude more lethal for children . . . .

And why President Obama is trying so hard to get legislation passed that would get America out of the "war on drugs," the collateral damage of which includes many, many times more kids gunned down in public than died in Newtown. . . .

Look:  “liberals” want to enact background checks, ban assault rifles, prohibit carrying concealed handguns because they truly, honestly believe that these measures will reduce harm.

But they truly, honestly believe these things--despite the abundant evidence that such measures will have no meaningful impact on homicide, and are certain to do less than many many other things they ignore -- because they are disgusted by guns. 

We impute harm to what disgusts us; and we are disgusted by behavior that violates the moral norms that we hold in common with others and that define our understanding of the best way to live.

The "we" here, moreover, is not confined to "liberals."  

"Conservatives" are in the same motivated-reasoning boat. They are "disgusted" by all kinds of things--drugs, homosexuality, rap music (maybe even drones!).  But they say we should "ban"/"control" etc. such things because of the harms they cause.  

It's not characteristic of ordinary people who call themselves "conservatives"  that they see violation of "sacred" norms as a ground for punishing people independently of harm. Rather it's characteristic of them to see harm in what disgusts them. Just as "liberals" do! 

The difference between "liberals" and "conservatives" is in what they find disgusting, and hence what they see as harmful and thus worthy of legal restriction.

Or at least that is what many thoughtful scholars -- Mary Douglas, William Miller, and Roger Giner-Sorolla, among others -- have argued.

Our study of cultural cognition is, of course, inspired by this basic account, and although we haven't (so far) attempted to include observation and measurement of disgust or other identifiable moral sensibilities in our studies, I think our results are more in keeping with this position than with any that sees "conservativism" as uniquely bound up with "disgust" -- or with any that tries to explain the difference in the perceptions of risk of ordinary people with reference to moral styles that consciously place varying degrees of importance on "harm."

I wouldn't say, of course, that the Haidt-Nussbaum position (let's call it) has been "disproven" etc.  This work is formidable, to say the least! Whether there are differences in the cognitive and emotional processes of "liberals" and "conservatives" (as opposed to differences in the norms that orient those processes) is an important, difficult question that merits continued thoughtful investigation.

Still, it is interesting to reflect on why accounts that treat "liberals" as concerned with "harm" and "conservatives," alone, as concerned with or motivated by "disgust" are as popular as they are—not among psychologists or others who are able and who have made the effort to understand the nature of the evidence here, but among popular consumers of such work who take its "take away" uncritically, without reflection on the strength of the evidence or the cogency of the inferences to be drawn from it (this is sad; it is a reflection of a deficit in ordinary science intelligence).

Here's a conjecture: because we are all Liberals.  

I’m not using the term “Liberal” in this sense to refer to points to the left of center on the 1-dimensional right-left spectrum that contemporary political scientists and psychologists use to characterize popular policy preferences.

The Liberalism I have in mind refers to a distinctive understanding of the relationship between the individual and the state. What’s distinctive about it, in fact, is that the individual comes first. The apparatus of the state exists to secure the greatest degree of equal liberty for individuals, who, aside from their obligation to abide by laws that serve that end, must be respected as free to pursue happiness on terms of their own choosing.

The great mass of ordinary people who call themselves “conservatives” in the US (and in Australia, in the UK, in France, Germany, Canada . . .) are as committed to Liberalism in this sense as are those who call themselves “liberals” (although in fact, the great mass of people either don’t call themselves “conservative” or “liberal” or, if they do, don’t really have any particular coherent idea of what doing so entails). They are so perfectly and completely committed to Liberalism that they can barely conceive of what it would look like to live in a political regime with a different animating principle.

The currency of disgust is officially valueless in the Liberal state’s economy of political justification. Under the constitution of the Liberal State, the offense one group of citizens experience in observing or knowing that another finds satisfaction in a way of life the first finds repulsive is not a cognizable harm.

We all know this—better, just are this, whether or not we “know” it; it’s in the nature of a political regime to make its animating principle felt even more than “understood.” And we all honestly believe that we are abiding by this fundamental principle when we demand that behavior that truly disgusts us—the practice of same-sex or polygamous marriage, the consumption of drugs, the furnishing of a child with a “Crickett,” and the like—be prohibited not because we find it revolting but because it is causing harm.

As a result, the idea that we are unconsciously imputing “harm” selectively to what disgusts us (or otherwise offends sensibilities rooted not in our commitment to avoiding harm to others but in our commitment to more culturally partisan goods) is unsettling, and like many unsettling things a matter we tend to discount.

At the same time, the remarkable, and everywhere perfectly obvious congruence of the disgust sensibilities and perceptions of harm formed by those who hold cultural and political commitments different from our own naturally suggests to us that those others are either attempting to deceive us or are in fact deceiving themselves via a process of unconscious rationalization.

This is in fact a process well known to social psychology, which calls it “naïve realism.”  People are good at recognizing the tendency of those who disagree with them to fit their perceptions of risk and other facts related to contested policy issues to their values and group commitments. Ordinary people are realists in this sense. At the same time, they don’t readily perceive their own vulnerability to the very same phenomenon. This is the naïve part!

Here, then, people with “liberal” political outlooks can be expected to credit work that tells them that “conservatives” are uniquely ilLiberal—that “conservatives,” as opposed to “liberals,” are consciously or unconsciously evaluating behavior with a morality that is guided by disgust rather than harm.

All of this is separate, of course, from whether the work in question is valid or not. My point is simply that we can expect findings of that sort to be accepted uncritically by those whose cultural and political predispositions it gratifies.

Would this be so surprising?  The work in question, after all, is itself applying the theory of “motivated cognition,” which predicts this sort of ideologically selective assessment of the strength of empirical evidence.

Still, that motivated reasoning would generate, on the part of the public, an ideological slant in the disposition to credit evidence that illiberal sensibilities disproportionately guide the moral judgments of those whose ideology one finds abhorrent (disgusting, even) is, as I indicated, only a conjecture. 

In fact, I view the experiment that I performed on cognitive reflection, ideology and motivated reasoning as effectively modeling this sort of process. 

But like all matters that admit of empirical assessment, the proposition that ideologically motivated reasoning will create support for the proposition that aspects of it—including the cognitive force of “disgust” in orienting perceptions of harm—are ideologically or culturally asymmetric is not something that can be conclusively established by a single empirical study. Indeed, it is not something that can ever be “conclusively” settled; it is a matter on which beliefs must always be regarded as provisional and revisable in light of whatever the evidence might show.

In the meantime, we can enjoy the excellent work of scholars like Haidt and Nussbaum, and the competing positions of theorists and empiricists like Miller, Douglas, and Giner-Sorolla, as compensation for having to endure the depressing spectacle of cultural polarization over matters like guns, climate change, nuclear power, the HPV vaccine, drugs, unorthodox sex practices. . . etc. etc.

(Some) references:

Douglas, M. Purity and danger; an analysis of concepts of pollution and taboo. (Praeger, New York; 1966).

Giner-Sorolla, R. & Chaiken, S. Selective Use of Heuristic and Systematic Processing Under Defense Motivation. Pers Soc Psychol B 23, 84-97 (1997).

Giner-Sorolla, R., Chaiken, S. & Lutz, S. Validity beliefs and ideology can influence legal case judgments differently. Law Human Behav 26, 507-526 (2002).

Graham, J., Haidt, J. & Nosek, B.A. Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology 96, 1029-1046 (2009).

Gutierrez, R. & Giner-Sorolla, R. Anger, disgust, and presumption of harm as reactions to taboo-breaking Behaviors. Emotion 7, 853-868 (2007).

Haidt, J. & Graham, J. When Morality Opposes Justice: Conservatives Have Moral Intuitions that Liberals may not Recognize. Social Justice Research 20, 98-116 (2007). 

Haidt, J. & Hersh, M.A. Sexual morality: The cultures and emotions of conservatives and liberals. J Appl Soc Psychol 31, 191-221 (2001). 

Horvath, M.A.H. & Giner-Sorolla, R. Below the age of consent: Influences on moral and legal judgments of adult-adolescent sexual relationships. J Appl Soc Psychol 37, 2980-3009 (2007).

Kahan, D. Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study. CCP Working Paper No. 107 (2012).  

Kahan, D.M. The Cognitively Illiberal State. Stan. L. Rev. 60, 115-154 (2007). 

Kahan, D.M. The Progressive Appropriation of Disgust, in Critical America. (ed. S. Bandes) 63-79 (New York University Press, New York; 1999). 

Miller, W.I. The Anatomy of Disgust. (1997).

Nussbaum, M.C. Hiding from humanity: Disgust, Shame, and the Law. (Princeton University Press, Princeton, N.J.; 2004).

Robinson, R.J., Keltner, D., Ward, A. & Ross, L. Actual Versus Assumed Differences in Construal: "Naive Realism" in Intergroup Perception and Conflict. J. Personality & Soc. Psych. 68, 404-417 (1995).

Sherman, D.K., Nelson, L.D. & Ross, L.D. Naïve Realism and Affirmative Action: Adversaries are More Similar Than They Think. Basic & Applied Social Psychology 25, 275-289 (2003).


p.s. check out the great bibliography of writings by the talented and prolific psychologist Yoel Inbar.



More on "cultural availability" & the Crickett... Ignored stories of "defensive use" (by children wielding the "Crickett" no less!)

I posted something a few days ago on the "cultural availability" effect & gun accidents involving children. The "effect" consists in the impact that cultural predispositions have in "selecting" for attention events or stories that gratify rather than disappoint one's cultural predispositions on risk.

On guns, then, the individuals predisposed to see guns as risky--egalitarian communitarians, or "ECs," for the most part--are much more likely to take note of, assign significance to, and recall instances in which guns result in a horrific accident involving a child -- like the recent, and genuinely horrific (also heart-breakingly sad) story of the 2-year-old girl shot by her 5-year-old brother with the 5-year-old's "Crickett," a miniaturized but fully authentic and functional .22 marketed under the motto, "My first gun!"

Because such stories gratify the predisposition of ECs to see guns as dangerous, they fixate on such reports. Indeed, because commercial news providers anticipate the demand of ECs to be supplied with culturally gratifying proof that behavior they find disgusting (like the significance of "the first gun" ritual for people for whom the gun is rich with positive cultural meanings) causes harm, such stories become the occasion for a media feeding frenzy.

The disproportionate attention such incidents get relative to fatal accidents that do not gratify EC risk predispositions causes ECs to overestimate the risk of guns relative to other, less culturally evocative but more actuarially significant sources of risk to children -- like swimming pools.

BTW, I'm picking on ECs only because I'm talking about gun risks here; "cultural availability" applies just as much to individuals with hierarchical individualistic -- "HI" -- & other competing cultural predispositions, and is part of what drives cultural polarization over what scientific consensus is on issues like climate change and nuclear power as well as guns.

But in any case, the same dynamics also result in ECs ignoring stories that disappoint their expectations about the risks that guns pose.  As HIs emphasize, guns also sometimes are used defensively to ward off a violent attack, and in this sense can be expected to reduce the risk of violence to vulnerable people (children, but also women and minorities, who are disproportionately victimized). 

The actual prevalence of so-called "defensive use" of guns is (unsurprisingly) a matter that is subject to considerable debate, both among gun activists & among empirical researchers.

Nevertheless, there are lots of stories out there, in the media and in social media, that fit this account.  But ECs are (the cultural availability effect predicts) much less likely to take note of, assign significance to, and recall stories that support the conclusion that guns are sometimes used to protect life and thus likely systematically to underestimate defensive uses. They will then dismiss as specious the argument that there is this off-setting effect to take into consideration when addressing the impact of gun regulations. Of course, HIs can be expected to fixate on such stories -- with the help of an obliging media (like, say, Fox News or Fox network local affiliates) -- and thus overestimate both the frequency of defensive uses and the burden that gun regulations would place on use of guns for lawful self-defense.

Example ... This video of a news story reporting an 11 year-old girl's brave confrontation with household intruders whom she scared off with-- you guessed it -- a Crickett (or equivalent; it's not the only product of this sort).  One with a fetching pink-colored rifle stock designed to appeal to girls (or to HI parents of girls eager to fight "sexism" by making roles featuring honor norms available to their daughters as well as their sons).

Brave girl defends home against intruder with Crickett! (Don't worry: it's "soft fire," the mfr tells us in its own video, meaning minimal recoil, reducing risk of shoulder separation)

The story aired on a Fox affiliate local news program, of course! (Check out the icon for jbranstter04, who uploaded it; what do you think his -- or her? -- cultural orientation might be?)

So, ECs are unlikely to see it. If they do, they will roll their eyes and dismiss it as absurd.

But if they can get to the end-- and can force themselves to pay attention! -- they'll find (I'm sure) the bit of information that they need, too, to reconcile what they've been forced to observe with what they already know is the truth about how the world works.

It turns out the "intruders" were people who knew the family -- and who broke in to steal their cache of guns. 

Seriously, one can't invent material this good.


Who is disgusted by kids' "toy" guns & drones, and why?

I was reflecting on the "disgust and revulsion" occasioned by "the Crickett"--a (slightly) miniaturized but fully authentic, functional .22 rifle that is marketed for children ("my first rifle!"), one of which figured in the widely reported fatal shooting of a 2-year-old by her 5-year-old brother (the Crickett "owner") in Kentucky.

That got me to thinking about the links between cultural styles, the role of technological objects in expressing and propagating them, and the way in which emotions figure both in the value (or disvalue) we attach to such objects and the risks (or benefits) we see those objects as posing (or conferring)....

I thought maybe I'd write about this, but I was not sure exactly how to put things or exactly what I think anyway. Actually, those problems rarely stop me, but still, I thought I'd try something else that might both communicate my apprehension of the phenomenon and motivate others to try to help me make sense of it.


I admit that I am disgusted by the Crickett (I admit, too, that I'm slightly concerned about why, and about the challenge this reaction creates for me in trying to see things in a fair and impartial light and to deal with others in a respectful and tolerant way).  

But the Bumblebee "first drone" strikes me (so to speak) as wondrous and beautiful--and a brilliant child's toy! Indeed, I'd very much like one myself.

One of the reasons I can't get one is that it doesn't exist--yet. But I'm sure someone-- someone else who followed this week's less widely heralded reporting on the progress of Harvard University's "Robobee" project-- is working on it. (You can get the Crickett, or at least could until a couple days ago; the "newsletter" for it is real & was captured from the internet before the recent Kentucky shooting, after which the company shut down its internet site.)

At the same time, I know that the Bumblebee-- and the anticipated companion "first drones" that its manufacturer has in the works--will fill many with horror, revulsion, disgust. As a result, it will fill them with fear of all the harms--to public safety, to privacy, and to other goods--that private drones pose.

Is that part of what I like about the Bumblebee? I don't think so; I sure hope not, in fact.

But knowing they feel this way almost fills me with resolve to buy one for myself, and another two or three for holiday gifts and birthday presents for children whose families I know will want them to grow up sharing their fascination and wonder for science, technology, and human ingenuity . . . .

A while back, I posted a 2-part series "Who are these guys?," which responded to Jen Brisseli's request for a more vivid picture of the sorts of people who subscribe to the cultural styles defined by the "cultural cognition worldview" framework.  

This post is in the spirit of that, I think. Indeed, I think it is in the spirit of how Jen Brisseli wants to promote reflection on science generally with her "designing science" conception of science communication--this way of proceeding likely occurred to me b/c I have had the benefit of reflecting on what she is up to.

But now my question is this: who would be filled with appreciation and passion, and who with revulsion & disgust, by these "toys"?  And why?  Who are these guys?

In this regard -- and getting back to the form of inquiry and communication that I usually use to address such matters -- it's interesting to consider perceptions of technology risks.

In one CCP study of nanotechnology risk perceptions, we found that there was no cultural division over its risks and benefits generally. Not surprising, since 80% of the subjects had no idea what it was.

But when we exposed another group of subjects to a small amount of scientifically accurate, balanced information on nanotechnology risks and benefits, those individuals polarized along lines consistent with cultural predispositions associated with pro- and anti-technology outlooks.

The cultural group that credited the information about nanotechnology benefits and discounted the information about risks, moreover, was generally hierarchical and individualistic in orientation.  People with these outlooks are generally skeptical of environmental risks--ones relating to nuclear power and climate change, e.g.

But they also are the ones most predisposed to see gun risks as low--and see the risks associated with excessive control, including impairment of lawful self-defense, as high.  They believe, too, that empirical evidence compiled by scientists backs them up on this, and that their views on both climate change and nuclear power are also consistent with scientific consensus.

Egalitarian communitarian subjects are generally very sensitive to technology risks -- they worry a lot about both climate change and nuclear power.  

They also are sickened by guns. They find them disgusting.  And consistent with cultural cognition they see guns as extremely risky, and gun control as extremely effective--and believe that empirical evidence compiled by scientists backs them up on this, just as such evidence backs up their views about environmental and technological risks.

I bet people who buy "the Crickett" for their young children are mainly hierarchical and individualist. Does that mean they would also like the Bumblebee?

Would egalitarian communitarians, who I'm sure tend to be very disturbed by the Crickett, think the Bumblebee is also an abomination? And of course a tremendous risk to public safety and various other elements of well-being?

I sort of think that this conclusion isn't really right. That it's too simple....

"Group-grid," as my collaborators and I conceive of it at least, is a model.  All models are simplifications. Simplifying models are useful. But they also are necessarily false. 

If the insight that is enabled by simplifying complicated true things outweighs the distortion associated with what is necessarily false about simplifying them, then a model advances understanding.

But even a model that advances understanding in this way with respect to some issues or for a period of time can become one that doesn't advance understanding -- because what is false about it obscures insight into complicated things that are true -- with respect to some other set of issues, or with respect even to the same ones at a later time .... 

Anyway, I plan to keep my eye on drones.  I think they are or can be beautiful.  But I know that they also sicken and disgust others.  Who? and Why?



Who sees accidental shootings of children as evidence in support of gun control & why? The "cultural availability" effect

I don’t really like guns much.  I also hate to get wet, so rarely go swimming.

But what I do like to do -- because it is an instance of the sort of thing I study -- is think about why accidental shootings of young children (a) get so much media coverage relative to the other things that kill children; and (b) are—or, more likely, are thought to be—potent occasions for drawing public attention to the need for greater regulation of firearms.

Consider guns vs. (what else?!) swimming pools (if the comparison is trite, don’t blame me; blame the dynamics that make people keep resisting what the comparison illustrates about culture and cognition). 

  • Typically there are < 1,000 (more like 600-800) accidental gun homicides in US per yr. About 30 of those are children age 5 or under. 

I think background checks of the sort “defeated” in the US Senate (because passed by a majority that wasn’t big enough; I need a civics refresher course on how Congress works...) would be a good idea.  I also would support a ban on “assault rifles.”

But it’s obvious, to anyone who reflects on the matter if not to those who don't, that the incidence of the accidental shootings of children adds zero weight to the arguments that can be made in support of those policies.

Also obvious that neither of these policies—or any of the other even more ambitious ones that gun control advocates would like to enact (like bans on carrying of concealed weapons)—would reduce the deaths of young kids by nearly as much as many many many other things. I’m not thinking of banning swimming pools, actually; but how about, say, ending the “war on drugs,” which indisputably fuels deadly forms of competition to reap the super-competitive profits that a black market affords?

The pool comparison, though, does show how the “culture war” over guns creates not only a very sad deformation of political discourse but also a weirdly selective attention to empirical evidence, and a susceptibility to drawing unconvincing inferences from it.

Like I said, I like to think about these things.

One way to understand cultural cognition is that it shows how cultural values interact with more general psychological dynamics that shape perceptions of risk. 

One of these is the “availability effect,” which refers to the tendency of people to overestimate the incidence of risks involving highly salient or emotionally gripping events relative to less salient, less sensational ones.  This might explain why people seem so much more concerned about the risk of an accidental shooting of a child than the accidental drowning of one.

But the explanation is not satisfying because it begs the question of what accounts for the selective salience of various risks—what makes some but not others gripping enough to get our attention, or to get the attention of those who make a living showing us attention-grabbing things?  Cultural cognition theory says the answer is the cultural congeniality of seeing instances of harm that gratify one’s cultural predispositions. 

Moreover, because predispositions are heterogeneous, we should expect the “cultural availability effect” to generate systematic differences in perceptions of risk among people with different values.  In this case, it is the people whose values predispose them to feel “revulsion and disgust” (see the news story in my graphic) that have their attention drawn to accidental shootings of children and who treat them as evidence that the failure to enact background checks, assault rifle bans, etc., is increasing homicide.

On that note, a footnote from a paper that discusses this aspect of the theory of cultural cognition:

In one scene of Michael Moore’s movie Bowling for Columbine, the “documentary” team rushes to get footage from the scene of a reported accidental shooting only to discover when they arrive that television news crews are packing up their gear. “What’s going on? Did we miss it,” Moore asks, to which one of the departing TV reporters answers, “no, it was a false alarm—just a kid who drowned in a pool.” One would suspect Moore of trying to make a point—that the media’s responsiveness to the public obsession with gun accidents contributes to the public’s inattention to the greater risk for children posed by swimming pools—if the movie itself were not such an obvious example of exactly this puzzling, and self-reinforcing distortion. Apparently, it was just one of those rare moments when 1,000 monkeys mindlessly banging on typewriters (or editing film) surprise us with genuine literature.


Even *more* Q & A on "cultural cognition scales" -- measuring "latent dispositions" & the Dake alternative

Given how interesting the conversations were in the last two “Q&A” posts (here & here), I thought—heck, why not another. 

Here are a set of reflections in response to an email inquiry from a thoughtful person who wanted to understand what it means to treat the cultural worldview scales as “latent” measures of cultural dispositions, and why we—my collaborators & I in the Cultural Cognition Project—thought it necessary to come up with alternatives to the scales that Karl Dake initially formulated to test hypotheses relating to Douglas & Wildavsky’s “cultural theory of risk.” For elaboration, see Kahan, Dan M. "Cultural Cognition as a Conception of the Cultural Theory of Risk." Chap. 28 In Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk, edited by R. Hillerbrand, P. Sandin, S. Roeser and M. Peterson. 725-60: Springer London, Limited, 2012.

Question: What do you mean when you say the "cultural cognition worldview scales" measure a "latent variable"? And that they "work better" than Dake's scales in this regard?

My answer:

(A) Let's hypothesize that there is inside each member of a group an unobserved & unobservable thing -- which we'll call that group's cultural predisposition -- that interacts with the mental faculties and processes by which that person processes information in a way that tends to bring his or her perceptions of risk into alignment with those of every other member of the group. This would be an explanation (or part of one, at least) for "the science communication problem"-- the failure of valid, compelling, widely available scientific evidence to resolve political conflict over risks and other facts to which that evidence speaks.

(B) Although we can't observe cultural dispositions directly, we might still be able to make valid inferences about their existence & nature by identifying observable things that we would expect to correlate with them if the predispositions exist and if they have the nature that we might hypothesize they do. We had reason to believe that atoms existed long before they were "seen" under a scanning tunneling microscope because Einstein demonstrated that their existence would very precisely explain the observable (and until then very mysterious!) phenomenon of Brownian motion (in fact, we only "see" atoms with an ST microscope b/c we accept that the observable images they produce are best explained by atoms, which of course remain unobservable no matter what apparatus we use to "look" at them). Similarly, we might treat certain patterns of responses among a group's members as evidence that the predispositions exist and behave a certain way if such conclusions furnish a more likely explanation for those patterns than other potential causes and if we would not expect to see the patterns otherwise.  Within psychology, this is known as a "latent variable" measurement strategy, in which "manifest" or observable "indicators"--here the patterns of responses -- are used to measure a posited "latent" or unobserved variable --"cultural predispositions" in our case.

(C) That's what the items in our cultural worldview scales are -- indicators of the latent cultural predispositions that we hypothesize explain the science communication problem. The scales reflect a theory that people would not be expected to respond to the statements the items comprise in patterns that sort individuals out along two continuous, cross-cutting dimensions unless people had "inside" of them group predispositions that correspond to "hierarchy individualism," "hierarchy communitarianism," "egalitarian individualism," and "egalitarian communitarianism."  On this view, responses are understood to be "caused" by the predispositions. The causal influence is only crudely understood and thus only imprecisely measured by each item; the whole point of having multiple ones is to aggregate responses to them, a process that will make the "noise" associated with their imprecision balance or cancel out & thus magnify the "signal" associated with them.  The resulting scales can be viewed as "measuring" the intensity of the unobserved predispositions.
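The noise-cancellation logic is easy to see in a toy simulation (my own illustrative sketch, not CCP code): generate an unobservable "predisposition," derive several noisy item responses from it, and compare how well a single item versus the aggregated scale tracks the latent variable. All the numbers here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000                               # simulated survey respondents
latent = rng.normal(size=n)            # the unobserved cultural predisposition

# six scale items: each is a weak "signal" of the disposition
# plus a lot of item-specific noise
k = 6
items = 0.5 * latent[:, None] + rng.normal(size=(n, k))

r_single = np.corrcoef(latent, items[:, 0])[0, 1]    # one item alone
composite = items.mean(axis=1)                       # aggregate the items
r_composite = np.corrcoef(latent, composite)[0, 1]   # noise cancels, signal adds

print(f"single item r = {r_single:.2f}; composite scale r = {r_composite:.2f}")
```

With these (made-up) loadings the composite correlates with the latent trait at roughly .75-.80, versus roughly .45 for any one item alone -- the "signal magnification" that aggregation is supposed to buy.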

(D) For this strategy for "observing" or "measuring" cultural predispositions to be valid, various things must be true.  The most basic one is that the items assigned to the scales must "perform" as the underlying theory posits.  The responses to them must correlate with each other in ways that generate the pattern one would expect if they are indeed "measuring" the cultural predispositions.  If the items correlate in some other pattern, the scales are not a 'valid" measure of the posited dispositions.  If they correlate in the expected pattern, but the correlations are very weak, then the scales can be viewed as "unreliable," which refers to the degree of precision by which an instrument measures whatever quantity it is supposed to be measuring (imagine that your bathroom scale had some sort of defect and as a result gave readings that erratically over- or underestimated people's weight; it wouldn't be very reliable in that case).
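Reliability in this sense is conventionally quantified with Cronbach's alpha, computable directly from the item variances. A minimal sketch (my illustration, not CCP's actual scoring code; the data are simulated):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=2000)
# items that share one underlying signal vs. items that share nothing
coherent = 0.7 * latent[:, None] + rng.normal(size=(2000, 6))
unrelated = rng.normal(size=(2000, 6))

print(cronbach_alpha(coherent), cronbach_alpha(unrelated))
```

The coherent scale comes out around .7 here, while items with nothing in common score near zero -- the "unreliable" pattern in which a scale's items simply aren't measuring the same thing.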

(E) The Dake scales did not perform well.   They were not reliable; they didn't correlate with *one another* as one would expect if the ones that were placed in the same scale were measuring the same thing. Moreover, to the extent that they seemed to be measuring things "inside" people, those things did not fit the expectations one would form about their relationship under the theory posited by the "cultural theory of risk." 

(F) Once one has valid & reliable scales, one does not yet have evidence that cultural predispositions explain the science communication problem.  Rather one has measures of what one is prepared to regard as cultural predispositions.  At that point, one must devise studies geared to generating correlations between the predispositions, as measured by the valid and reliable scales, and risk perceptions, as measured in some appropriate way.  Those correlations must be of a sort that one would expect to see if the predispositions are causing risk perceptions in the way one hypothesizes but would not expect to see otherwise. 



Deja voodoo: the puzzling reemergence of invalid neuroscience methods in the study of "Democrat" & "Republican Brains"

I promised to answer someone who asked me what I think of Schreiber, D., Fonzo, G., Simmons, A.N., Dawes, C.T., Flagan, T., Fowler, J.H. & Paulus, M.P. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans, PLoS ONE 8, e52970 (2013).

The paper reports the results of an fMRI—“functional magnetic resonance imaging”— study that the authors describe as showing that “liberals and conservatives use different regions of the brain when they think about risk.” 

They claim this finding is interesting, first, because, it “supports recent evidence that conservatives show greater sensitivity to threatening stimuli,” and, second, because it furnishes a predictive model of partisan self-identification that “significantly out-performs the longstanding parental model”—i.e., use of the partisan identification of individuals’ parents.

So what do I think?  Not much, frankly.

Actually, I think less than that: the paper supplies zero reason to adjust any view I have—or anyone else does, in my opinion—on any matter relating to individual differences in cognition & ideology.

To explain why, some background is necessary.

About 4 years ago the burgeoning field of neuroimaging experienced a major crisis. Put bluntly, scores of researchers employing fMRI for psychological research were using patently invalid methods—ones the defects in which had nothing to do with the technology of fMRIs but rather with really simple, basic errors relating to causal inference.

The difficulties were exposed—and shown to have been present in literally dozens of published studies—in two high profile papers: 

1.   Vul, E., Harris, C., Winkielman, P. & Pashler, H. Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition, Perspectives on Psychological Science 4, 274-290 (2009); and

2.   Kriegeskorte, N., Simmons, W.K., Bellgowan, P.S.F. & Baker, C.I. Circular analysis in systems neuroscience: the dangers of double dipping, Nature Neuroscience 12, 535-540 (2009).

The invalidity of the studies that used the offending procedures (ones identified by these authors through painstaking detective work, actually; the errors were hidden by the uninformative and opaque language then typically used to describe fMRI research methods) is at this point beyond any dispute.

Not all fMRI studies produced up to that time displayed these errors. For great ones, see any done (before and after the crisis) by Joshua Greene and his collaborators.

Today, moreover, authors of “neuroimaging” papers typically take pain to explain—very clearly—how the procedures they’ve used avoid the problems that were exposed by the Vul et al. and Kriegeskorte et al. critiques. 

And again, to be super clear about this: these problems are not intrinsic to the use of fMRI imaging as a technique for testing hypotheses about mechanisms of cognition. They are a consequence of basic mistakes about when valid inferences can be drawn from empirical observation.

So it’s really downright weird to see these flaws in a manifestly uncorrected form in Schreiber et al.

I’ll go through the problems that Vul et al. & Kriegeskorte et al. (Vul & Kriegeskorte team up here) describe, each of which is present in Schreiber et al.

1.  Opportunistic observation. In an fMRI, brain activation (in the form of blood flow) is measured within brain regions identified by little three-dimensional cubes known as “voxels.” There are literally hundreds of thousands of voxels in a fully imaged brain.

That means there are literally hundreds of thousands of potential “observations” in the brain of each study subject. Because there are constantly varying activation levels going on throughout the brain at all times, one can always find “statistically significant” correlations between stimuli and brain activation by chance. 

This was amusingly illustrated by one researcher who, using then-existing fMRI methodological protocols, found the region that a salmon cleverly uses for interpreting human emotions.  The salmon was dead. And the region it was using wasn’t even in its brain.
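The dead-salmon result is just the arithmetic of mass significance testing, which a toy simulation (mine, purely illustrative) reproduces: give every "voxel" nothing but noise, test all of them, and "significant" group differences appear anyway.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100_000
n_per_group = 20                      # simulated subjects per party

# pure noise: by construction there is NO real Democrat/Republican
# difference at any voxel
dem = rng.normal(size=(n_per_group, n_voxels))
rep = rng.normal(size=(n_per_group, n_voxels))

# two-sample z statistic at every voxel (the noise sd is exactly 1 here)
z = (dem.mean(axis=0) - rep.mean(axis=0)) / np.sqrt(2 / n_per_group)

hits = int((np.abs(z) > 3.29).sum())  # |z| > 3.29 ~ two-tailed p < .001
print(f"'significant' voxels found in pure noise: {hits}")
```

Roughly a hundred voxels clear even a p < .001 threshold by chance alone. That is why the ROI has to be specified in advance and the search constrained: fishing across the whole brain guarantees a catch.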

Accordingly, if one is going to use an fMRI to test hypotheses about the “region” of the brain involved in some cognitive function, one has to specify in advance the “region of interest” (ROI) in the brain that is relevant to the study hypotheses. What’s more, one has to carefully constrain one’s collection of observations even from within that region—brain regions like the “amygdala” and “anterior cingulate cortex” themselves contain lots of voxels that will vary in activation level—and refrain from “fishing around” within ROIs for “significant effects.”

Schreiber et al. didn’t discipline their evidence-gathering in this way.

They did initially offer hypotheses based on four precisely defined brain ROIs in "the right amygdala, left insula, right entorhinal cortex, and anterior cingulate."

They picked these, they said, based on a 2011 paper (Kanai, R., Feilden, T., Firth, C. & Rees, G. Political Orientations Are Correlated with Brain Structure in Young Adults. Current Biology 21, 677-680 (2011)) that reported structural differences—ones, basically, in the size and shape, as opposed to activation—in these regions of the brains of Republicans and Democrats.

Schreiber et al. predicted that when Democrats and Republicans were exposed to risky stimuli, these regions of the brain would display varying functional levels of activation consistent with the inference that Republicans respond with greater emotional resistance, Democrats with greater reflection. Such differences, moreover, could also then be used, Schreiber et al. wrote, to "dependably differentiate liberals and conservatives" with fMRI scans.

But contrary to their hypotheses, Schreiber et al. didn’t find any significant differences in the activation levels within the portions of either the amygdala or the anterior cingulate cortex singled out in the 2011 Kanai et al. paper. Nor did Schreiber et al. find any such differences in a host of other precisely defined areas (the "entorhinal cortex," "left insula," or "Right Entorhinal") that Kanai et al. identified as differing structurally among Democrats and Republicans in ways that could suggest the hypothesized differences in cognition.

In response, Schreiber et al. simply widened the lens, as it were, of their observational camera to take in a wider expanse of the brain. “The analysis of the specific spheres [from Kanai et al.] did not appear statistically significant,” they explain, “so larger ROIs based on the anatomy were used next.”

Using this technique (which involves creating an “anatomical mask” of larger regions of the brain) to compensate for not finding significant results within more constrained ROI regions specified in advance amounts to a straightforward “fishing” expedition for “activated” voxels.

This is clearly, indisputably, undeniably not valid.  Commenting on the inappropriateness of this technique, one commentator recently wrote that “this sounds like a remedial lesson in basic statistics but unfortunately it seems to be regularly forgotten by researchers in the field.”

Even after resorting to this device, Schreiber et al. found “no significant differences . . .  in the anterior cingulate cortex,” but they did manage to find some "significant" differences among Democrats' and Republicans' brain activation levels in portions of the “right amygdala” and "insula."
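To see why this sort of ever-widening search is guaranteed to turn up “activated” voxels somewhere, here is a toy numpy simulation. It is my own illustration, not a re-analysis of anyone’s data: the subject count, voxel count, and party labels are all made-up assumptions, and every voxel is pure noise by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: NO real group difference anywhere in the "brain" --
# every voxel's activation is pure noise.
n_subjects, n_voxels = 40, 50_000       # a "widened" search region
party = rng.integers(0, 2, n_subjects)  # hypothetical 0 = Democrat, 1 = Republican
activations = rng.normal(size=(n_subjects, n_voxels))

# Welch t-statistic comparing the two groups at every voxel at once.
g0, g1 = activations[party == 0], activations[party == 1]
t = (g1.mean(axis=0) - g0.mean(axis=0)) / np.sqrt(
    g0.var(axis=0, ddof=1) / len(g0) + g1.var(axis=0, ddof=1) / len(g1))

# Count voxels that look "significant" at the conventional |t| > 2.
n_hits = int((np.abs(t) > 2).sum())
print(n_hits)  # thousands of "activated" voxels, every one of them noise
```

Roughly 5% of the null voxels clear the conventional threshold by chance, so an analyst who keeps enlarging the search area until something “lights up” is guaranteed a “result.”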

2.  “Double dipping.” Compounding the error of opportunistic observation, fMRI researchers—prior to 2009 at least—routinely engaged in a practice known as “double dipping.” After searching for & zeroing in on a set of “activated” voxels, the researchers would then use those voxels and only those to perform the statistical tests reported in their analyses.

This is obviously, manifestly unsound.  It is akin to running an experiment, identifying the subjects who respond most intensely to the manipulation, and then reporting the effect of the manipulation only for them—ignoring subjects who didn’t respond or didn’t respond intensely. 

Obviously, this approach grossly overstates the observed effect.
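A toy simulation makes the inflation easy to see. Again, everything here is illustrative (the counts are arbitrary and no real effect exists anywhere in the simulated data):

```python
import numpy as np

rng = np.random.default_rng(1)

# A "manipulation" with NO true effect on any voxel.
n_subjects, n_voxels = 40, 10_000
data = rng.normal(size=(n_subjects, n_voxels))  # responses are pure noise

# Step 1: search for voxels whose mean response crosses t > 2 by chance.
means = data.mean(axis=0)
sems = data.std(axis=0, ddof=1) / np.sqrt(n_subjects)
selected = means / sems > 2

# Step 2 -- the "double dip": estimate the effect using only those voxels.
reported_effect = float(data[:, selected].mean())
honest_effect = float(data.mean())  # effect over ALL voxels: essentially zero

print(reported_effect, honest_effect)
```

The “reported” effect comes out sizeable even though the true effect is exactly zero, because the selection step and the estimation step consumed the same noise.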

Despite this being understood since at least 2009 as unacceptable (actually, I have no idea why something this patently invalid appeared okay to fMRI researchers before then), Schreiber et al. did it. The “[o]nly activations within the areas of interest”—i.e., the expanded brain regions selected precisely because they contained voxel activations differing among Democrats and Republicans—that were “extracted and used for further analysis,” Schreiber et al. write, were the ones that “also satisfied the volume and voxel connection criteria” used to confirm the significance of those differences.

Vul called this technique “voodoo correlations” in a working paper version of his paper that got (deservedly) huge play in the press. He changed the title—but none of the analysis or conclusions in the final published version, which, as I said, now is understood to be 100% correct.

3.  Retrodictive “predictive” models. Another abuse of statistics—one that clearly results in invalid inferences—is to deliberately fit a regression model to voxels selected for observation because they display the hypothesized relationship to some stimulus and then describe the model as a “predictive” one without in fact validating the model by using it to predict results on a different set of observations.

Vul et al. furnish a really great hypothetical illustration of this point, in which a stock market analyst correlates changes in the daily reported morning temperature of a specified weather station with daily changes in value for all the stocks listed on the NYSE, identifies the set of stocks whose daily price changes are highly correlated with the station's daily temperature changes, and then sells this “predictive model” to investors. 

This is, of course, bogus: there will be some set of stocks from the vast number listed on the exchange that highly (and "significantly," of course) correlate with temperature changes through sheer chance. There’s no reason to expect the correlations to hold going forward—unless (at a minimum!) the analyst, after deriving the correlations in this completely ad hoc way, validates the model by showing that it continued to successfully predict stock performance thereafter.
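Here is a quick numpy sketch of Vul et al.’s parable. The day counts, stock counts, and “top 10” cutoff are arbitrary assumptions of mine, chosen only to make the pattern visible:

```python
import numpy as np

rng = np.random.default_rng(2)

# Daily temperature changes vs. daily price changes for thousands of
# stocks -- all of it pure noise, with no real relationship anywhere.
n_days, n_stocks = 100, 3_000
temp = rng.normal(size=n_days)
stocks = rng.normal(size=(n_days, n_stocks))

def corr_with(x, cols):
    """Pearson correlation of vector x with each column of cols."""
    xc = x - x.mean()
    cc = cols - cols.mean(axis=0)
    return (cc * xc[:, None]).sum(axis=0) / (
        np.sqrt((cc ** 2).sum(axis=0)) * np.sqrt((xc ** 2).sum()))

# "Discover" the 10 stocks most correlated with temperature...
r = corr_with(temp, stocks)
best = np.argsort(-np.abs(r))[:10]
in_sample = float(np.abs(r[best]).min())  # every pick looks impressive

# ...then validate: the SAME 10 stocks over a fresh stretch of days.
temp_new = rng.normal(size=n_days)
stocks_new = rng.normal(size=(n_days, n_stocks))
r_new = corr_with(temp_new, stocks_new[:, best])
out_of_sample = float(np.abs(r_new).mean())  # collapses toward zero

print(in_sample, out_of_sample)
```

With 3,000 null stocks, the weakest of the ten “discovered” correlations still looks substantial in-sample, yet the same stocks show essentially zero correlation the moment they are tested on data that played no role in selecting them.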

Before 2009, many fMRI researchers engaged in analyses equivalent to what Vul describes. That is, they searched around within unconstrained regions of the brain for correlations with their outcome measures, formed tight “fitting” regressions to the observations, and then sold the results as proof of the mind-blowingly high “predictive” power of their models—without ever testing the models to see if they could in fact predict anything.

Schreiber et al. did this, too.  As explained, they selected observations of activating “voxels” in the amygdala of Republican subjects precisely because those voxels—as opposed to others that Schreiber et al. then ignored in “further analysis”—were “activating” in the manner that they were searching for in a large expanse of the brain.  They then reported the resulting high correlation between these observed voxel activations and Republican party self-identification as a test for “predicting” subjects’ party affiliations—one that “significantly out-performs the longstanding parental model, correctly predicting 82.9% of the observed choices of party.”

This is bogus.  Unless one “use[s] an independent dataset” to validate the predictive power of “the selected . . .voxels” detected in this way, Kriegeskorte et al. explain in their Nature Neuroscience paper, no valid inferences can be drawn. None.
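One can watch an 82.9%-style “accuracy” appear and then vanish in a toy simulation. This is a sketch of the general selection problem, not a re-analysis of Schreiber et al.; the labels, voxel counts, and sign-weighted “classifier” are all hypothetical choices of mine:

```python
import numpy as np

rng = np.random.default_rng(3)

# Brain data with NO real relation to party anywhere.
n, n_voxels = 80, 20_000
party = rng.integers(0, 2, n)          # hypothetical party labels
brain = rng.normal(size=(n, n_voxels))

def corr_with(y, cols):
    """Pearson correlation of vector y with each column of cols."""
    yc = y - y.mean()
    cc = cols - cols.mean(axis=0)
    return (cc * yc[:, None]).sum(axis=0) / (
        np.sqrt((cc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))

# Pick the 20 voxels most correlated with party IN THIS SAMPLE, then
# "predict" party from a sign-weighted sum of those same voxels.
r = corr_with(party.astype(float), brain)
picked = np.argsort(-np.abs(r))[:20]
weights = np.sign(r[picked])

score = brain[:, picked] @ weights
in_sample = float(((score > np.median(score)).astype(int) == party).mean())

# The honest test: the SAME voxels and weights on an independent sample.
party2 = rng.integers(0, 2, n)
brain2 = rng.normal(size=(n, n_voxels))
score2 = brain2[:, picked] @ weights
out_of_sample = float(((score2 > np.median(score2)).astype(int) == party2).mean())

print(in_sample, out_of_sample)
```

The in-sample “predictive accuracy” is striking even though there is literally nothing to predict; on an independent dataset the same model falls back to coin-flip performance. That is exactly why Kriegeskorte et al. insist on validation with data that played no role in voxel selection.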

BTW, this isn’t a simple “multiple comparisons problem,” as some fMRI researchers seem to think.  Pushing a button in one’s computer program to tighten one’s “alpha” (the p-value threshold, essentially, used to avoid “type 1” errors) only means one has to search a bit harder; it still doesn’t make it any more valid to base inferences on “significant correlations” found only after deliberately searching for them within a collection of hundreds of thousands of observations.

The 2011 Kanai et al. structural imaging paper that Schreiber et al. claim to be furnishing “support” for didn’t make this elementary error. I’d say “to their credit,” except that such a comment would imply that researchers who use valid methods deserve “special” recognition. Of course, using valid methods isn’t something that makes a paper worthy of some special commendation—it’s normal, and indeed essential.

* * *

One more thing:

I did happen to notice that the Schreiber et al. paper seems pretty similar to a 2009 working paper they put out.  The main difference appears to be an increase in the sample size from 54 to 82 subjects.

There are also some differences in the reported findings: in their 2009 working paper, Schreiber et al. report greater “bilateral amygdala” activation in Republicans, not “right amygdala” only.  The 2011 Kanai paper that Schreiber et al. describe their study as “supporting,” which of course was published after Schreiber et al. collected the data reported in their 2009 working paper, found no significant anatomical differences in the “left amygdala” of Democrats and Republicans.

So, like I said, I really don’t think much of the paper.

What do others think?



Look, everybody: more Time-Sharing Experiments for the Social Sciences (TESS)!

Below is a very welcome announcement from Jeremy Freese and Jamie Druckman -- & forwarded to me by Kevin Levay -- on the continued funding of TESS, which administers accepted study designs free of charge to a stratified on-line sample.

Actually, I'm going to do a post soon -- very soon! -- on use of on-line samples (& in particular on growing use of Mechanical Turk). Suffice it to say that if you can get a study conducted by TESS, you've got yourself an A1 sample -- for free!!

We are pleased to announce that Time-Sharing Experiments for the Social Sciences (TESS) was renewed for another round of funding by NSF starting last Fall. TESS allows researchers to submit proposals for experiments to be conducted on a nationally-representative, probability-based Internet platform, and successful proposals are fielded at no cost to investigators.  More information about how TESS works and how to submit proposals is available at 

Additionally, we are pleased to announce the development of two new proposal mechanisms. TESS’s Short Studies Program (SSP) is accepting proposals for fielding very brief population-based survey experiments on a general population of at least 2000 adults. SSP recruits participants from within the U.S. using the same Internet-based platform as other TESS studies. More information about SSP and proposal requirements is available at 

TESS’s Special Competition for Young Investigators is accepting proposals from June 15th-September 15th. The competition is meant to enable younger scholars to field large-scale studies and is limited to graduate students and individuals who are no more than 3 years post-Ph.D. More information about the Special Competition and proposal requirements is available at 

For the current grant, the principal investigators of TESS are Jeremy Freese and James Druckman of Northwestern University, who are assisted by a new team of over 65 Associate PIs and peer reviewers across the social sciences. More information about our APIs is available at

James Druckman and Jeremy Freese

Principal Investigators, TESS


"Yes we can--with more technology!" A more hopeful narrative on climate?

Andy Revkin (the Haile Gebrselassie of environmental science journalism) has posted a guest-post on his blog by Peter B. Kelemen, the Arthur D. Storke Professor and vice chair in the Department of Earth and Environmental Sciences at Columbia University.

The essay combines two themes, basically.

One is the "greatest-thing-to-fear-is-fear-itself" claim: apocalyptic warnings are paralyzing and hence counterproductive; what's needed to motivate people is "hope."

That point isn't developed that much in the essay but is a familiar one in risk communication literature -- and is often part of the goldilocks dialectic that prescribes "use of emotionally compelling images" but "avoidance of excessive reliance on emotional images" (I've railed against goldilocks many times; it is a pseudoscience story-telling alternative to the real science of science communication).

But the other theme, which is the predominant focus and which strikes me as really engaging and intriguing, is that in fact "apocalypse" is exceedingly unlikely given the technological resourcefulness of human beings.

We should try to identify the human behaviors that generate adverse climate impacts and modify them with feasible technological alternatives that themselves avoid economic and like hardships, Kelemen argues. Plus, to the extent that we decide to continue engaging in behavior that has adverse impacts, we should anticipate that we will also figure out technological means of offsetting or dealing with those impacts.

Kelemen focuses on carbon capture, gas-fired power plants, etc.

The policy/science issues here are interesting and certainly bear discussion.

But what captures my interest, of course, is the "science communication" significance of the "yes we can--with more technology" theme.  Here are a couple of points about it:

1. This theme is indeed likely to be effective in promoting constructive engagement with the best evidence on climate change.  The reason isn't that it is "hopeful" per se but that it avoids antagonistic meanings that trigger reflexive closed-mindedness on the part of individuals--a large segment of the population, in fact-- who attach high cultural value to human beings' technological resourcefulness and resilience.

from Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

CCP has done two studies on how making technological responses to climate change -- such as greater reliance on nuclear power and exploration of geoengineering -- more salient helps to neutralize dismissive engagement with and thus reduce polarization over climate science.

These studies, by the way, are not about how to make people believe particular propositions or support particular policies (I don't regard that as "science communication" at all, frankly).  The outcome measures involve how reflectively and open-mindedly subjects assess scientific evidence.

2. Nevertheless, the "yes we can--with technology" theme is also likely to generate a push-back effect. The fact is that "apocalyptic" messaging doesn't breed either skepticism or disengagement with that segment of the population that holds egalitarian and communitarian values. On the contrary, it engages and stimulates them, precisely because (as Douglas & Wildavsky argue) it is suffused with cultural meanings that fit the moral resentment of markets, commerce, and industry.

For exactly this reason, individuals with these cultural dispositions predictably experience a certain measure of dissonance when technological "fixes" for climate impacts are proposed: "yes we can--with technology" implies that the solution to the harms associated with too much commerce, too great a commitment to markets, too much industrialization etc is not "game over" but rather "more of the same."  

Geoengineering and the like are "liposuction" when what we need is to go on a "diet."

How do these dynamics play out?

Well, of course, the answer is, I'm not really sure. 

But my conjecture is that the positive contribution that the "yes we can--with technology" narrative can make to promoting engagement with climate science will offset any push-back effect. Most egalitarian communitarians are already plenty engaged with the issue of climate and are unlikely to "tune out" if technological responses other than carbon limits become an important part of the conversation.  There will be many commentators who conspicuously flail against this narrative, but their reactions are not a good indicator of how the "egalitarian communitarian" rank and file are likely to react. Indeed, pushing back too hard, in a breathless, panicked way, will likely make such commentators appear weirdly zealous and thus undermine their credibility with the largely nonpartisan mass of citizens who are culturally disposed to take climate change seriously.

Or maybe not. As I said, this is a conjecture, a hypothesis.  The right way to figure the question out isn't to tell stories but rather to collect evidence that can help furnish an answer.


How many times do I have to explain?! "Facts" aren't enough, but that doesn't mean anyone is "lying"!

Receiving email like this is always extremely gratifying, of course, because it confirms for me that our "cultural cognition" research is indeed connecting with a large number of culturally diverse people. At the same time, it is frustrating to see how these readers fundamentally misunderstand our studies. I guess when you are so deeply caught up in a culturally contested question like this one, it is just really hard to get that screaming "the facts! the facts! Stop lying!!!" isn't going to promote constructive public engagement with the best available scientific evidence.



More US science literacy data -- from Pew (an organization that definitely knows how to study US science attitudes)

The Pew Research Center has a new report out on US science attitudes & science knowledge.  I haven't read it yet but look forward to doing so--when I get through a crunch of 4,239 other things--because Pew does great surveys generally & super great public opinion work on the US public & science, a matter I've discussed before.

Maybe in the meantime one of the billions of generous, public-spirited, and insatiably curious (and opinionated!) readers of this blog will read carefully & report on contents for us.

Another thing I plan to get to, moreover, is the absurd "US is science illiterate/anti-science compared to 'rest of developed world'" meme.  Patently false.  Really interesting to try to figure out the source of the intense motivation to say and believe this...


"The qualified immunity bar is not set that low..."

Despite appearances, Scott v. Harris does not stand for the proposition that "reasonable" jurors are constrained to "see" any fleeing driver as a lethal risk against whom the police can necessarily apply deadly force.

Or so concludes a very reasonable jurist.



Still more Q & A on "cultural cognition scales" -- and on the theory behind & implications of them

I was starting to formulate a contribution to some of the great points made in discussion of the post on Q&A on "cultural cognition" scales & figured I might as well post the response. I encourage others to read the comments--you'll definitely learn more from them than from what I'm saying here, but maybe a marginal bit more still if you read my contribution in addition to those reflections. And almost certainly more still if others are moved by what I have to say here to refine and extend the arguments that were being presented there.  Likely too it would make sense for the discussion to continue in comments to this post, if there is interest in continuing.

1. Whence predispositions, and the revision of them

How does this theory then explain the change from one group identity to another? You don't argue that such change doesn't occur, I see, since you say that there's "no reason why individuals can't shift & change w/ respect to them" -- but why isn't there such a reason, since you've given a good phenomenological description of the group pressures brought to bear on individuals to keep them in the herd, so to speak?

I don't really know how people form or why they change the sorts of affinity-group commitments that will result in sorts of dispositions we can measure w/ the cultural worldview scales.  My guess is that the answer is the same as one that one would give about why people form & change the sorts of orientations that are connected to religious identifications & ideological or political ones: social influences of various sorts, most importantly family & immediate community growing up; some possibility of realignment upon exposure at an impressionable period of life (more typically college age than adolescence or earlier) to new perspectives & new, compelling sources of affinity; thereafter usually nothing of interest, & lots of noise, but maybe some traumatic life experience etc.

Question I'd put back is: why is this important given what I am trying to do? I want to explain, predict, and formulate constructive prescriptions relating to conflict over science relevant to individual & collective decisionmaking. Knowing that the predispositions in question are important to that means it is important to be able to measure them.  But it doesn't mean, necessarily, that I need a good account of whence the predispositions, or of change -- so long as I can be confident (as I am) that they are relatively stable across the population. 

I suppose someone could say, "you should have a theory of the “whence & reformation of” predispositions b/c you might then be able to identify strategies for shaping them as a means of averting conflict/confusion over science" etc.  But I find that proposition (a) implausible (I think I know enough to know that regulating formation of such affinities is probably not genuinely feasible) & more importantly (to me) (b) a moral/political nonstarter: in a liberal society, it is not appropriate to make formation of people's values & self-defining affinities a conscious object of govt action.  On the contrary, it is one of the major aims of the "political science of democracy" (in Tocqueville's sense) to figure out how to make it possible for a community of diverse citizens to realize their common interest in knowing what's known without interfering with their diversity.

2. On change in how groups with particular predispositions engage or assess risks

And a related question would be: how do the group perceptions of risk themselves change over time? Ruling out mystical or telepathic bonds between group members, how does a change get started, who starts it, and how or where do those starters derive their perception of risk? (Consider, e.g., nuclear power.)

There is an account of this in "the theory." 

The "cultural cognition thesis" says that "culture is prior" -- cognitively speaking --" to facts."  That is, individuals can be expected to engage information in a manner that conforms understanding of facts to conclusions the cultural meanings of which are affirming to their cultural identities. 

So when a putative risk source -- say, climate change or guns or HPV or nuclear power or cigarettes -- becomes infused with antagonistic meanings, “pouring more information” on the conflagration won’t stanch it; it will likely only inflame it.

Instead, one must do something that alters the meanings, so that positions are no longer seen as uniquely tied to cultural identities.  At that point, people will not face the same psychic pressure that can induce them (all the more so when they are disposed to engage in analytical, reflective engagement with information!) to reject scientific evidence on any position in a closed-minded fashion.

Will groups change their minds, then? Likely someone will; or really, likely there will be convergence among persons with diverse views, since like all members of a liberal market society they share faculties for reliably recognizing the best available scientific evidence, and at that point those faculties no longer will be distorted or disabled by the sort of noise or pollution created by antagonistic cultural meanings.

Examples? For ones in the world, consider discussions (of cigarettes, of abortion in France, of air pollution in US, etc.) in these papers:

The Cognitively Illiberal State, 60 Stan. L. Rev. 115 (2007)

Fear of Democracy: A Cultural Evaluation of Sunstein on Risk, 119 Harv. L. Rev.1071 (2006) (with Paul Slovic, John Gastil & Donald Braman)

Cultural Cognition and Public Policy, 24 Yale L. & Pol'y Rev. 149 (2006) (with Donald Braman)

For an experimental “model” of this process, see our paper on geoengineering & the “two-channel” science communication strategy:

Geoengineering and the Science Communication Environment: a Cross-Cultural Experiment

And for more still on how knowing why there is cultural conflict can help to fashion strategies that dispel sources of conflict & enable convergence, see

Is cultural cognition a bummer? Part 1

3.  What about the “objective reality of risk” as opposed to the cultural cognition of it?

These questions themselves derive from a sense I have that the group-identity theory of risk perception is not wrong but incomplete, and the area in which it's incomplete is of major importance in addressing any theory of communication to do with risk -- that area is the objective reality of risk, as determined not by group adherence, and not by authority (even the authority of a science establishment), but rather by evidence and reason.

To start, of course the theory is “incomplete”; anyone who thinks that any theory ever is “complete” misunderstands science’s way of knowing! Also misunderstands something much more mundane—the limited ambition of what the ‘cultural cognition’ framework aspires to, which is a more edifying and empowering understanding of the “science communication problem,” which I think one can have w/o having much to say about many things of importance.

But the “theory” as it is does have a position, or at least an attitude, about the “reality” of the knowledge, confusion over which is the focus of the “science communication problem.”  The essence of the attitude comes down to this:

a. Science’s way of knowing—which treats as entitled to assent (and even that only provisionally) conclusions based on valid inference from valid empirical observation—is the only valid way to know the sorts of things that admit of this form of inquiry. (The idea that things that don’t admit of this form of inquiry can’t be addressed in a meaningful way at all is an entirely different claim and certainly not anything that is necessary for treating science’s way of knowing as authoritative within the domain of the empirically observable; personally, I find the claim annoyingly scholastic, and the people who make it simply annoying.)

b. People, individually & collectively, will be better off if they rely on the best available scientific evidence to guide decisions that depend on empirical assumptions or premises relating to how the world (including the social world) works.

c. In the US & other liberal democratic market societies—the imperfect instantiations of the Liberal Republic of Science as a political regime—people of all cultural outlooks in fact accept that science’s way of knowing is authoritative in this sense & also very much want to be guided by it in the way just specified.

d. Those who accept the authority of science & who want to be guided by it will necessarily have to accept as known by science much much more than they could ever hope to comprehend in a meaningful sense themselves. Thus their prospects for achieving their ends in these regards depends on their forming a reliable ability to recognize what’s known to science.  The citizens of the Liberal Republic of Science have indeed developed this faculty (and it is one that is very much a faculty that consists in the exercise of reason; it is an indispensable element of “rationality” to be able reliably to recognize who knows what about what).

e. The process of cultural cognition, far from being a bias, is part of the recognition faculty that diverse individuals use reliably to recognize what is known by science.

f. The “science communication problem” is a consequence of conditions that disable the reliable exercise of this faculty.  Those conditions involve the entanglement of empirical propositions with antagonistic cultural meanings – a state that interferes with the normal convergence of the members of culturally diverse citizens of the Liberal Republic of Science on what is known to science.


"Another country heard from": a German take on cultural cognition

Anyone care to translate? (I did study German in college, but I've retained only tourist-essential phrases such as "Halt! Sie sind verhaftet!" ["Stop! You're under arrest!"], "Hände hoch oder ich schieße!" ["Hands up or I'll shoot!"], etc.)

Also, is the idiom "another country heard from" still in common usage? Probably something people say only when they mean it to remark that someone who really is from another country is saying something -- & of course that's not really the occasion for it (& I certainly don't mean to be expressing the attitude here that my grandmother did when she would say it about some intervention of mine into a dinner table debate!).



Still more on the political sensitivity of model recalibration

Larry placed this in the comment thread for the last post on this particular topic (a few back), but I am "upgrading" it so that it doesn't get overlooked & so debate/discussion can continue if there's interest. In response to the last line of Larry's report -- a bet on the river, essentially -- I check-raise with an older post from Revkin!

Larry says:

Late, but still pertinent, here's Judith Curry's own scholarly rejoinder, including Mann/Nucitelli, the Economist, and a variety of other papers on both sides of the climate sensitivity issue -- her synthesis:

Mann and Nuccitelli state:

"When the collective information from all of these independent sources of information is combined, climate scientists indeed find evidence for a climate sensitivity that is very close to the canonical 3°C estimate. That estimate still remains the scientific consensus, and current generation climate models — which tend to cluster in their climate sensitivity values around this estimate — remain our best tools for projecting future climate change and its potential impacts."

The Economist article stated:

"If climate scientists were credit-rating agencies, climate sensitivity would be on negative watch. But it would not yet be downgraded."

The combination of the articles by Schlesinger, Lewis, and Masters (not mentioned in the Economist article) add substantial weight to the negative watch.

In support of estimates on the high end, we have the Fasullo and Trenberth paper, which in my mind is refuted by the combination of the Olson et al., Tung and Zhou, and Klocke et al. papers. If a climate model under represents the multidecadal modes of climate variability yet agrees well with observations during a period of warming, then it is to be inferred that the climate model sensitivity is too high.

That leaves Jim Hansen’s as yet unpublished paper among the recent research that provides support for sensitivity on the high end.

On the RealClimate thread, Gavin made the following statement:

"In the meantime, the ‘meta-uncertainty’ across the methods remains stubbornly high with support for both relatively low numbers around 2ºC and higher ones around 4ºC, so that is likely to remain the consensus range."

In weighing the new evidence, especially improvements in the methodology of sensitivity analysis, it is becoming increasing difficult not to downgrade the estimates of climate sensitivity.

And finally, it is a major coup for the freelance/citizen climate scientist movement to see Nic Lewis and Troy Masters publish influential papers on this topic in leading journals.

Should indicate, if nothing else, that debate over this significant point continues, and that climate ideologues committed to heightening alarm in order to achieve political (and these days often financial) ends indeed have cause for concern.


Oh yeah? Well, consider what the sagacious science writer Andy Revkin says. I think he is seeing more clearly than the climate-policy activists who seem to view the debate featured in the Economist article as putting them in a bad spot. He concludes that if sensitivity is recalibrated to reflect over-estimation, the message is simply, "hey, there's more time to try to work this problem out ... phew!" So my sense of puzzlement continues.


Some Q & A on the "cultural cognition scales"

Below is part of an email exchange that I thought might be of interest to others:

Q.  How do you conceptualize the attitudes being assessed by the cultural cognition scales?  Do you think of them as inherent personality dispositions that color an individual's opinions across all sorts of issues?  Do people hold different orientations depending on the issue?  Also, are they changeable over time, and if so, what sources of influence do you think are most relevant?

My answers:

a.  The items that the scales comprise are indicators of some latent disposition that generates individual differences in perceptions of risk and related facts. The theory I see "cultural cognition" as testing is that individuals form perceptions of risk & related facts in a  manner that protects the status of and their standing in groups important to their well-being, materially & psychologically. This makes cultural cognition a species of "identity protective" cognition, a phenomenon one can observe w/ respect to all manner of group identities.  If "identity protective cognition" is what creates variance in -- and conflict over-- risks and related facts that admit of scientific examination, then one would like to have some way to specify what the operative group identities are & have some observable measure of them (since the identities themselves *can't* be observed, are "latent" in that sense).  The "group-grid" framework as we conceive of it specifies the nature of the groups & thus supplies the constructs that we try to measure w/ the scales.  Presumably, too, there are lots of other potential indicators, including demographic characteristics, behaviors, other attitudes, etc.  The scales we use are tractable & robust & so we are satisfied w/ them.  

b.  The identities they measure are *dispositional*-- not "situational"; so, they reside in people & are constant across contexts. Relatively stable too across time, although there's no reason why individuals can't shift & change w/ respect to them -- it's aggregate patterns of perceptions among individuals that we are trying to measure, so the history of particular individuals isn't so important so long as it's not the case that all individuals are always in flux (in which case we'd not be explaining the phenomenon that we *see* in the world, which involves identifiable groups of people, not a kaleidoscopic blur of conflict among groups whose members are constantly changing, much less changing as those individuals move from place to place!).  

c.  The dispositions necessarily exist independently of the risk or fact perceptions they are explaining--else they would not be explanations of them at all but rather part of what we are trying to explain.  Compare a hypothetical approach that simply categorized people as "the low perception of risk x group," "the medium perception of risk x group," and the "high perception of risk x group"; that would not be useful, at least for what we want to do--viz., explain why people who have different group identities disagree about risk!  Accordingly, there has to be some historically exogenous event that creates the connection (in our theory, something that invests particular risk or fact perceptions with meanings that link them to group identities).  This means, too, that *not all* risk perceptions (or related beliefs) will vary in manners that correspond to these identities, since not all putative risk sources will have become invested with meanings that make positions on them markers of identity in this sense. 

d.   Also  the groups are in fact models! They are representations of things that are no doubt much more complicated & varied in reality. They help to make unobservable, complex things tractable so that it becomes possible to explain, predict, and form prescriptions (or at least possible to go about the task of trying to do so through the use of valid empirical means of investigation).  Their utility will be specific, moreover, to the task of explaining, predicting & forming prescriptions to some specified set of risk perceptions.  They might not have as much utility as some other "model" of what the motivating dispositions are if one is investigating something else, or something more particular.  E.g., perceptions of synthetic biology risks, or dispositions relevant to how people might understand issues relating to climate adaptation in Fla, or "who watches science documentaries & why."

e.   Beyond that, I find the task of characterizing the thing we are measuring--are they "traits," "values," "dispositions"? etc.--as scholastic & aimless, although I know this question matters to some scholars in some perfectly interesting conversation.  If someone explains to me why it matters for the conversation I am in to be able to characterize the dispositions in one of these ways rather than another, I will be motivated to figure out the answer (indeed, without a "why" I don't know "what" I am supposed to be figuring out).

Some relevant things:

Kahan, D. M. (2012). Cultural Cognition as a Conception of the Cultural Theory of Risk. In R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson (Eds.), Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (pp. 725-760): Springer London, Limited. 

Kahan, D. M. (2011). The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law. Harv. L. Rev., 126, 1-77, pp. 19-24. 

Who *are* these guys? Cultural cognition profiling, part 1

Who *are* these guys? Cultural cognition profiling, part 2

Cultural vs. ideological cognition, part 3

Cultural vs. ideological cognition, part 2

Cultural vs. ideological cognition, part 1

Politically nonpartisan folks are culturally polarized on climate change

What generalizes & what doesn't? Cross-cultural cultural cognition part 1

"Tragedy of the Science-Communication Commons" (lecture summary, slides)


What should science communicators communicate about sea level rise?

The answer is how utterly normal it is for all sorts of people in every walk of life to be concerned about it and to be engaged in the project of identifying and implementing sensible policies to protect themselves and their communities from adverse impacts relating to it.

That was the msg I tried to communicate (constrained by the disability I necessarily endure, and impeded by the misunderstandings I inevitably and comically provoke, on account of my being someone who only studies rather than does science communication) in my presentation at a great conference on sea level rise at University of California, Santa Barbara. Slides here.

There were lots of great talks by scientists & science communicators. Indeed, on my panel was the amazing science documentary producer Paula Apsell, who gave a great talk on how NOVA has covered climate change science over time.

As for my talk & my “communicate normality” msg, let me explain how I set this point up.

I told the audience that I wanted to address “communicating sea level rise” as an instance of the “science communication problem” (SCP). SCP refers to the failure of widely available, valid scientific evidence to quiet political conflict over issues of risk and other related facts to which that evidence directly speaks. Climate change is a conspicuous instance of SCP but isn’t alone: there’s nuclear power, e.g., the HPV vaccine, GM foods in Europe (maybe but hopefully not someday in US), gun control, etc. Making sense of and trying to overcome SCP is the aim of the “science of science communication,” which uses empirical methods to try to understand the processes by which what’s known to science is made known to those whose decisions it can helpfully inform.

The science of science communication, I stated, suggests that the source of SCP isn’t a deficit in public rationality. That’s the usual explanation for it, of course. But using the data from CCP’s Nature Climate Change study to illustrate, I explained that empirical study doesn’t support the proposition that political conflict over climate change or other societal risks is due to deficiencies in the public’s comprehension of science or its over-reliance on heuristic-driven forms of information processing.

What empirical study suggests is the (or at least one hugely important) source of SCP is identity-protective cognition, the species of motivated reasoning that involves forming perceptions of fact that express and reinforce one’s connection to important affinity groups. The study of cultural cognition identifies the psychological mechanisms through which this process operates among groups of people who share the “cultural worldviews” described by Mary Douglas’s group-grid scheme. I reviewed studies—including Goebbert et al.’s study of culturally polarized recollections of recent weather—to illustrate this point, and explained too that this effect, far from being dissipated, is magnified by higher levels of science literacy and numeracy.

Basically, culturally diverse people react to evidence of climate change in much the way that fans of opposing sports teams do to disputed officiating calls.

Except they don’t, or don’t necessarily, when they are engaged in deliberations on adaptation. I noted (as I have previously in this blog) the large number of states that are either divided on or hostile about claims of human-caused global warming that are nonetheless hotbeds of collective activity focused on counteracting the adverse impacts of climate change, including sea level rise.

Coastal states like Florida, Louisiana, Virginia, and the Carolinas, as well as arid western ones like Arizona, Nevada, California, and New Mexico have all had “climate problems” for as long as human beings have been living in them. Dealing with such problems in resourceful, resilient, and stunningly successful ways is what the residents of those states do all the time.

As a result, citizens who engage national “climate change” policy as members of opposing cultural groups naturally envision themselves as members of the same team when it comes to local adaptation.  

I focused primarily on Florida, because that is the state with whose adaptation activities I have become most familiar, as a result of my participation in ongoing field studies.

Consistent with Florida's Community Planning Act enacted in 2011, state municipal planners—in consultation with local property owners, agricultural producers, the tourism industry, and other local stakeholders—have devised a set of viable options, based on the best available scientific evidence, for offsetting the challenges that continuing sea level rise poses to the state.

All they are doing, though, is what they always have done and are expected to do by their constituents.  It’s the job of municipal planners in that state —one that they carry out with an awe-inspiring degree of expertise, including scientific acumen of the highest caliber--to make what’s known to science known to ordinary Floridians so that Floridians can use that knowledge to enjoy a way of life that has always required them to act wisely in the face of significant environmental challenges.

All the same, the success of these municipal officials is threatened by an incipient science communication problem of tremendous potential difficulty.

Effective collective action inevitably involves identifying and enforcing some set of reciprocal obligations in order to maximize the opportunity for dynamic, thriving, self-sustaining, and mutually enriching forms of interaction among free individuals. Some individuals will naturally oppose whatever particular obligations are agreed to, either because they expect to realize personal benefits from perpetuation of conditions inimical to maximizing the opportunities for profitable interactions among free individuals, or because they prefer some other regime of reciprocal obligation intended to do the same. This is normal, too, in democratic politics within liberal market societies.

But in states like Florida, those actors will have recourse to a potent—indeed, toxic—rhetorical weapon: the antagonistic meanings that pervade the national debate over climate change. If they don’t like any of the particular options that fit the best available evidence on sea level rise, or don’t like the particular ones that they suspect a majority of their fellow citizens might, they can be expected to try to stigmatize the municipal and various private groups engaged in adaptation planning by falsely characterizing them and their ideas in terms that bind them to only one of the partisan cultural styles that is now (sadly and pointlessly, as a result of misadventure, strategic behavior, and ineptitude) associated with engagement with climate change science in national politics.  Doing so, moreover, will predictably reproduce in local adaptation decisionmaking the motivated reasoning pathology—the “us-them” dynamic in which people react to scientific evidence like Red Sox and Yankees fans disputing an umpire’s called third strike—that now enfeebles national deliberations.

This is happening in Florida. I shared with the participants in the conference select bits and pieces of this spectacle, including the insidious “astroturf” strategy that involves transporting large groups of very not normal Floridians from one public meeting to another to voice their opposition to adaptation planning, which they describe as part of a "United Nations" sponsored "global warming agenda," the secret aim of which is to impose a "One-World, global, Socialist" order run by the "so-called Intelligentsia" etc. As divorced as their weird charges are from the reality of what’s going on, they have managed to harness enough of the culturally divisive energy associated with climate change to splinter municipal partnerships in some parts of the state, and stall stakeholder proceedings in others.

Let me be clear here too. There are plenty of serious, intelligent, public-spirited people arguing over the strength and implications of evidence on climate change, not to mention what responses make sense in light of that evidence. You won’t find them within 1,000 intellectual or moral miles of these groups.

Preventing the contamination of the science communication environment by those trying to pollute it with cultural division--that's the science communication problem that is of greatest danger to those engaged in promoting constructive democratic engagement with sea level rise. 

The Florida planners are actually really really good at communicating the content of the science.  They also don’t really need help communicating the stakes, either; there’s no need to flood Florida with images of hurricane-flattened houses,  decimated harbor fronts, and water-submerged automobiles, since everyone has seen all of that first hand!

What the success of the planners’ science communication will depend on, though, is their ability to make sure that ordinary people in Florida aren’t prevented from seeing what the ongoing adaptation stakeholder proceedings truly are: a continuation of the same ordinary historical project of making Florida a captivating, beautiful place to live and experience, and hence a site for profitable human flourishing, notwithstanding the adversity that its climate poses—has always posed, and has always been negotiated successfully through creative and cooperative forms of collective action by Floridians of all sorts.

They need to see, in other words, that responding to the challenge of sea level rise is indeed perfectly normal.

They need to see—and hence be reassured by the sight of—their local representatives, their neighbors, their business leaders, their farmers, and even their utility companies and insurers all working together.  Not because they all agree about what’s to be done—why in the world would they?! reasoning, free, self-governing people will always have a plurality of values, and interests, and expectations, and hence a plurality of opinions about what should be done! reconciling and balancing those is what democracy is all about!—but because they accept the premise that it is in fact necessary to do things about the myriad hazards that rising sea levels pose (and always have; everyone knows the sea level has been rising in Florida and elsewhere for as long as anyone has lived there) if one wants to live and live well in Florida.

What they most need to see, then, is not more wrecked property or more time-series graphs, but more examples of people like them—in all of their diversity—working together to figure out how to avert harms they are all perfectly familiar with.  There is a need, moreover, to ramp up the signal of the utter banality of what’s going on there because in fact there is a sad but not surprising risk otherwise that the noise of cultural polarization that has defeated reason (among citizens of all cultural styles, on climate change and myriad other contested issues) will disrupt and demean their common project to live as they always have.

I don’t do science communication, but I do study it. And while part of studying it scientifically means always treating what one knows as provisional and as subject to revision in light of new evidence, what I believe the best evidence from science communication tells us is that the normality of dealing with sea level rise and other climate impacts is the most important thing that needs to be communicated to members of the public in order to assure that they engage constructively with the best available evidence on climate science.

So go to Florida. Go to Virginia, to North and South Carolina, to Louisiana. Go to Arizona. Go to Colorado, to Nevada, New Mexico, and California. Go to New York, Connecticut and New Jersey.

And bring your cameras and your pens (keyboards!) so you can tell the story—the true story—in vivid, compelling terms (I don’t do science communication!) of ordinary people doing something completely ordinary and at the same time completely astonishing and awe-inspiring.

I’ll come too. I'll keep my mouth shut (seriously!) and try to help you collect & interpret the evidence that you should be collecting to help you make the most successful use of your craft skills as communicators in carrying out this enormously important mission.








A scholarly rejoinder to the Economist article 

Dana Nuccitelli & Michael Mann have posted a response to the Economist story on climate scientists' assessment of the performance of surface-temperature models. I found it very interesting and educational -- and also heartening.

The response is critical. N&M think the studies the Economist article reports on, and the article's own characterization of the state of the scientific debate, are wrong.

But from start to end, N&M engage the Economist article's sources--studies by climate scientists engaged in assessing the performance of forecasting models over the last decade--in a scholarly way focused on facts and evidence.  Indeed, one of the articles that N&M rely on--a paper in Journal of Geophysical Research suggesting that surface temperatures may have been moderated by greater deep-ocean absorption of heat--was featured prominently in the Economist article, which also reported on the theory that volcanic eruptions might have contributed, another point N&M make.

This is all in the nature of classic "conjecture & refutation"--the signature form of intellectual exchange in science, in which knowledge is advanced by spirited interrogation of alternative evidence-grounded inferences. It's a testament to the skill of the Economist author as a science journalist (whether or not the 2500-word story "got it right" in every detail or matter of emphasis) that in the course of describing such an exchange among scientists he or she ended up creating a modest example of the same, and thus a testament, too, to the skill & public spirit of N&M that they responded as they did, enabling curious and reflective citizens to form an understanding of a complex scientific issue.

Estimating  the impact of the Economist article on the "science communication environment"  is open to a degree of uncertainty even larger than that surrounding the impact of CO2 emissions on global surface temperatures. 

But my own "model" (one that is constantly & w/o embarrassment being calibrated on the basis of any discrepancy between prediction & observation) forecasts a better, less toxic reaction when thoughtful critics respond with earnest, empirics-grounded counterpoints (as here) rather than with charged, culturally evocative denunciations.

The former approach genuinely enlightens the small fraction of the population actually trying to understand the issues (who of course will w/ curiosity and an open mind read & consider responses offered in the same spirit). The latter doesn't; it only adds to the already abundant stock of antagonistic cultural resonances that polarize the remainder of the population, which is tuned in only to the "us-them" signal being transmitted by  the climate change debate.

Amplifying that signal is the one clear mistake for any communicator who wants to promote constructive engagement with climate science. 


Is ideologically motivated reasoning rational? And do only conservatives engage in it?

These were questions that I posed in a workshop I gave last Thurs. at Duke University in the political science department. I’ll give my (provisional, as always!) answers after "briefly" outlining the presentation (as I remember it at least). Slides here.

1. What is ideologically motivated reasoning?

It’s useful to start with a simple Bayesian model of information processing—not b/c it is necessarily either descriptively accurate (I’m sure it isn’t!) or normatively desirable (actually, I don’t get why it wouldn’t be, but seriously, I don’t want to get into that!) but b/c it supplies a heuristic benchmark in relation to which we can identify what is distinctive about any asserted cognitive dynamic.

Consider “confirmation bias” (CB).  In a simple Bayesian model, when an individual is exposed to new information or evidence relating to some factual proposition (say, that global warming is occurring; or that allowing concealed possession of firearms decreases violent crime), she revises (“updates”) her prior estimation of the probability of that proposition in proportion to how much more consistent the new information is with that proposition being true than with it being false (“the likelihood ratio” of the new evidence). Her reasoning displays CB when instead of revising her prior estimate based on the weight of the evidence so understood, she selectively searches out and assigns weight to the evidence based on its consistency with her prior estimation. (In that case, the “likelihood ratio” is endogenous to her “priors.”)  If she does this, she’ll get stuck on an inaccurate estimation of the probability of the proposition despite being exposed to evidence that the estimate is wrong.
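To make the contrast concrete, here is a minimal numerical sketch of my own (the functions and numbers are invented for exposition, not drawn from any study): a Bayesian updates in odds form, while a reasoner with confirmation bias assigns a likelihood ratio that is endogenous to her prior, shrinking the weight of uncongenial evidence.

```python
# A minimal sketch (invented numbers) contrasting Bayesian updating with
# confirmation bias. In odds form: posterior_odds = prior_odds * likelihood_ratio.

def update(prior, likelihood_ratio):
    """One Bayesian update of P(proposition) given evidence with the stated LR."""
    posterior_odds = prior / (1 - prior) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

def biased_lr(true_lr, prior, bias=0.8):
    """Confirmation bias: the weight (LR) given to evidence is endogenous to
    the prior -- evidence that cuts against the prior is shrunk toward LR = 1."""
    uncongenial = (prior > 0.5) == (true_lr < 1)
    return true_lr ** (1 - bias) if uncongenial else true_lr

# A skeptic (prior = 0.2) sees three pieces of evidence, each favoring the
# proposition 3:1 (LR = 3).
p_bayes = p_biased = 0.2
for _ in range(3):
    p_bayes = update(p_bayes, 3.0)
    p_biased = update(p_biased, biased_lr(3.0, p_biased))

print(round(p_bayes, 3))   # the Bayesian moves substantially toward belief
print(round(p_biased, 3))  # the biased reasoner stays stuck near her prior
```

On these invented numbers the Bayesian ends up well above 0.8 while the biased reasoner remains near her starting point: "stuck," despite identical evidence.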

Motivated reasoning (MR) (at least as I prefer to think of it) refers to a tendency to engage information in a manner that promotes some goal or interest extrinsic to forming accurate beliefs. Thus, one searches out and selectively credits evidence based on the congeniality of it to that extrinsic goal or interest. Relative to the Bayesian model, then, we can see that goal or interest—rather than criteria related to accuracy of belief—as determining the “weight” (or likelihood ratio) to be assigned to new evidence related to some proposition.

MR might often look like CB. Individuals displaying MR will tend to form beliefs congenial to the extrinsic or motivating goal in question, and thereafter selectively seek out and credit information consistent with that goal. Because the motivating goal is determining both their priors and their information processing, it will appear as if they are assigning weight to information based on its consistency with their priors. But the relationship is in fact spurious (priors and likelihood ratio are not genuinely endogenous to one another).

“Ideologically motivated reasoning” (IMR), then, is simply MR in which some ideological disposition (say, “conservatism” or “liberalism”) supplies the motivating goal or interest extrinsic to formation of accurate beliefs.  Relative to a Bayesian model, then, individuals will search out information and selectively credit it conditional on its congeniality to their ideological dispositions. They will appear to be engaged in “confirmation bias” in favor of their ideological commitments. They will be divided on various factual propositions—because their motivating dispositions, their ideologies, are heterogeneous. And they will resist updating beliefs despite the availability of accurate information that ought to result in the convergence of their respective beliefs. 
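A toy simulation (again my own sketch, with invented parameters) shows how this produces persistent polarization: two agents with opposed motivating dispositions process an identical, perfectly balanced evidence stream and end up far apart.

```python
# Motivated reasoning sketch (my own toy model): the LR assigned to evidence is
# set by an extrinsic goal -- here an ideological disposition -- rather than by
# the evidence's true probative weight.

def update(prior, likelihood_ratio):
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

def motivated_lr(true_lr, disposition, strength=0.7):
    """disposition = +1 wants to believe the proposition, -1 wants to disbelieve.
    Congenial evidence is credited at face value; uncongenial evidence is
    discounted toward LR = 1 (i.e., largely ignored)."""
    congenial = (true_lr > 1) == (disposition > 0)
    return true_lr if congenial else true_lr ** (1 - strength)

# Identical, perfectly balanced evidence stream: alternating pieces favoring
# (LR = 2) and disfavoring (LR = 1/2) the proposition.
evidence = [2.0, 0.5] * 5

left = right = 0.5  # both agents start undecided
for lr in evidence:
    left = update(left, motivated_lr(lr, +1))
    right = update(right, motivated_lr(lr, -1))

print(round(left, 3), round(right, 3))  # polarization on balanced evidence
```

Note the design choice: the likelihood ratio is a function of the disposition, not of the prior, which is exactly what distinguishes MR from CB in the passage above; the agents polarize even though they start with identical priors and see identical evidence.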

In other words, they will be persistently polarized on the status of policy relevant facts.

 2. What is the cultural cognition of risk?

The cultural cognition of risk (CCR) is a form of motivated reasoning. It posits that individuals hold diverse predispositions with respect to risks and like facts.  Those predispositions—which can be characterized with reference to Mary Douglas’s “group grid” framework—motivate them to seek out and selectively credit information consistently with those predispositions. Thus, despite the availability of compelling scientific information, they end up in a state of persistent cultural polarization with respect to those facts.

The study of CCR is dedicated primarily to identifying the discrete psychological mechanisms through which this form of MR operates. These include “culturally biased information search and assimilation”; “the cultural credibility heuristic”; “cultural identity affirmation”; and the “cultural availability heuristic.”

These mechanisms do not result in confirmation bias per se.  CCR, as a species of MR, describes the influences that connect information processing to an extrinsic motivating goal or interest. Often—maybe usually even—those influences will conform information processing to inferences consistent with a person’s priors, which will also reflect his or her motivating cultural predisposition. But CCR makes it possible to understand how individuals might be motivated to assess information about risk in a directionally biased fashion even when they have no meaningful priors (b/c, say, the risk in question is a novel one, like nanotechnology) or in a manner contrary to their priors (b/c, say, the information, while contrary to an existing risk perception, is presented in an identity-affirming manner).

Recent research has focused on whether CCR is a form of heuristic-driven or “system 1” reasoning. The CCP Nature Climate Change study suggests that the answer is no. The measures of science comprehension in that study are associated with use of systematic or analytic “system 2” information processing. And the study found that as science comprehension increases, so does cultural polarization.

This conclusion supports what I call the “expressive rationality thesis.” The expressive rationality thesis holds that CCR is rational at the individual level.

CCR is not necessarily conducive to formation of accurate beliefs under conditions in which opposing cultural groups are polarized.  But the “cost,” in effect, of persisting in a factually inaccurate view is zero; because an ordinary individual’s behavior—as, say, a consumer or voter or participant in public debate—has too small an impact to make a difference on climate change policy (let’s say), no action she takes on the basis of a mistaken belief about the facts will increase the risk she or anyone else she cares about faces.

The cost of forming a culturally deviant view on such a matter, however, is likely to be “high.” When positions on risk and like facts become akin to badges of membership in and loyalty to important affinity groups, forming the wrong ones can drive a wedge between individuals and others on whom they depend for support—material, emotional, and otherwise.

It therefore makes sense—is rational—for them to attend to information in issues like that (issues needn’t be that way; shouldn’t be allowed to become that way—but that’s another matter) in a manner that reliably aligns their beliefs with the ones that dominate in their group. One doesn’t have to have a science Ph.D. to do this. But if one does have a higher capacity to make sense of technical information, one can be expected to use that capacity to assure an even tighter fit between beliefs and identity—hence the magnification of cultural polarization as science comprehension grows.
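The individual-level cost-benefit point in the last three paragraphs can be put as back-of-the-envelope arithmetic (all numbers are invented for illustration; nothing here comes from the studies discussed):

```python
# Back-of-the-envelope "expressive rationality" comparison (all numbers
# invented for illustration). An individual's belief about climate change has
# essentially no effect on policy outcomes, but holding a view deviant from
# one's affinity group carries a real social cost.

p_pivotal = 1e-7                # chance my vote/behavior decides policy (tiny)
harm_if_policy_wrong = 1_000.0  # personal harm if the wrong policy is adopted
social_cost_of_deviance = 50.0  # cost of holding the "wrong" view in my group

# Expected personal cost of holding a factually inaccurate belief:
expected_cost_inaccurate = p_pivotal * harm_if_policy_wrong

print(expected_cost_inaccurate)   # effectively zero
print(social_cost_of_deviance)    # dwarfs the accuracy stake

# Aligning beliefs with one's group is thus individually rational -- even
# though, aggregated across everyone, it is collectively irrational.
```

On any remotely realistic numbers the accuracy stake is vanishingly small next to the identity stake, which is the sense in which attending to identity rather than accuracy is "expressively rational."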

3. Ideology, motivated reasoning & cognitive reflection

The “Ideology, motivated reasoning & cognitive reflection” (IMRCR) experiment picks up at this point in the development of the project to understand CCR.  The Nature Climate Change study was observational (correlational), and while it identified patterns of risk perception more consistent with CCR than alternative theories (ones focusing on popular deficiencies in system 2 reasoning, in particular), the results were still compatible with dynamics other than “expressive rationality” as I’ve described it.  The IMRCR study uses experimental means to corroborate the “expressive rationality” interpretation of the Nature Climate Change study data.

It also does something else.  As we have been charting the mechanisms of CCR, other researchers and commentators have advanced an alternative IMR (ideologically motivated reasoning) position, which I’ve labeled the “asymmetry thesis.” The asymmetry thesis attributes polarization over climate change and other risks and facts that admit of scientific investigation to the distinctive vulnerability of conservatives to IMR. Some (like Chris Mooney) believe the CCR results are consistent with the asymmetry thesis; I think they are not, but then they really haven’t been aimed at testing it. 

The IMRCR study was designed to address that issue more directly, too. Indeed, I used ideology and party affiliation—political orientation—rather than cultural predisposition as the hypothesized motivating influence for information processing in the experiment to make the results as commensurable as possible with those featured in studies relied upon by proponents of the asymmetry thesis. In fact, I see political orientation variables as simply alternative indicators of the same motivating disposition that cultural predispositions measure; I think the latter are better, but for present purposes political orientation was sufficient (I can reproduce the data with cultural outlooks and get stronger results, in fact).

In the study, I find that political orientations exert a symmetrical impact on information processing. That is, “liberals” are as disposed as “conservatives” to assign weight to evidence based on the congeniality of crediting that evidence to their ideological predispositions (in other words, to assign a likelihood ratio to it that fits their goal to “express” their group commitments).

In addition, for both groups the effect is magnified by higher “cognitive reflection” scores.  This result is consistent with—and furnishes experimental corroboration of—the “expressive rationality” interpretation of the Nature Climate Change study.

4. So—“is ideologically motivated reasoning rational? And do only conservatives engage in it?”

The answer to the second question—only conservatives?—is, I think, “no!”

I didn’t expect a different answer before I did the IMRCR experiment. First, I regarded the designs and measures used in studies that were thought to support the “asymmetry thesis” as ill-suited for testing it. Second, to me the theory behind the “asymmetry thesis” didn’t make sense; the motivation that I think it is most plausible to see as generating polarization of the sort measured by CCR is protection of one’s membership and status within an important affinity group—and the sorts of groups to which that dynamic applies are not confined to political ones (people feel it, and react accordingly, with respect to their connections to sports teams and schools). So why expect only conservatives to experience IMR??

But the way to resolve such questions is to design valid studies, make observations, and draw valid inferences.  I tried to do that with the IMRCR study, and came away believing more strongly that IMR is symmetric across the ideological spectrum and CCR symmetric across cultural spectra.  Show me more evidence and (I hope) I will assign it the weight (likelihood ratio) it is due and revise my position accordingly.

The answer to the first question—is IMR rational?—is, “It depends!”  The result of the IMRCR study supported the “expressive rationality” hypothesis, which, in my mind, makes even less supportable than it was before the hypothesis that IMR is a consequence of heuristic-driven, bias-prone “system 1” reasoning.

But to say that IMR is “expressively rational” and therefore “rational” tout court is unsatisfying to me. For one thing, as emphasized in the Nature Climate Change paper and the IMRCR paper, even if it is individually rational for individuals to form their perceptions of a disputed risk issue in a way that protects their connection to their cultural or ideological affinity groups, it can be collectively disastrous for them to do that simultaneously, because in that circumstance democratically accountable actors will be less likely to converge on evidence relevant to the common interests of culturally diverse groups. We can say in this regard that what is expressively rational at the individual level is collectively irrational.  This makes CCR part of a collective action problem that demands an appropriate collective action solution.

In addition, I don’t think it is possible, in fact, to specify whether any form of cognition is “rational” without an account of whether it conduces to or frustrates the ends of the person who displays it.  A person might find MR that projects his or her identity as a sports fan, e.g., to be very welcome—and yet regard MR (or even the prospect that it might be influencing her) as totally unacceptable if she is to be a referee.  I think people would generally be disturbed if they understood that as jurors in a case like the one featured in They Saw a Protest they were perceiving facts relevant to other citizens’ free speech rights in a way that reflected IMR.

Maybe some people would find it unsatisfying to learn that CCR or IMR is influencing how they are forming their perceptions of facts on issues like climate change or gun control, too? I bet they would be very distressed to discover that their assessments of risk were being influenced by CCR if they were parents deciding whether the HPV vaccine is good or bad for the health of their daughter.

Chris Johnston's book The Ambivalent Partisan is very relevant in this respect. Chris and his co-authors purport to find a class of citizens who don’t display the form of IMR (or CCR, I presume) that I believe I am measuring in the IMRCR paper.  They see them as ideally virtuous citizens. It is hard to disagree.  And hence it is confusing for me to know what to think about the significance of things that I think (or thought!) I understood.  So I need to think more. Good!