
"Yes we can--with more technology!" A more hopeful narrative on climate?

Andy Revkin (the Haile Gebrselassie of environmental science journalism) has posted a guest-post on his blog by Peter B. Kelemen, the Arthur D. Storke Professor and vice chair in the Department of Earth and Environmental Sciences at Columbia University.

The essay combines two themes, basically.

One is the "greatest-thing-to-fear-is-fear-itself" claim: apocalyptic warnings are paralyzing and hence counterproductive; what's needed to motivate people is "hope."

That point isn't developed that much in the essay but is a familiar one in risk communication literature -- and is often part of the goldilocks dialectic that prescribes "use of emotionally compelling images" but "avoidance of excessive reliance on emotional images" (I've railed against goldilocks many times; it is a pseudoscience story-telling alternative to the real science of science communication).

But the other theme, which is the predominant focus and which strikes me as really engaging and intriguing, is that in fact "apocalypse" is exceedingly unlikely given the technological resourcefulness of human beings.

We should try to figure out which human behaviors generate adverse climate impacts and modify them with feasible technological alternatives that themselves avoid economic and like hardships, Kelemen argues. Plus, to the extent that we decide to continue engaging in behavior that has adverse impacts, we should anticipate that we will also figure out technological means of offsetting or dealing with those impacts.

Kelemen focuses on carbon capture, gas-fired power plants, etc.

The policy/science issues here are interesting and certainly bear discussion.

But what captures my interest, of course, is the "science communication" significance of the "yes we can--with more technology" theme.  Here are a couple of points about it:

1. This theme is indeed likely to be effective in promoting constructive engagement with the best evidence on climate change.  The reason isn't that it is "hopeful" per se but that it avoids antagonistic meanings that trigger reflexive closed-mindedness on the part of individuals--a large segment of the population, in fact-- who attach high cultural value to human beings' technological resourcefulness and resilience.

from Kahan, D.M., Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

CCP has done two studies on how making technological responses to climate change -- such as greater reliance on nuclear power and exploration of geoengineering -- more salient helps to neutralize dismissive engagement with and thus reduce polarization over climate science.

These studies, by the way, are not about how to make people believe particular propositions or support particular policies (I don't regard that as "science communication" at all, frankly).  The outcome measures involve how reflectively and open-mindedly subjects assess scientific evidence.

2. Nevertheless, the "yes we can--with technology" theme is also likely to generate a push-back effect. The fact is that "apocalyptic" messaging doesn't breed either skepticism or disengagement with that segment of the population that holds egalitarian and communitarian values. On the contrary, it engages and stimulates them, precisely because (as Douglas & Wildavsky argue) it is suffused with cultural meanings that fit the moral resentment of markets, commerce, and industry.

For exactly this reason, individuals with these cultural dispositions predictably experience a certain measure of dissonance when technological "fixes" for climate impacts are proposed: "yes we can--with technology" implies that the solution to the harms associated with too much commerce, too great a commitment to markets, too much industrialization etc is not "game over" but rather "more of the same."  

Geoengineering and the like are "liposuction" when what we need is to go on a "diet."

How do these dynamics play out?

Well, of course, the answer is, I'm not really sure. 

But my conjecture is that the positive contribution the "yes we can--with technology" narrative can make to promoting engagement with climate science will offset any push-back effect. Most egalitarian communitarians are already plenty engaged with the issue of climate change and are unlikely to "tune out" if technological responses other than carbon limits become an important part of the conversation. There will be many commentators who conspicuously rail against this narrative, but their reactions are not a good indicator of how the "egalitarian communitarian" rank and file are likely to react. Indeed, pushing back too hard, in a breathless, panicked way, will likely make such commentators appear weirdly zealous and thus undermine their credibility with the largely nonpartisan mass of citizens who are culturally disposed to take climate change seriously.

Or maybe not. As I said, this is a conjecture, a hypothesis.  The right way to figure the question out isn't to tell stories but rather to collect evidence that can help furnish an answer.


How many times do I have to explain?! "Facts" aren't enough, but that doesn't mean anyone is "lying"!

Receiving email like this is always extremely gratifying, of course, because it confirms for me that our "cultural cognition" research is indeed connecting with a large number of culturally diverse people. At the same time, it is frustrating to see how these readers fundamentally misunderstand our studies. I guess when you are so deeply caught up in a culturally contested question like this one, it is just really hard to get that screaming "the facts! the facts! Stop lying!!!" isn't going to promote constructive public engagement with the best available scientific evidence.



More US science literacy data -- from Pew (an organization that definitely knows how to study US science attitudes)

The Pew Research Center has a new report out on US science attitudes & science knowledge. I haven't read it yet but look forward to doing so--when I get through a crunch of 4,239 other things--because Pew does great surveys generally & super great public opinion work on the US public & science, a matter I've discussed before.

Maybe in the meantime one of the billions of generous, public-spirited, and insatiably curious (and opinionated!) readers of this blog will read carefully & report on contents for us.

Another thing I plan to get to, moreover, is the absurd "US is science illiterate/anti-science compared to 'rest of developed world'" meme. Patently false. Really interesting to try to figure out the source of the intense motivation to say and believe this...


"The qualified immunity bar is not set that low..."

Despite appearances, Scott v. Harris does not stand for the proposition that "reasonable" jurors are constrained to "see" any fleeing driver as a lethal risk against whom the police can necessarily apply deadly force.

Or so concludes a very reasonable jurist.



Still more Q & A on "cultural cognition scales" -- and on the theory behind & implications of them

I was starting to formulate a contribution to some of the great points made in discussion of the post on Q&A on "cultural cognition scales" & figured I might as well post the response. I encourage others to read the comments--you'll definitely learn more from them than from what I'm saying here, but maybe a marginal bit more still if you read my contribution in addition to those reflections. And almost certainly more still if others are moved by what I have to say here to refine and extend the arguments that were being presented there. Likely too it would make sense for the discussion to continue in comments to this post, if there is interest in continuing.

1. Whence predispositions, and the revision of them

How does this theory then explain the change from one group identity to another? You don't argue that such change doesn't occur, I see, since you say that there's "no reason why individuals can't shift & change w/ respect to them" -- but why isn't there such a reason, since you've given a good phenomenological description of the group pressures brought to bear on individuals to keep them in the herd, so to speak?

I don't really know how people form or why they change the sorts of affinity-group commitments that will result in sorts of dispositions we can measure w/ the cultural worldview scales.  My guess is that the answer is the same as one that one would give about why people form & change the sorts of orientations that are connected to religious identifications & ideological or political ones: social influences of various sorts, most importantly family & immediate community growing up; some possibility of realignment upon exposure at an impressionable period of life (more typically college age than adolescence or earlier) to new perspectives & new, compelling sources of affinity; thereafter usually nothing of interest, & lots of noise, but maybe some traumatic life experience etc.

Question I'd put back is: why is this important given what I am trying to do? I want to explain, predict, and formulate constructive prescriptions relating to conflict over science relevant to individual & collective decisionmaking. Knowing that the predispositions in question are important to that means it is important to be able to measure them.  But it doesn't mean, necessarily, that I need a good account of whence the predispositions, or of change -- so long as I can be confident (as I am) that they are relatively stable across the population. 

I suppose someone could say, "you should have a theory of the “whence & reformation of” predispositions b/c you might then be able to identify strategies for shaping them as a means of averting conflict/confusion over science" etc.  But I find that proposition (a) implausible (I think I know enough to know that regulating formation of such affinities is probably not genuinely feasible) & more importantly (to me) (b) a moral/political nonstarter: in a liberal society, it is not appropriate to make formation of people's values & self-defining affinities a conscious object of govt action.  On the contrary, it is one of the major aims of the "political science of democracy" (in Tocqueville's sense) to figure out how to make it possible for a community of diverse citizens to realize their common interest in knowing what's known without interfering with their diversity.

2. On change in how groups with particular predispositions engage or assess risks

And a related question would be: how do the group perceptions of risk themselves change over time? Ruling out mystical or telepathic bonds between group members, how does a change get started, who starts it, and how or where do those starters derive their perception of risk? (Consider, e.g., nuclear power.)

There is an account of this in "the theory." 

The "cultural cognition thesis" says that "culture is prior" -- cognitively speaking -- to "facts." That is, individuals can be expected to engage information in a manner that conforms their understanding of facts to conclusions the cultural meanings of which are affirming to their cultural identities.

So when a putative risk source -- say, climate change or guns or the HPV vaccine or nuclear power or cigarettes -- becomes infused with antagonistic meanings, “pouring more information” on the conflagration won’t staunch it; it will likely only enflame it.

Instead, one must do something that alters the meanings, so that positions are no longer seen as uniquely tied to cultural identities.  At that point, people will not face the same psychic pressure that can induce them (all the more so when they are disposed to engage in analytical, reflective engagement with information!) to reject scientific evidence on any position in a closed-minded fashion.

Will groups change their minds, then? Likely someone will; or really, likely there will be convergence among persons with diverse views, since like all members of a liberal market society they share faculties for reliably recognizing the best available scientific evidence, and at that point those faculties no longer will be distorted or disabled by the sort of noise or pollution created by antagonistic cultural meanings.

Examples? For ones in the world, consider discussions (of cigarettes, of abortion in France, of air pollution in US, etc.) in these papers:

The Cognitively Illiberal State, 60 Stan. L. Rev. 115 (2007)

Fear of Democracy: A Cultural Evaluation of Sunstein on Risk, 119 Harv. L. Rev. 1071 (2006) (with Paul Slovic, John Gastil & Donald Braman)

Cultural Cognition and Public Policy, 24 Yale L. & Pol'y Rev. 149 (2006) (with Donald Braman)

For an experimental “model” of this process, see our paper on geoengineering & the “two-channel” science communication strategy:

Geoengineering and the Science Communication Environment: a Cross-Cultural Experiment

And for more still on how knowing why there is cultural conflict can help to fashion strategies that dispel sources of conflict & enable convergence, see

Is cultural cognition a bummer? Part 1

3.  What about the “objective reality of risk” as opposed to the cultural cognition of it?

These questions themselves derive from a sense I have that the group-identity theory of risk perception is not wrong but incomplete, and the area in which it's incomplete is of major importance in addressing any theory of communication to do with risk -- that area is the objective reality of risk, as determined not by group adherence, and not by authority (even the authority of a science establishment), but rather by evidence and reason.

To start, of course the theory is “incomplete”; anyone who thinks that any theory ever is “complete” misunderstands science’s way of knowing! Also misunderstands something much more mundane—the limited ambition of what the ‘cultural cognition’ framework aspires to, which is a more edifying and empowering understanding of the “science communication problem,” which I think one can have w/o having much to say about many things of importance.

But the “theory” as it is does have a position, or at least an attitude, about the “reality” of the knowledge, confusion over which is the focus of the “science communication problem.” The essence of the attitude comes down to this:

a. Science’s way of knowing—which treats as entitled to assent (and even that only provisionally) conclusions based on valid inference from valid empirical observation—is the only valid way to know the sorts of things that admit of this form of inquiry. (The idea that things that don’t admit of this form of inquiry can’t be addressed in a meaningful way at all is an entirely different claim and certainly not anything that is necessary for treating science’s way of knowing as authoritative within the domain of the empirically observable; personally, I find the claim annoyingly scholastic, and the people who make it simply annoying.)

b. People, individually & collectively, will be better off if they rely on the best available scientific evidence to guide decisions that depend on empirical assumptions or premises relating to how the world (including the social world) works.

c. In the US & other liberal democratic market societies—the imperfect instantiations of the Liberal Republic of Science as a political regime—people of all cultural outlooks in fact accept that science’s way of knowing is authoritative in this sense & also very much want to be guided by it in the way just specified.

d. Those who accept the authority of science & who want to be guided by it will necessarily have to accept as known by science much, much more than they could ever hope to comprehend in a meaningful sense themselves. Thus their prospects for achieving their ends in these regards depend on their forming a reliable ability to recognize what’s known to science. The citizens of the Liberal Republic of Science have indeed developed this faculty (and it is very much a faculty that consists in the exercise of reason; it is an indispensable element of “rationality” to be able reliably to recognize who knows what about what).

e. The process of cultural cognition, far from being a bias, is part of the recognition faculty that diverse individuals use reliably to recognize what is known by science.

f. The “science communication problem” is a consequence of conditions that disable the reliable exercise of this faculty. Those conditions involve the entanglement of empirical propositions with antagonistic cultural meanings -- a state that interferes with the normal convergence of the culturally diverse citizens of the Liberal Republic of Science on what is known to science.


"Another country heard from": a German take on cultural cognition

Anyone care to translate? (I did study German in college, but I've retained only tourist-essential phrases such as "Halt! Sie sind verhaftet!" ("Stop! You're under arrest!"), "Hände hoch oder ich schieße!" ("Hands up or I'll shoot!"), etc.)

Also, is the idiom "another country heard from" still in common usage? Probably it's something people now say only when they mean to remark that someone who really is from another country is saying something -- & of course that's not really the occasion for it here (& I certainly don't mean to be expressing the attitude my grandmother did when she would say it about some intervention of mine into a dinner-table debate!).



Still more on the political sensitivity of model recalibration

Larry placed this in the comment thread for the last post on this particular topic (a few back), but I am "upgrading" it so that it doesn't get overlooked & so debate/discussion can continue if there's interest. In response to the last line of Larry's report -- a bet on the river, essentially -- I check-raise with an older post from Revkin!

Larry says:

Late, but still pertinent, here's Judith Curry's own scholarly rejoinder, covering Mann/Nuccitelli, the Economist, and a variety of other papers on both sides of the climate sensitivity issue -- her synthesis:

Mann and Nuccitelli state:

"When the collective information from all of these independent sources of information is combined, climate scientists indeed find evidence for a climate sensitivity that is very close to the canonical 3°C estimate. That estimate still remains the scientific consensus, and current generation climate models — which tend to cluster in their climate sensitivity values around this estimate — remain our best tools for projecting future climate change and its potential impacts."

The Economist article stated:

"If climate scientists were credit-rating agencies, climate sensitivity would be on negative watch. But it would not yet be downgraded."

The combination of the articles by Schlesinger, Lewis, and Masters (not mentioned in the Economist article) add substantial weight to the negative watch.

In support of estimates on the high end, we have the Fasullo and Trenberth paper, which in my mind is refuted by the combination of the Olson et al., Tung and Zhou, and Klocke et al. papers. If a climate model underrepresents the multidecadal modes of climate variability yet agrees well with observations during a period of warming, then it is to be inferred that the climate model sensitivity is too high.

That leaves Jim Hansen’s as yet unpublished paper among the recent research that provides support for sensitivity on the high end.

On the RealClimate thread, Gavin made the following statement:

"In the meantime, the ‘meta-uncertainty’ across the methods remains stubbornly high with support for both relatively low numbers around 2ºC and higher ones around 4ºC, so that is likely to remain the consensus range."

In weighing the new evidence, especially improvements in the methodology of sensitivity analysis, it is becoming increasingly difficult not to downgrade the estimates of climate sensitivity.

And finally, it is a major coup for the freelance/citizen climate scientist movement to see Nic Lewis and Troy Masters publish influential papers on this topic in leading journals.

Should indicate, if nothing else, that debate over this significant point continues, and that climate ideologues committed to heightening alarm in order to achieve political (and these days often financial) ends indeed have cause for concern.


Oh yeah? Well, consider what the sagacious science writer Andy Revkin says. I think he is seeing more clearly than the climate-policy activists who seem to view the debate featured in the Economist article as putting them in a bad spot. He concludes that if sensitivity is recalibrated to reflect over-estimation, the message is simply, "hey, there's more time to try to work this problem out ... phew!" So my sense of puzzlement continues.


Some Q & A on the "cultural cognition scales"

Below is part of an email exchange that I thought might be of interest to others:

Q. How do you conceptualize the attitudes being assessed by the cultural cognition scales? Do you think of them as inherent personality dispositions that color an individual's opinions across all sorts of issues? Do people hold different orientations depending on the issue? Also, are they changeable over time, and if so, what sources of influence do you think are most relevant?

My answers:

a. The items that the scales comprise are indicators of some latent disposition that generates individual differences in perceptions of risk and related facts. The theory I see "cultural cognition" as testing is that individuals form perceptions of risk & related facts in a manner that protects the status of and their standing in groups important to their well-being, materially & psychologically. This makes cultural cognition a species of "identity protective" cognition, a phenomenon one can observe w/ respect to all manner of group identities. If "identity protective cognition" is what creates variance in -- and conflict over -- risks and related facts that admit of scientific examination, then one would like to have some way to specify what the operative group identities are & have some observable measure of them (since the identities themselves *can't* be observed, are "latent" in that sense). The "group-grid" framework as we conceive of it specifies the nature of the groups & thus supplies the constructs that we try to measure w/ the scales. Presumably, too, there are lots of other potential indicators, including demographic characteristics, behaviors, other attitudes, etc. The scales we use are tractable & robust & so we are satisfied w/ them.

b.  The identities they measure are *dispositional*-- not "situational"; so, they reside in people & are constant across contexts. Relatively stable too across time, although there's no reason why individuals can't shift & change w/ respect to them -- it's aggregate patterns of perceptions among individuals that we are trying to measure, so the history of particular individuals isn't so important so long as it's not the case that all individuals are always in flux (in which case we'd not be explaining the phenomenon that we *see* in the world, which involves identifiable groups of people, not a kaleidoscopic blur of conflict among groups whose members are constantly changing, much less changing as those individuals move from place to place!).  

c. The dispositions necessarily exist independently of the risk or fact perceptions they are explaining--else they would not be explanations of them at all but rather part of what we are trying to explain. Compare a hypothetical approach that simply categorized people as "the low perception of risk x group," "the medium perception of risk x group," and "the high perception of risk x group"; that would not be useful, at least for what we want to do--viz., explain why people who have different group identities disagree about risk! Accordingly, there has to be some historically exogenous event that creates the connection (in our theory, something that invests particular risk or fact perceptions with meanings that link them to group identities). This means, too, that *not all* risk perceptions (or related beliefs) will vary in manners that correspond to these identities, since not all putative risk sources will have become invested with meanings that make positions on them markers of identity in this sense.

d. Also, the groups are in fact models! They are representations of things that are no doubt much more complicated & varied in reality. They help to make unobservable, complex things tractable so that it becomes possible to explain, predict, and form prescriptions (or at least possible to go about the task of trying to do so through the use of valid empirical means of investigation). Their utility will be specific, moreover, to the task of explaining, predicting & forming prescriptions relating to some specified set of risk perceptions. They might not have as much utility as some other "model" of what the motivating dispositions are if one is investigating something else, or something more particular. E.g., perceptions of synthetic biology risks, or dispositions relevant to how people might understand issues relating to climate adaptation in Fla, or "who watches science documentaries & why."

e. Beyond that, I find the task of characterizing the thing we are measuring -- are they "traits," "values," "dispositions," etc.? -- scholastic & aimless, although I know this question matters to some scholars in some perfectly interesting conversation. If someone explains to me why it matters for the conversation I am in to be able to characterize the dispositions in one of these ways rather than another, I will be motivated to figure out the answer (indeed, without a "why" I don't know "what" I am supposed to be figuring out).
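The measurement logic in (a) -- treating survey items as observable indicators of a latent disposition and combining them into a composite scale whose internal consistency can be checked -- can be sketched roughly as follows. The items, responses, and numbers here are entirely invented for illustration; actual CCP scales use their own items and validation procedures:

```python
import statistics

def cronbach_alpha(item_columns):
    """Internal-consistency reliability: do the items plausibly reflect
    a single latent disposition? Values approaching 1.0 suggest they do."""
    k = len(item_columns)
    item_var_sum = sum(statistics.pvariance(col) for col in item_columns)
    totals = [sum(resp) for resp in zip(*item_columns)]  # per-respondent sums
    return (k / (k - 1)) * (1 - item_var_sum / statistics.pvariance(totals))

def scale_score(responses):
    """Composite score for one respondent: mean of that person's item responses."""
    return sum(responses) / len(responses)

# Hypothetical 6-point Likert responses to three items meant to tap one
# disposition (rows = respondents, columns = items; data invented).
data = [
    [5, 6, 5],
    [2, 1, 2],
    [4, 5, 4],
    [1, 2, 1],
    [6, 5, 6],
]
item_columns = [list(col) for col in zip(*data)]   # transpose to per-item columns
alpha = cronbach_alpha(item_columns)               # high alpha -> items cohere
scores = [scale_score(row) for row in data]        # per-respondent scale scores
```

The point of the sketch is just that the scale score, not any single item, serves as the measure of the unobservable disposition, and the reliability statistic is what licenses treating the items as indicators of one underlying construct rather than several.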

Some relevant things:

Kahan, D. M. (2012). Cultural Cognition as a Conception of the Cultural Theory of Risk. In R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson (Eds.), Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (pp. 725-760): Springer London, Limited. 

Kahan, D. M. (2011). The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law. Harv. L. Rev., 126, 1-77, pp. 19-24.

Who *are* these guys? Cultural cognition profiling, part 1

Who *are* these guys? Cultural cognition profiling, part 2

Cultural vs. ideological cognition, part 3

Cultural vs. ideological cognition, part 2

Cultural vs. ideological cognition, part 1

Politically nonpartisan folks are culturally polarized on climate change

What generalizes & what doesn't? Cross-cultural cultural cognition part 1

"Tragedy of the Science-Communication Commons" (lecture summary, slides)


What should science communicators communicate about sea level rise?

The answer is how utterly normal it is for all sorts of people in every walk of life to be concerned about it and to be engaged in the project of identifying and implementing sensible policies to protect themselves and their communities from adverse impacts relating to it.

That was the msg I tried to communicate (constrained by the disability I necessarily endure, and impeded by the misunderstandings I inevitably and comically provoke, on account of my being someone who only studies rather than does science communication) in my presentation at a great conference on sea level rise at University of California, Santa Barbara. Slides here.

There were lots of great talks by scientists & science communicators. Indeed, on my panel was the amazing science documentary producer Paula Apsell, who gave a great talk on how NOVA has covered climate change science over time.

As for my talk & my “communicate normality” msg, let me explain how I set this point up.

I told the audience that I wanted to address “communicating sea level rise” as an instance of the “science communication problem” (SCP). SCP refers to the failure of widely available, valid scientific evidence to quiet political conflict over issues of risk and other related facts to which that evidence directly speaks. Climate change is a conspicuous instance of SCP but isn’t alone: there’s nuclear power, e.g., the HPV vaccine, GM foods in Europe (maybe but hopefully not someday in US), gun control, etc. Making sense of and trying to overcome SCP is the aim of the “science of science communication,” which uses empirical methods to try to understand the processes by which what’s known to science is made known to those whose decisions it can helpfully inform.

The science of science communication, I stated, suggests that the source of SCP isn’t a deficit in public rationality. That’s the usual explanation for it, of course. But using the data from CCP’s Nature Climate Change study to illustrate, I explained that empirical study doesn’t support the proposition that political conflict over climate change or other societal risks is due to deficiencies in the public’s comprehension of science or on its over-reliance on heuristic-driven forms of information processing.

What empirical study suggests is the (or at least one hugely important) source of SCP is identity-protective cognition, the species of motivated reasoning that involves forming perceptions of fact that express and reinforce one’s connection to important affinity groups. The study of cultural cognition identifies the psychological mechanisms through which this process operates among groups of people who share the “cultural worldviews” described by Mary Douglas’s group-grid scheme. I reviewed studies—including Goebbert et al.’s one on culturally polarized recollections of recent weather—to illustrate this point, and explained too that this effect, far from being dissipated, is magnified by higher levels of science literacy and numeracy.

Basically, culturally diverse people react to evidence of climate change in much the way that fans of opposing sports teams do to disputed officiating calls.

Except they don’t, or don’t necessarily, when they are engaged in deliberations on adaptation. I noted (as I have previously in this blog) the large number of states that are either divided on or hostile about claims of human-caused global warming that are nonetheless hotbeds of collective activity focused on counteracting the adverse impacts of climate change, including sea level rise.

Coastal states like Florida, Louisiana, Virginia, and the Carolinas, as well as arid western ones like Arizona, Nevada, California, and New Mexico have all had “climate problems” for as long as human beings have been living in them. Dealing with such problems in resourceful, resilient, and stunningly successful ways is what the residents of those states do all the time.

As a result, citizens who engage national “climate change” policy as members of opposing cultural groups naturally envision themselves as members of the same team when it comes to local adaptation.  

I focused primarily on Florida, because that is the state with whose adaptation activities I have become most familiar, as a result of my participation in ongoing field studies.

Consistent with Florida's Community Planning Act enacted in 2011, state municipal planners—in consultation with local property owners, agricultural producers, the tourism industry, and other local stakeholders—have devised a set of viable options, based on the best available scientific evidence, for offsetting the challenges that continuing sea level rise poses to the state.

All they are doing, though, is what they always have done and are expected to do by their constituents.  It’s the job of municipal planners in that state—one that they carry out with an awe-inspiring degree of expertise, including scientific acumen of the highest caliber—to make what’s known to science known to ordinary Floridians, so that Floridians can use that knowledge to enjoy a way of life that has always required them to act wisely in the face of significant environmental challenges.

All the same, the success of these municipal officials is threatened by an incipient science communication problem of tremendous potential difficulty.

Effective collective action inevitably involves identifying and enforcing some set of reciprocal obligations in order to maximize the opportunity for dynamic, thriving, self-sustaining, and mutually enriching forms of interaction among free individuals. Some individuals will naturally oppose whatever particular obligations are agreed to, either because they expect to realize personal benefits from perpetuation of conditions inimical to maximizing the opportunities for profitable interactions among free individuals, or because they prefer some other regime of reciprocal obligation intended to do the same. This is normal, too, in democratic politics within liberal market societies.

But in states like Florida, those actors will have recourse to a potent—indeed, toxic—rhetorical weapon: the antagonistic meanings that pervade the national debate over climate change. If they don’t like any of the particular options that fit the best available evidence on sea level rise, or don’t like the particular ones that they suspect a majority of their fellow citizens might, they can be expected to try to stigmatize the municipal and various private groups engaged in adaptation planning by falsely characterizing them and their ideas in terms that bind them to only one of the partisan cultural styles that is now (sadly and pointlessly, as a result of misadventure, strategic behavior, and ineptitude) associated with engagement with climate change science in national politics.  Doing so, moreover, will predictably reproduce in local adaptation decisionmaking the motivated reasoning pathology—the “us-them” dynamic in which people react to scientific evidence like Red Sox and Yankees fans disputing an umpire’s called third strike—that now enfeebles national deliberations.

This is happening in Florida. I shared with the participants in the conference select bits and pieces of this spectacle, including the insidious “astroturf” strategy that involves transporting large groups of very not normal Floridians from one public meeting to another to voice their opposition to adaptation planning, which they describe as part of a "United Nations" sponsored "global warming agenda," the secret aim of which is to impose a "One-World, global, Socialist" order run by the "so-called Intelligentsia," etc. As divorced as their weird charges are from the reality of what’s going on, they have managed to harness enough of the culturally divisive energy associated with climate change to splinter municipal partnerships in some parts of the state, and stall stakeholder proceedings in others.

Let me be clear here, too. There are plenty of serious, intelligent, public-spirited people arguing over the strength and implications of evidence on climate change, not to mention what responses make sense in light of that evidence. You won’t find them within 1,000 intellectual or moral miles of these groups.

Preventing the contamination of the science communication environment by those trying to pollute it with cultural division--that's the science communication problem that is of greatest danger to those engaged in promoting constructive democratic engagement with sea level rise. 

The Florida planners are actually really really good at communicating the content of the science.  They also don’t really need help communicating the stakes, either; there’s no need to flood Florida with images of hurricane-flattened houses,  decimated harbor fronts, and water-submerged automobiles, since everyone has seen all of that first hand!

What the success of the planners’ science communication will depend on, though, is their ability to make sure that ordinary people in Florida aren’t prevented from seeing what the ongoing adaptation stakeholder proceedings truly are: a continuation of the same ordinary historical project of making Florida a captivating, beautiful place to live and experience, and hence a site for profitable human flourishing, notwithstanding the adversity that its climate poses—has always posed, and has always been negotiated successfully through creative and cooperative forms of collective action by Floridians of all sorts.

They need to see, in other words, that responding to the challenge of sea level rise is indeed perfectly normal.

They need to see—and hence be reassured by the sight of—their local representatives, their neighbors, their business leaders, their farmers, and even their utility companies and insurers all working together.  Not because they all agree about what’s to be done—why in the world would they?! reasoning, free, self-governing people will always have a plurality of values, and interests, and expectations, and hence a plurality of opinions about what should be done! reconciling and balancing those is what democracy is all about!—but because they accept the premise that it is in fact necessary to do things about the myriad hazards that rising sea levels pose (and always have; everyone knows the sea level has been rising in Florida and elsewhere for as long as anyone has lived there) if one wants to live and live well in Florida.

What they most need to see, then, is not more wrecked property or more time-series graphs, but more examples of people like them—in all of their diversity—working together to figure out how to avert harms they are all perfectly familiar with.  There is a need, moreover, to ramp up the signal of the utter banality of what’s going on there because in fact there is a sad but not surprising risk otherwise that the noise of cultural polarization that has defeated reason (among citizens of all cultural styles, on climate change and myriad other contested issues) will disrupt and demean their common project to live as they always have.

I don’t do science communication, but I do study it. And while part of studying it scientifically means always treating what one knows as provisional and as subject to revision in light of new evidence, what I believe the best evidence from science communication tells us is that the normality of dealing with sea level and other climate impacts is the most important thing that needs to be communicated to members of the public in order to assure that they engage constructively with the best available evidence on climate science.

So go to Florida. Go to Virginia, to North and South Carolina, to Louisiana. Go to Arizona. Go to Colorado, to Nevada, New Mexico, and California. Go to New York, Connecticut and New Jersey.

And bring your cameras and your pens (keyboards!) so you can tell the story—the true story—in vivid, compelling terms (I don’t do science communication!) of ordinary people doing something completely ordinary and at the same time completely astonishing and awe-inspiring.

I’ll come too. I'll keep my mouth shut (seriously!) and try to help you collect & interpret the evidence that you should be collecting to help you make the most successful use of your craft skills as communicators in carrying out this enormously important mission.

A scholarly rejoinder to the Economist article 

Dana Nuccitelli & Michael Mann have posted a response to the Economist story on climate scientists' assessment of the performance of surface-temperature models. I found it very interesting and educational -- and also heartening.

The response is critical. N&M think the studies the Economist article reports on, and the article's own characterization of the state of the scientific debate, are wrong.

But from start to end, N&M engage the Economist article's sources -- studies by climate scientists engaged in assessing the performance of forecasting models over the last decade -- in a scholarly way focused on facts and evidence.  In fact, one of the articles that N&M rely on -- a paper in Journal of Geophysical Research suggesting that temperatures may have been moderated by greater deep-ocean absorption of heat -- was featured prominently in the Economist article, which also reported on the theory that volcanic eruptions might have contributed, another point N&M make.

This is all in the nature of classic "conjecture & refutation"--the signature form of intellectual exchange in science, in which knowledge is advanced by spirited interrogation of alternative evidence-grounded inferences. It's a testament to the skill of the Economist author as a science journalist (whether or not the 2500-word story "got it right" in every detail or matter of emphasis) that in the course of describing such an exchange among scientists he or she ended up creating a modest example of the same, and thus a testament, too, to the skill & public spirit of N&M that they responded as they did, enabling curious and reflective citizens to form an understanding of a complex scientific issue.

Estimating  the impact of the Economist article on the "science communication environment"  is open to a degree of uncertainty even larger than that surrounding the impact of CO2 emissions on global surface temperatures. 

But my own "model" (one that is constantly & w/o embarrassment being calibrated on the basis of any discrepancy between prediction & observation) forecasts a better, less toxic reaction when thoughtful critics respond with earnest, empirics-grounded counterpoints (as here) rather than with charged, culturally evocative denunciations.

The former approach genuinely enlightens the small fraction of the population actually trying to understand the issues (who of course will w/ curiosity and an open mind read & consider responses offered in the same spirit). The latter doesn't; it only adds to the already abundant stock of antagonistic cultural resonances that polarize the remainder of the population, which is tuned in only to the "us-them" signal being transmitted by  the climate change debate.

Amplifying that signal is the one clear mistake for any communicator who wants to promote constructive engagement with climate science. 


Is ideologically motivated reasoning rational? And do only conservatives engage in it?

These were questions that I posed in a workshop I gave last Thurs. at Duke University in the political science department. I’ll give my (provisional, as always!) answers after "briefly" outlining the presentation (as I remember it at least). Slides here.

1. What is ideologically motivated reasoning?

It’s useful to start with a simple Bayesian model of information processing—not b/c it is necessarily either descriptively accurate (I’m sure it isn’t!) or normatively desirable (actually, I don’t get why it wouldn’t be, but seriously, I don’t want to get into that!) but b/c it supplies a heuristic benchmark in relation to which we can identify what is distinctive about any asserted cognitive dynamic.

Consider “confirmation bias” (CB).  In a simple Bayesian model, when an individual is exposed to new information or evidence relating to some factual proposition (say, that global warming is occurring; or that allowing concealed possession of firearms decreases violent crime), she revises (“updates”) her prior estimation of the probability of that proposition in proportion to how much more consistent the new information is with that proposition being true than with it being false (the “likelihood ratio” of the new evidence). Her reasoning displays CB when, instead of revising her prior estimate based on the weight of the evidence so understood, she selectively searches out and assigns weight to the evidence based on its consistency with her prior estimation. (In that case, the “likelihood ratio” is endogenous to her “priors.”)  If she does this, she’ll get stuck on an inaccurate estimation of the probability of the proposition despite being exposed to evidence that the estimate is wrong.
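To make the benchmark concrete, here is a minimal Python sketch (my own illustration; the numbers and the particular bias rule are invented, not drawn from any study) of an unbiased odds-form Bayesian update next to a "confirmation biased" one in which the likelihood ratio the agent assigns is endogenous to her prior:

```python
def bayes_update(prior, likelihood_ratio):
    """Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Unbiased Bayesian: the new evidence is twice as likely if the proposition
# is true as if it is false, so she assigns LR = 2 whatever her prior.
p = bayes_update(0.5, 2.0)   # 0.5 rises to about 0.67

# Confirmation bias (a hypothetical rule, for illustration): the LR she
# assigns drifts toward 1 (no weight) as the evidence becomes less congenial
# to her existing estimate -- the likelihood ratio is endogenous to the prior.
def biased_lr(true_lr, prior):
    congeniality = prior if true_lr > 1 else 1 - prior
    return 1 + (true_lr - 1) * congeniality

q = bayes_update(0.1, biased_lr(2.0, 0.1))   # barely moves: she assigns LR of only ~1.1
```

The same LR = 2 evidence moves the unbiased agent substantially but leaves the biased agent essentially where she started, which is the "stuck on an inaccurate estimate" result described above.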

Motivated reasoning (MR) (at least as I prefer to think of it) refers to a tendency to engage information in a manner that promotes some goal or interest extrinsic to forming accurate beliefs. Thus, one searches out and selectively credits evidence based on its congeniality to that extrinsic goal or interest. Relative to the Bayesian model, then, we can see that goal or interest—rather than criteria related to accuracy of belief—as determining the “weight” (or likelihood ratio) to be assigned to new evidence related to some proposition.

MR might often look like CB. Individuals displaying MR will tend to form beliefs congenial to the extrinsic or motivating goal in question, and thereafter selectively seek out and credit information consistent with that goal. Because the motivating goal is determining both their priors and their information processing, it will appear as if they are assigning weight to information based on its consistency with their priors. But the relationship is in fact spurious (priors and likelihood ratio are not genuinely endogenous to one another).

“Ideologically motivated reasoning” (IMR), then, is simply MR in which some ideological disposition (say, “conservativism” or “liberalism”) supplies the motivating goal or interest extrinsic to formation of accurate beliefs.  Relative to a Bayesian model, then, individuals will search out information and selectively credit it conditional on its congeniality to their ideological dispositions. They will appear to be engaged in “confirmation bias” in favor of their ideological commitments. They will be divided on various factual propositions—because their motivating dispositions, their ideologies, are heterogeneous. And they will resist updating beliefs despite the availability of accurate information that ought to result in the convergence of their respective beliefs. 

In other words, they will be persistently polarized on the status of policy relevant facts.
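That persistent polarization can be simulated in a few lines (again, my own toy illustration with stipulated numbers; the `motivated_lr` rule is a hypothetical stand-in for MR): two agents with opposing dispositions process an identical, evenly balanced stream of evidence, but each weights an item of evidence by its congeniality to the agent's disposition rather than by its accuracy.

```python
import random

def update(prior, lr):
    """Odds-form Bayesian update."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

def motivated_lr(true_lr, disposition):
    # MR: the weight assigned to evidence is set by its congeniality to the
    # extrinsic goal (here, an ideological disposition), not by accuracy.
    congenial = (true_lr > 1) == (disposition == "pro")
    return true_lr if congenial else 1.0  # uncongenial evidence gets no weight

random.seed(1)
pro, con = 0.5, 0.5                  # identical starting beliefs
for _ in range(40):
    lr = random.choice([2.0, 0.5])   # evenly balanced evidence stream
    pro = update(pro, motivated_lr(lr, "pro"))
    con = update(con, motivated_lr(lr, "con"))

# Despite seeing exactly the same evidence, the agents end up persistently
# polarized: "pro" near certainty the proposition is true, "con" near zero.
```

No amount of additional balanced evidence closes the gap, because each agent keeps assigning all the weight to the congenial half of the stream.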

 2. What is the cultural cognition of risk?

The cultural cognition of risk (CCR) is a form of motivated reasoning. It posits that individuals hold diverse predispositions with respect to risks and like facts.  Those predispositions—which can be characterized with reference to Mary Douglas’s “group grid” framework—motivate them to seek out and selectively credit information consistently with those predispositions. Thus, despite the availability of compelling scientific information, they end up in a state of persistent cultural polarization with respect to those facts.

The study of CCR is dedicated primarily to identifying the discrete psychological mechanisms through which this form of MR operates. These include “culturally biased information search and assimilation”; “the cultural credibility heuristic”; “cultural identity affirmation”; and the “cultural availability heuristic.”

These mechanisms do not result in confirmation bias per se.  CCR, as a species of MR, describes the influences that connect information processing to an extrinsic motivating goal or interest. Often—maybe usually even—those influences will conform information processing to inferences consistent with a person’s priors, which will also reflect his or her motivating cultural predisposition. But CCR makes it possible to understand how individuals might be motivated to assess information about risk in a directionally biased fashion even when they have no meaningful priors (b/c, say, the risk in question is a novel one, like nanotechnology) or in a manner contrary to their priors (b/c, say, the information, while contrary to an existing risk perception, is presented in an identity-affirming manner).

Recent research has focused on whether CCR is a form of heuristic-driven or “system 1” reasoning. The CCP Nature Climate Change study suggests that the answer is no. The measures of science comprehension in that study are associated with use of systematic or analytic “system 2” information processing. And the study found that as science comprehension increases, so does cultural polarization.

This conclusion supports what I call the “expressive rationality thesis.” The expressive rationality thesis holds that CCR is rational at the individual level.

CCR is not necessarily conducive to formation of accurate beliefs under conditions in which opposing cultural groups are polarized.  But the “cost,” in effect, of persisting in a factually inaccurate view is zero: because the impact of an ordinary individual’s behavior—as, say, consumer or voter or participant in public debate—is too small to make a difference on climate change policy (let’s say), no action she takes on the basis of a mistaken belief about the facts will increase the risk that she or anyone else she cares about faces.

The cost of forming a culturally deviant view on such a matter, however, is likely to be “high.” When positions on risk and like facts become akin to badges of membership in and loyalty to important affinity groups, forming the wrong ones can drive a wedge between individuals and others on whom they depend for support—material, emotional, and otherwise.

It therefore makes sense—is rational—for them to attend to information on issues like that (issues needn’t be that way, and shouldn’t be allowed to become that way—but that’s another matter) in a manner that reliably aligns their beliefs with the ones that dominate in their group. One doesn’t have to have a science Ph.D. to do this. But if one does have a higher capacity to make sense of technical information, one can be expected to use that capacity to assure an even tighter fit between beliefs and identity—hence the magnification of cultural polarization as science comprehension grows.
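The cost asymmetry driving the expressive rationality thesis can be put in back-of-the-envelope terms (every figure below is a stipulated assumption, purely for illustration):

```python
# Expected personal cost of holding a factually inaccurate belief: it bites
# only in the (vanishingly unlikely) event one's own behavior changes policy.
p_pivotal = 1e-7          # chance one person's vote/behavior is decisive (assumed)
policy_stakes = 1e6       # personal stake in the policy outcome, arbitrary units (assumed)
cost_inaccurate = p_pivotal * policy_stakes    # expected cost: 0.1

# Expected personal cost of a culturally deviant belief: group standing is
# at stake every day, among people one actually depends on.
p_sanction = 0.5          # chance peers notice and penalize deviance (assumed)
social_stakes = 1e3       # value of standing in one's affinity group (assumed)
cost_deviant = p_sanction * social_stakes      # expected cost: 500

# Aligning beliefs with one's group is individually rational here,
# even though it is collectively disastrous when everyone does it.
assert cost_deviant > cost_inaccurate
```

Under any remotely similar numbers the conclusion is the same: the expressively rational strategy dominates the accuracy-seeking one at the individual level.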

3. Ideology, motivated reasoning & cognitive reflection

The “Ideology, motivated reasoning & cognitive reflection” (IMRCR) experiment picks up at this point in the development of the project to understand CCR.  The Nature Climate Change study was observational (correlational), and while it identified patterns of risk perception more consistent with CCR than alternative theories (ones focusing on popular deficiencies in system 2 reasoning, in particular), the results were still compatible with dynamics other than “expressive rationality” as I’ve described it.  The IMRCR study uses experimental means to corroborate the “expressive rationality” interpretation of the Nature Climate Change study data.

It also does something else.  As we have been charting the mechanisms of CCR, other researchers and commentators have advanced an alternative IMR (ideologically motivated reasoning) position, which I’ve labeled the “asymmetry thesis.” The asymmetry thesis attributes polarization over climate change and other risks and facts that admit of scientific investigation to the distinctive vulnerability of conservatives to IMR. Some (like Chris Mooney) believe the CCR results are consistent with the asymmetry thesis; I think they are not, but in any case they really haven’t been aimed at testing it.

The IMRCR study was designed to address that issue more directly, too. Indeed, I used ideology and party affiliation—political orientation—rather than cultural predisposition as the hypothesized motivating influence for information processing in the experiment to make the results as commensurable as possible with those featured in studies relied upon by proponents of the asymmetry thesis. In fact, I see political orientation variables as simply alternative indicators of the same motivating disposition that cultural predispositions measure; I think the latter are better, but for present purposes political orientation was sufficient (I can reproduce the data with cultural outlooks and get stronger results, in fact).

In the study, I find that political orientations exert a symmetrical impact on information processing. That is, “liberals” are as disposed as “conservatives” to assign weight to evidence based on the congeniality of crediting that evidence to their ideological predispositions (in other words, to assign a likelihood ratio to it that fits their goal to “express” their group commitments).

In addition, for both groups the effect is magnified by higher “cognitive reflection” scores.  This result is consistent with—and furnishes experimental corroboration of—the “expressive rationality” interpretation of the Nature Climate Change study.

4. So—“is ideologically motivated reasoning rational? And do only conservatives engage in it?”

The answer to the second question—only conservatives?—is I think “no!”

I didn’t expect a different answer before I did the IMRCR experiment. First, I regarded the designs and measures used in studies that were thought to support the “asymmetry thesis” as ill-suited for testing it. Second, to me the theory behind the “asymmetry thesis” didn’t make sense; the motivation that I think it is most plausible to see as generating polarization of the sort measured by CCR is protection of one’s membership and status within an important affinity group—and the sorts of groups to which that dynamic applies are not confined to political ones (people feel them, and react accordingly, with respect to their connections to sports teams and schools). So why expect only conservatives to experience IMR??

But the way to resolve such questions is to design valid studies, make observations, and draw valid inferences.  I tried to do that with the IMRCR study, and came away believing more strongly that IMR is symmetric across the ideological spectrum and CCR symmetric across cultural spectra.  Show me more evidence and (I hope) I will assign it the weight (likelihood ratio) it is due and revise my position accordingly.

The answer to the first question—is IMR rational?—is, “It depends!”  The result of the IMRCR study supported the “expressive rationality” hypothesis, which, in my mind, makes even less supportable than it was before the hypothesis that IMR is a consequence of heuristic-driven, bias-prone “system 1” reasoning.

But to say that IMR is “expressively rational” and therefore “rational” tout court is unsatisfying to me. For one thing, as emphasized in the Nature Climate Change paper and the IMRCR paper, even if it is individually rational for individuals to form their perceptions of a disputed risk issue in a way that protects their connection to their cultural or ideological affinity groups, it can be collectively disastrous for them to do that simultaneously, because in that circumstance democratically accountable actors will be less likely to converge on evidence relevant to the common interests of culturally diverse groups. We can say in this regard that what is expressively rational at the individual level is collectively irrational.  This makes CCR part of a collective action problem that demands an appropriate collective action solution.

In addition, I don’t think it is possible, in fact, to specify whether any form of cognition is “rational” without an account of whether it conduces to or frustrates the ends of the person who displays it.  A person might find MR that projects his or her identity as a sports fan, e.g., to be very welcome—and yet regard MR (or even the prospect that it might be influencing her) as totally unacceptable if she were a referee.  I think people would generally be disturbed if they understood that as jurors in a case like the one featured in They Saw a Protest they were perceiving facts relevant to other citizens’ free speech rights in a way that reflected IMR.

Maybe some people would find it unsatisfying to learn that CCR or IMR is influencing how they are forming their perceptions of facts on issues like climate change or gun control, too? I bet they would be very distressed to discover that their assessments of risk were being influenced by CCR if they were parents deciding whether the HPV vaccine is good or bad for the health of their daughter.

Chris Johnston's book The Ambivalent Partisan is very relevant in this respect. Chris and his co-authors purport to find a class of citizens who don’t display the form of IMR (or CCR, I presume) that I believe I am measuring in the IMRCR paper.  They see them as ideally virtuous citizens. It is hard to disagree.  And hence it is confusing for me to know what to think about the significance of things that I think (or thought!) I understood.  So I need to think more. Good!



More on the political sensitivity of communicating the significance of climate model recalibration

I posted something a few days ago about the political sensitivity of communicating information about scientists' critical assessments of the performance of climate models.

In fact, such assessments are unremarkable. The development of forecasting models for complex dynamics (as Nate Silver explains in his wonderful book The Signal and the Noise) is an iterative process in which modelers fully expect predictions to be off, but also to become progressively better as competing specifications of the relevant parameters are identified and calibrated in response to observation.

In this sort of process, at least, models are not understood to be a test of the validity of the scientific theories or evidence on the basic mechanisms involved. They are a tool for trying to improve the ability to predict with greater precision how such mechanisms will interact, a process the complexity of which cannot be reduced to a tractable, determinate algorithm or formula. The use of modeling (which involves statistical techniques for simulating "stochastic" processes) can generate tremendous advances in knowledge in such circumstances, as Silver documents.

But such advances take time -- or, in any case, repeated trials, in which model predictions are made, results observed, and models recalibrated. In this recursive process, erroneous predictions are not failures; they are a fully expected and welcome form of information that enables modelers to pinpoint the respects in which the models can be improved.
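The recursive logic can be sketched with a toy example (all values are invented for illustration and bear no resemblance to an actual climate model): a model whose parameter starts out wrong is recalibrated after every miss, and the misses are precisely what drive the improvement.

```python
# Toy recalibration loop: predict, observe, use the error to adjust, repeat.
true_sensitivity = 3.0    # the unknown quantity the model tries to capture (assumed)
estimate = 1.0            # initial (deliberately wrong) model parameter
forcing = [1.0, 1.2, 0.8, 1.1, 0.9, 1.05]   # successive observed inputs (made up)

errors = []
for x in forcing:
    prediction = estimate * x
    observation = true_sensitivity * x   # nature's answer
    error = observation - prediction
    errors.append(abs(error))
    estimate += 0.5 * error / x          # recalibrate in proportion to the miss

# Each trial's miss shrinks as the model is recalibrated: the sequence of
# absolute errors is strictly decreasing, and the estimate converges on 3.0.
```

Each "wrong" prediction here is not a refutation of the underlying relationship; it is the signal that moves the parameter toward its true value, which is the sense in which erroneous predictions are expected and even welcome.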

Of course, if improvement fails to occur despite repeated trials and recalibrations, that's a serious problem. It might mean the underlying theory about the relevant mechanisms is wrong, although that's not the only possibility. There are phenomena that in their nature cannot be "forecast" even when their basic mechanisms are understood; earthquakes are probably an example--our best understanding of why they happen suggests we'll likely never be able to say when.

Usually, none of this causes anyone any concern.  The manifest errors and persistent imprecision of earlier generations of models didn't stop meteorologists from developing what are now weather forecasting simulations that are a thing of wonder (but that are still being improved!). Our inability to say when earthquakes will occur doesn't cause us to conclude that they must be caused by sodomy rather than shifting tectonic plates after all--or stop us from using the scientific knowledge we do have about earthquakes to improve our ability to protect ourselves from the risks they pose.

Nevertheless, on a culturally polarized issue like climate change, this iterative, progressive aspect of modeling does create an opportunity to generate public confusion.  If one's goal is to furnish members of the public with reason to wonder whether the mechanisms of climate change are adequately understood--and to discount the need to engage in constructive action to minimize the risks that climate change poses or the extent of the adverse impacts it could have for human beings--then one can obscure the difference between the sort of experimental "prediction" used to identify mechanisms and the sort of modeling "prediction" used to improve forecasting of the complex ("stochastic") interplay of such mechanisms.  Then, when the latter sort of models generate their inevitable--indeed, expected and even welcome--failures, one can pounce and say, "See? Even the scientists themselves are now having to admit they were wrong!"

Silver highlights this point in the chapter of The Signal and the Noise devoted to climate forecasting, and discusses (with sympathy as well as discernment) the difficult spot that this puts climate scientists and climate-risk communicators in.

As I discussed in my post, this dilemma was posed by an article in the Economist last week that reported on the state of scientific engagement with the performance of climate model predictions on the relationship between CO2 emissions and surface temperatures.  Such engagement takes the form of debate -- or as Popper elegantly characterized it "conjecture and refutation," in which alternative explanations are competitively interrogated with observation in a way calculated to help isolate the more-likely-true from the vast sea of the plausible.

In fact, there was nothing in the article that suggested that the scientists engaged in this form of inquiry disagreed about the fundamentals of climate science. Or that any one of them dissents from the propositions that

(1) climate change (including, principally, global warming) is and has been occurring for decades as a result of human CO2 emissions;
(2) such change has already and will (irreversibly) continue to have various adverse impacts for many human populations; and
(3) the impacts will only be bigger and more adverse if CO2 emissions continue.

(These propositions, btw, don't come close to dictating what policy responses -- one or another form of "mitigation" via carbon taxes or the like; "adaptation" measures; or even geoengineering-- makes sense for any nation or collection of them.)

Maybe (1)-(3) are wrong?

I happen to  think they are correct, a conclusion arrived at through my exercise of the faculties one uses to recognize what is known to science. My recognition faculties, of course, are imperfect, as are everyone else's, and, like everyone else's are less reliable in a polluted science communication environment such as the one that engulfs the climate change issue.  

But the point is, whether those propositions are right or wrong isn't something that the debate reported on in the Economist article bears on one way or the other. The scientists involved in that debate agree on that. Any scientist or anyone else who disagrees about these propositions has to stake his or her  case on things other than the performance of the latest generation of models in predicting surface temperatures.

Well, what to add to all of this?

Surveying responses to the Economist article, one will observe some skeptics (but in fact not all; I can easily find internet comments from skeptics who recognize that the debate described in the Economist article doesn't go to fundamentals) are nevertheless trying to cite the debate it describes as evidence that climate change does not pose risks that merit a significant policy response.  They are trying to foster confusion, in other words, about the nature of the models that the scientists are recalibrating. Unsurprising.

But it is also clear that some climate-change policy advocates are responding by crediting that same misunderstanding of the models. These responses are denigrating the Economist article (which did not get the point I'm making about models wrong!) as a deliberate effort to mislead, and are defending the predictions of the previous generation of models as if the credibility of the case for adopting policies in response to climate change really does turn on whether the predictions of those models "are too!" correct.

I guess that's not surprising either, but it is depressing.

The truth is, most citizens on both sides of the climate debate are not forming their sense of whether and how our democracy should respond to climate change by following scientific debates over the precision of climate models.

What ordinary citizens do base their view of the climate change issue on is how others who share basic moral & cultural outlooks seem to regard it. The reason there is so much confusion about climate change in our society is that what ordinary citizens see when they take note of the climate change issue is those with whom they share an affinity locked in a bitter, recriminatory exchange with those who don't.

But all the same, it is still a huge mistake for climate-change risk communicators to address these perfectly intelligent and perfectly ordinary citizens with a version of the scientific process that evades, equivocates about, or outright denies the fact that climate scientists are engaged in model recalibration.

In an open society--the only sort in which science can actually take place!--this form of normal science is plain to see.  Indignantly denouncing those who accurately report that it's taking place, as if they themselves were liars, embroils those who are trying to communicate risk in a huge, disturbing spectacle rife with all the information about "us vs. them" that makes communicating science here so difficult.

I admit that I believe it is wrong, in itself, to offer any argument in democratic debate that denies the premise that the person whom one is trying to persuade or inform merits respect as a self-governing individual who is entitled to use his or her reason to figure out what the facts are and what to do in response.

But I think it is not merely motivated reasoning on my part to think that the best strategy for countering those who would distort how science works is to offer a reasoned critique of those doing the distorting--not to engage in countervailing distortion.

One reason I believe that is that I have in fact seen evidence of it being done effectively.

Check out Zeke Hausfather's very nice discussion of the issue at the Yale Forum on Climate Change and the Media. It was written before the publication of the Economist article, but my attention was drawn to it by Skeptical Science, which discerningly noted that its thoughtful discussion of the recent debate furnishes a much more constructive response to the Economist news report than an attempt to deny that scientists are doing what scientists do.


The equivalence of the "science communication" and "judicial neutrality communication" problems

Gave a talk for the Yale Law School Executive Committee today.  

Basic claim was the psychological & professional equivalence of the "science communication problem" and the "judicial neutrality communication problem."

1. Just as doing valid science doesn't communicate the validity of it to citizens whose collective decisions need to be informed by science, so doing neutral decisionmaking doesn't convey the neutrality of it to citizens whose rights, interests, or status are being affected by law.

2. As a result, cultural polarization can be expected to occur about the neutrality of constitutional decisions even when those decisions have been resolved "neutrally" with reference to the craft norms of law, just as cultural polarization can be expected over the validity of science even when scientists are doing valid science with reference to the craft norms of science.

3. The "science of science communication" is about using science to improve the communication of valid science in democracy.  Its success depends on the integration of that science into the training of scientists and science-informed policymakers.

4. Law similarly needs a "science of neutrality communication." And its success will depend on law scholars committing themselves to producing it, law schools instructing students in it, and the profession, including the judiciary, becoming active participants in shaping its direction and use.

Slides here.


"A sensitive matter" indeed! The science communication risks of climate model recalibration

An article from The Economist reports on ferment within the climate-modeling community over how to account for the failure of rising global temperatures to keep pace with increasing carbon emissions.

"Over the past 15 years air temperatures at the Earth’s surface have been flat while greenhouse-gas emissions have continued to soar," the article states.

The world added roughly 100 billion tonnes of carbon to the atmosphere between 2000 and 2010. That is about a quarter of all the CO₂ put there by humanity since 1750. And yet, as James Hansen, the head of NASA’s Goddard Institute for Space Studies, observes, “the five-year mean global temperature has been flat for a decade.”

 "[S]urface temperatures since 2005 are already at the low end of the range of projections derived from 20 climate models," the article continues. "If they remain flat, they will fall outside the models’ range within a few years."

Naturally, "the mismatch between rising greenhouse-gas emissions and not-rising temperatures is among the biggest puzzles in climate science just now." Professional discourse among climate scientists is abuzz with competing conjectures: from the existing models' uniform underestimation of historical temperature variability, to the greater heat-absorptive capacity of the oceans, to the still poorly understood heat-reflective properties of clouds.

There are lots of things one could say. But here are three.

First, this kind of collective reassessment is not a sign that there's any sort of defect or flaw in mainstream climate science.  What the article is describing is not a crisis of any sort; it is "normal" -- as in perfectly consistent with the sort of backing and filling that characterizes the "normal science" mission of identifying, interrogating, and resolving anomalies on terms that conserve the prevailing best understanding of how the world works.

It is perfectly unremarkable in particular for the project of statistical modeling of dynamic processes to encounter forecasting shortfalls of this kind and magnitude. Model building is inherently iterative. Modelers greet incorrect predictions not as "disconfirming" evidence of their basic theory -- as might, say, an experimenter who is testing competing conjectures about how the world works -- but as informative feedback episodes that enable progressively more accurate discernment and calibration of the parameters of an equation (in effect) that can be used to make the implications of that theory discernible and controllable.

Or in any case, this is how things work when things are working. One expects, tolerates, and indeed exploits erroneous forecasts so long as one is making progress and doesn't "give up" unless and until the persistence or nature of such errors furnishes a basis for questioning the fundamental tenets of the model-- the basic picture or theory of reality that it presupposes--at which point the business of modeling must be largely suspended pending discernment by empirical means of a more satisfactory account of the basic mechanisms of the process to be modeled.

Which gets me to my second point: the sorts of difficulties that climate modelers are encountering aren't anywhere close to the kinds of difficulties that would warrant the conclusion that their underlying sense of how the climate works is unsound. Indeed, nothing in the discrepancy between modeling forecasts and the temperature record of the last decade suggests reason to revise the assessment that the impact of human carbon emissions poses a serious danger to human wellbeing that it is essential to address--a point the Economist article (an outstanding piece of science journalism, in my estimation) is perfectly clear about.  

Indeed, if anything, one might view the apparent need to revise downward slightly the range of likely global temperature increases associated with past and anticipated CO₂ emissions as reason to believe that there might be more profit to be had in investing in mitigation, which recent work, based on existing models about the expected warming impact of previous and likely emissions, suggested would be unlikely to avert catastrophic impacts in any case.

Yet here is the third & most troubling point: communicating the significance of these unremarkable shortcomings will pose a tremendous political challenge.

The Economist article doesn't address this particular issue. But Nate Silver insightfully does in his book The Signal and the Noise.

Like much about Bayesian inference, the idea that being wrong can be as informative as (and often even more informative than) being right doesn't jibe well with normal intuitions.

But for climate change in particular, this difficulty is aggravated by a communication strategy that renders the admission of erroneous prediction extremely perilous.  Climate change poses urgent risks. But as Silver points out, the urgent attention it warrants has been purchased in significant part with the currency of emphatic denunciation and ridicule of those who have questioned the existing generation of forecasting models.

No doubt this element of the climate risk communication strategy was adopted in part out of perceived political necessity. By no means all who have raised questions about those models have done so in bad faith; indeed, because it is only through the interplay of competing conjectures that anything is ever learned in science, those who doubt successful theories make a necessary contribution to their vindication.

But still, many of those actors--mainly nonscientists--who have been most conspicuous in questioning the past generation of models clearly were intent on sowing confusion and division.  They were acting in bad faith. To discredit them, climate risk communicators have repeatedly pointed out that the models these actors were attacking were supported by scientific consensus.

Yet now these critics stand to reap a huge political, rhetorical windfall as climate scientists appropriately take stock of the shortcomings in the last generation of models.

Again, such reappraisal doesn't mean that the theory underlying those models was incorrect or that there isn't an urgent need to act to protect ourselves from climate change risks. Modeling errors are inevitable, foreseeable, and indeed informative.

But because the case for crediting that theory and taking account of those risks was staked on the inappropriateness of challenging the accuracy of scientific consensus, climate advocates will find themselves on the defensive.

What to do?

The answer is not simple, of course.

But at least part of it is to avoid unjustified simplification.  

Members of the public, it's true, aren't scientists; that's what makes science communication so challenging.

But they aren't stupid, either. That's what makes resorting to "simplified" claims that aren't scientifically defensible or realistic a bad strategy for science communication. 


Question: Who is more disposed to motivated reasoning on climate change -- hierarchical individualists or egalitarian communitarians? Answer: Both!


So it started innocently with a query from a colleague about whether the principal result in CCP’s Nature Climate Change study—which found that increased science comprehension (science literacy & numeracy) magnifies cultural polarization—might be in some way attributable to the “white male effect,” which refers to the tendency of white males to be less concerned with environmental risks than are women and nonwhites.

That seemed unlikely to me, seeing how the “white male effect” is itself very strongly linked to the extreme risk skepticism of white hierarchical individualist males (on certain risks at least).  But I thought the simple thing was just to plot the effect of increasing science comprehension on climate change risk perceptions separately for hierarchical and egalitarian white males, hierarchical and egalitarian females, and hierarchical and egalitarian nonwhites (individualism is uncorrelated with gender and race so I left it out just to make the task simpler).

That exercise generated one expected result and one slightly unexpected one. The expected result was that the effect of science comprehension in magnifying cultural polarization was clearly shown not to be confined to white males.

The less expected one was what looked like a slightly larger impact of science comprehension on hierarchs than egalitarians.

Actually, I’d noticed this before but never really thought about its significance, since it wasn’t relevant to the competing study hypotheses (viz., that science comprehension would reduce cultural polarization or that it would magnify it).

But it sort of fit the “asymmetry thesis” – the idea, which I associate mainly with Chris Mooney, that motivated reasoning is disproportionately concentrated in more “conservative” types (hierarchical individualists are more conservative than egalitarian communitarians—but the differences aren’t as big as you might think). 

The pattern only sort of fits because in fact the “asymmetry thesis” isn’t about whether higher-level information processing (of the sort for which science comprehension is a proxy) generates greater bias in conservatives than liberals but only about whether conservatives are more ideologically biased, period.  Indeed, the usual story for the asymmetry thesis (John Jost’s, e.g.) is that conservatives are supposedly disposed to only heuristic rather than systematic information processing and thus to refrain from open-mindedly considering contrary evidence.

But here it seemed like maybe the data could be seen as suggesting that more reflective conservative respondents were more likely to display the fitting of risk perception to values—the signature of motivated reasoning.  That would be a novel variation of the asymmetry thesis but still a version of it.

In fact, I don’t think the asymmetry thesis is right.  I don’t think it makes sense, actually; the mechanisms for culturally or ideologically motivated reasoning involve group affinities generally, and extend to all manner of cognition (even to brute sense impressions), so why expect only “conservatives” to display it in connection with scientific data on risk issues like climate change or the HPV vaccine or gun control or nuclear power etc?

Indeed, I’ve now done one study—an experimental one—that was specifically geared to testing the asymmetry thesis, and it generated findings inconsistent with it: It showed that both “conservatives” and “liberals” are prone to motivated reasoning, and (pace Jost) the effect gets bigger as individuals become more disposed to use conscious, effortful information processing.

But seeing what looked like evidence potentially supportive of the asymmetry thesis, and having been attentive to avail myself of every opportunity to alert others when I saw what looked like contrary evidence, I thought it was very appropriate that I advise my (billions of) readers of what looked like a potential instance of asymmetry in my data, and also that I investigate this more closely (things I promised I would do at the end of my last blog entry).

So I reanalyzed the Nature Climate Change data in a way that I am convinced is the appropriate way to test for “asymmetry.”

Again, the “asymmetry thesis” asserts, in effect, that motivated reasoning (of which cultural cognition is one subclass) is disproportionately concentrated in more right-leaning individuals. As I’ve explained before, that expectation implies that a nonlinear model—one in which the manifestation of motivated reasoning is posited to be uneven across ideology—ought to fit the data better than a linear one, in which the impact of motivated reasoning is posited to be uniform across ideology.

In fact, neither a linear model nor any analytically tractable nonlinear model can plausibly be understood to be a “true” representation of the dynamics involved.  But the goal of fitting a model to the data, in this context, isn’t to figure out the “true” impact of the mechanisms involved; it is to test competing conjectures about what those mechanisms might be.

The competing hypotheses are that cultural cognition (or any like form of motivated reasoning) is symmetric with respect to cultural predispositions, on the one hand, and that it is asymmetrically concentrated in hierarch individualists, on the other.  If the former hypothesis is correct, a linear model—while almost certainly not “true”—ought to fit better than a nonlinear one; likewise, while any particular nonlinear model we impose on the data will almost certainly not be “true,” a reasonable approximation of a distribution that the asymmetry thesis expects ought to fit better if the asymmetry thesis is correct.

So apply these two models, evaluate the relative fit of the two, and adjust our assessments of the relative likelihood of the competing hypotheses accordingly.  Simple!

Actually, the first step is to try to see if we can simply see the posited patterns in the data. We’ll want to fit statistical models to the data to test whether we aren’t “seeing things”—to corroborate that apparent effects are “really” there and are unlikely to be a consequence of chance.  But we don’t want to engage in statistical “mysticism” of the sort by which effects that are invisible are magically made to appear through the application of complex statistical manipulations (this is a form of witchcraft masquerading as science; sometime in the future I will dedicate a blog post to denouncing it in terms so emphatic that it will raise questions about my sanity—or I should say additional ones).

So consider this:


It’s a simple scatter plot of subjects whose cultural outlooks are on average both “egalitarian” and “communitarian” (green), on the one hand, and ones whose outlooks are on average “hierarchical” and “individualistic” (black), on the other. On top of that, I’ve plotted LOWESS or “locally weighted scatter plot smoothing” lines. This technique, in effect, “fits” regression lines to tiny subsegments of the data rather than to all of it at once.

It can’t be relied on to reveal trustworthy relationships in the data because it is a recipe for “overfitting,” i.e., treating “noisy” observations as if they were informative ones.  But it is a very nice device for enabling us to see what the data look like.  If the impact of motivated reasoning is asymmetric—if it increases as subjects become more hierarchical and individualistic—we ought to be able to see something like that in the raw data, which the LOWESS lines are now affording us an even clearer view of.
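For readers curious about what this sort of smoothing actually involves, here is a minimal LOWESS-style smoother. This is a sketch only: the data are simulated, and the tricube-weighted local linear fit is the standard textbook formulation, not necessarily the exact routine used to produce the plot above.

```python
import numpy as np

def lowess(x, y, frac=0.5):
    """LOWESS-style smoother: weighted linear regression over local neighborhoods."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(3, int(frac * n))                  # neighborhood size
    smoothed = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                # k nearest neighbors
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        sw = np.sqrt(w)                        # weighted least squares via sqrt weights
        A = np.column_stack([np.ones(k), x[idx]])     # local linear design matrix
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
        smoothed[i] = beta[0] + beta[1] * x[i]
    return smoothed

# Simulated curvilinear data: the smoother should trace the U-shape.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-2, 2, 200))
y = x ** 2 + rng.normal(0, 0.5, 200)
y_hat = lowess(x, y, frac=0.3)
```

Because each smoothed value comes from a regression fit only to its nearest neighbors, the line can bend with the data instead of imposing a single global slope; that flexibility is exactly why it reveals local patterns and also why, as noted above, it is prone to overfitting noise.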

I see two intriguing things.  One is evidence that hierarch individualists are indeed more influenced—more inclined to form identity-congruent risk perceptions—as science comprehension increases: the difference between “low” science comprehension HIs and “high” ones is about 4 units on the 11-point risk-perception scale; the difference between “low” and “high” ECs is less than 2.

However, the impact of science comprehension is bigger for ECs than HIs at the highest levels of science comprehension. The curve slopes down but flattens out for HIs near the far right. For ECs, the effect of increased science comprehension is pretty negligible until one gets to the far right—the highest score on science comprehension—at which point it suddenly juts up.

If we can believe our eyes here, we have a sort of mixed verdict.  Overall, HIs are more likely to form identity-congruent risk perceptions as they become more science comprehending; but ECs with the very highest level of science comprehension are considerably more likely to exhibit this feature of motivated reasoning than the most science comprehending HIs.

To see if we should believe what the “raw data” could be seen to be telling us, I fit two regression models to the data. One assumed the impact of science comprehension on the tendency to form identity-congruent risk perceptions was linear, or uniform across the range of the hierarchy and individualism worldview dimensions.  The other assumed that it was “curvilinear”: essentially, I added terms to the model so that it reflected a quadratic regression equation. Comparing the “fit” of these two models, I expected, would allow me to determine which of the two relationships assumed by the models—linear (symmetric) or curvilinear (asymmetric)—was more likely true.

Result: The more complicated polynomial regression did fit better—had a slightly higher R²—than the linear one. The difference was only “marginally” significant (p = 0.06). But there’s no reason to stack the deck in favor of the hypothesis that the linear model fits better; if I started off with the assumption that the two hypotheses were equally likely, I’d actually be much more likely to be making a mistake to infer that the polynomial model doesn’t fit better than I would be to infer that it does when p = 0.06!
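The model-comparison logic can be sketched as follows, with simulated data standing in for the study's measures (the variable names and effect sizes here are invented for illustration): fit linear and quadratic polynomials by least squares and compare their R² values.

```python
import numpy as np

def r_squared(x, y, degree):
    """R^2 of a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Simulated outcome with a genuine curvilinear component.
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 500)                          # e.g., a worldview score
y = 0.5 * x + 0.3 * x ** 2 + rng.normal(0, 1, 500)   # risk-perception stand-in

r2_linear = r_squared(x, y, 1)       # linear ("symmetric") model
r2_quadratic = r_squared(x, y, 2)    # quadratic ("asymmetric") model
# On curvilinear data the quadratic model accounts for more variance.
```

A quadratic model will never fit worse than the linear one it nests, so in practice one also asks, as the post does, whether the improvement is large enough to be unlikely to arise by chance.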

In addition, the model corroborates the nuanced story of the LOWESS-enhanced picture of the raw data.  It’s hard to tell this just from scrutinizing the coefficients of the regression output, so I’ve graphed the fitted values of the model (the predicted risk perceptions for study subjects) and fit “smoothed” lines to those values (the gray zones around the lines correspond to the 0.95 confidence interval).  You can see that the decrease in risk perception for HIs is more or less uniform as science comprehension increases, whereas for ECs it is flat but starts to bow upward toward the extreme upper bound of science comprehension. In other words, HIs show more “motivated reasoning” conditional on science comprehension overall; but ECs who comprehend science the “best” are most likely to display this effect.

What to make of this? 

Well, not that much in my view!  As I said, it is a “split” verdict: the “asymmetric” relationship between science comprehension and the disposition to form identity-congruent risk perceptions suggests that each side is engaged in “more” motivated reasoning as science comprehension increases in one sense or another.

In addition, one’s interpretation of the output is hard to disentangle from one’s view about what the “truth of the matter” is on climate change.  If one assumes that climate change risks perceptions are lower than they should be at the sample mean, then HIs are unambiguously engaged in motivated reasoning conditional on science comprehension, whereas ECs are simply adjusting toward a more “correct” view at the upper range.  In contrast, if one believed that climate change risks are generally overstated, then one could see the results as corroborating that HIs are forming a “more accurate” view as they become more science comprehending, whereas ECs do not and in fact become more likely to latch onto the overstated view as they become most science comprehending.

I think I’m not inclined to revise upward or downward my assessment of the (low) likelihood of the asymmetry thesis on the basis of these data. I’m inclined to say we should continue investigating, and focus on designs (experimental ones, in particular) that are more specifically geared to generating clear evidence one way or the other.

But maybe others will have still other things to say.



Is the culturally polarizing effect of science literacy on climate change risk perceptions related to the "white male effect"? Does the answer tell us anything about the "asymmetry thesis"?!

In a study of science comprehension and climate change risks, CCP researchers found that cultural polarization, rather than shrinking, actually grows as people become more science literate & numerate.

A colleague asked me:

Is it possible that some of the relationships with science literacy/numeracy in the Nature Climate Change paper might come from correlations with individual differences known to correlate with risk perception (e.g., gender, ethnicity)?

I came up with a complicated analytical answer to explain why I really doubted this could be but then I realized of course that the simple way to answer the question is just to "look" at the data:

Nothing fancy: just divided the sample into hierarchical & egalitarian (median split on worldview score) "white males," "women," and "nonwhites" & then plotted the relationship between climate change risk perception (y-axis) & score on the "science literacy/numeracy" or "science comprehension" scale (x-axis). I left out individualism, first, to make the graphing task simpler, and second, b/c only hierarchy correlates w/ gender (r = 0.10) and being white (r = 0.25); putting individualism in would increase the effects a bit -- both the cultural divide & slopes of the curves -- but not really change the "picture" (or have any impact on the question of whether race & gender rather than culture explain the polarizing impact of science comprehension).
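The "nothing fancy" analysis can be sketched like this, again with simulated data (the group means and slopes are invented for illustration): median-split a worldview score, then estimate the slope of risk perception on science comprehension within each group.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
worldview = rng.normal(0, 1, n)      # hierarchy score (higher = more hierarchical)
sci_comp = rng.normal(0, 1, n)       # science literacy/numeracy scale

hier = worldview > np.median(worldview)   # median split on worldview score
# Simulated polarization: comprehension pushes the groups' risk perceptions
# apart (slopes of -1.0 and +0.5 are hypothetical, chosen for illustration).
risk = np.where(hier, 5 - 1.0 * sci_comp, 6 + 0.5 * sci_comp)
risk = risk + rng.normal(0, 1, n)

slope_hier = np.polyfit(sci_comp[hier], risk[hier], 1)[0]
slope_egal = np.polyfit(sci_comp[~hier], risk[~hier], 1)[0]
# Hierarchs' perceived risk falls with comprehension while egalitarians' rises,
# so the gap between the groups widens as science comprehension increases.
```

Opposite-signed within-group slopes are the graphical signature of "polarization increasing with science comprehension" described in the scatterplots below.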

Some of the things these scatterplots show:

1. The impact of science comprehension in magnifying polarization in risk perception is not restricted to white males (the answer to the question posed). The same pattern--polarization increasing as science comprehension increases -- is present in all three plots.

2. The "white male effect" -- the observed tendency of white males to perceive risk to be lower -- is actually a "white male hierarch" effect.  If you look at the blue lines, you can see they are more or less in the same place on the y-axis; the red line is "lower" for white males, in contrast. This is consistent with prior CCP research that suggests that the "effect" is driven by culturally motivated reasoning: white male hierarch individualists have a cultural stake in perceiving environmental and technological risks to be low; egalitarian communitarians -- among whom there are no meaningful gender or race differences--have a stake in viewing such risks to be high.

3. The increased-polarization effect looks like it is mainly concentrated in "hierarchs."  That is, the blue lines are flatter -- not sloped upward as much as the red lines are sloped downward.

This is a pattern that would bring -- if not joy to his heart -- a measure of corroboration to Chris Mooney's "Republican Brain" hypothesis (RBH), since it is consistent with the impact of culturally motivated reasoning being higher in more "conservative" subjects (hierarchs are more conservative, but the partisan differences between egalitarian communitarians and hierarch individualists aren't huge!).  Actually, I think CM sees the paper as consistent with his position already, but this look at the data is distinctive, since it suggests that the magnification of cultural polarization is concentrated in the more conservative cultural subjects.

As I've said a billion times (although not recently), I am unpersuaded by RBH.  I have done a study that was designed specifically to test it (this study wasn't), and it generated evidence that suggests ideologically motivated reasoning--in addition to being magnified by greater cognitive reflection-- is politically symmetric, or uniform across the ideological spectrum.

But the point is, no study ever proves a proposition. It merely furnishes evidence that gives us reason to view one hypothesis or another as more likely to be true or less than we otherwise would have had (or at least it does if the study is valid).  So one should simply give evidence the weight that one judges it to be due (based on the nature of the design and strength of the effect), and update the relative probabilities one assigns to the competing hypotheses.

If this pattern is evidence more consistent with RBH, then fine. I will count it as such.  And aggregate it with the evidence I have that goes the other way.  I'd still at that point tend to believe RBH is false, but I would be less convinced that it is false than before.

Now: should I view this evidence as more consistent with RBH?  I said that it looks like that.  But in fact, before treating it as such, I'd do another statistical test: I'd fit a polynomial model to the data to confirm both that the effect of culturally motivated reasoning increases as subjects become more hierarchical and that the increase is large enough to warrant concluding that what we're looking at isn't the sort of lumpy pattern that could easily occur by chance.

I performed that sort of test in the study I did on cognitive reflection and ideologically motivated reasoning and concluded that there was no meaningful "asymmetry" in the motivated reasoning effect that study observed. But it was also the case that the raw data didn't even look asymmetrical in that study.

So ... I will perform that test now on these data.  I don't know what it will reveal.  But I make two promises: (a) to tell you what the result is; and (b) to adjust my priors on RBH accordingly.

Stay tuned!




Who *are* these guys? Cultural cognition profiling, part 2

This is my answer to Jen Briselli, who asked me to supply sketches of a typical "hierarchical individualist," a typical "hierarchical communitarian," a typical "egalitarian individualist" and a typical "egalitarian communitarian." I started with a big long proviso about how ordinary people with these identities really are, and how diverse, too, even in relation to others who share their outlooks.  But I agreed with her on the value--and in some sense the indispensability--of heuristic representations of them. Still, one more essential proviso is necessary.  These people are make-believe.  Moreover, the sketches are the product of introspection. My impressions are not wholly uninformed, of course; I think I know "who these guys are," in part from reading richer histories and ethnographies that seem pertinent, in part from trying to find such people and listening to them (e.g., as they interact with each other in focus groups conducted by Don Braman), in part from collecting evidence about how people who I think are like this think, and in part from simply observing and reflecting on everyday life. But I am not an ethnographer, or a journalist; these are not real individuals or even composites of identifiable people. They are not themselves evidence of anything. Rather, they are models, of a sort that I might summon to mind to stimulate and structure my own conjectures about why things are as they are and what sorts of evidence I might look for that would help figure out if I'm right.
Now I am turning them into a device: something I am showing you to help you form a more vivid picture of what I see; to enable you, as a result, to form more confident judgments about whether the evidence that my collaborators and I collect do really furnish reason to believe that cultural cognition explains certain puzzling things; and finally to entice or provoke you into looking for even more evidence that would give us either more reason or less to believe the same, and thus help us both to get closer to the truth.


Steve, 62 years old, lives in Marietta, Ga. Trained in engineering at Georgia Tech, he founded and now operates a successful laboratory supply business, whose customers include local pharmaceutical and biotech companies, as well as hospitals and universities.   He has been married for thirty-eight years to Donna, a fulltime homemaker, and has two grown children, Gary and Tammy.  He is a Presbyterian, but unlike Donna he attends church only irregularly. He characterizes himself as “Independent who leans Republican,” and a “moderate” who, if pushed, is “slightly conservative"; nevertheless, except for a brief time when he thought Newt Gingrich might win the Republican nomination, the 2012 election filled him with a mix of frustration and resignation.  He hunts, and owns a handgun. He served as a scout leader when Gary was growing up. Now he sits on the board of directors for the Georgia State Museum of Science and Industry, to which he has made large donations in the past (Steve proposed and helped design an exhibit on “nanotechnology,” which proved extremely popular).  He owns a prized collection of memorabilia relating to the “Wizard of Menlo Park,” Thomas Edison.

Sharon, 44, lives in Stillwater, Oklahoma. She is married to Stephen, a Baptist minister, and has three children. She is pro-life and believes God created the earth 6000 years ago. She once served as the foreperson on a jury that acquitted an Oklahoma State athlete in a controversial "date rape" case. She teaches 5th grade at a public elementary school, a job that she feels very passionate about. Her year-long "science unit" in 2011-12 revolved (as it were) around the transit of Venus, and culminated in the viewing of the event. The experience thrilled (nearly) all the students, but profoundly moved one in particular, the ten-year-old daughter of a close friend and member of Sharon's church congregation; two decades from now this girl will be a leading astrophysicist on the faculty of the University of Chicago.

Lisa, 36 years old, lives in New York City. She's a lawyer, who was just promoted to partner at her firm (she anticipated this would make her more excited than it did). She has been married for nine years to Nathan, an investment banker. The couple has a five-year-old son, who has been cared for since infancy by an au pair, and for whom they secured a highly coveted spot in the kindergarten class of an exclusive private school. Lisa happens to be Jewish; she doesn't attend synagogue but she does celebrate Jewish holidays with family and close friends. She is pro-choice, and as a law student spent most of her final year working on a clinic lawsuit to enjoin Operation Rescue from "blockading" abortion clinics. An issue that has agitated her recently is the pressure that is directed at women to breastfeed their children; when the New York City health department instituted restrictions on access to formula in hospital maternity wards, she composed an angry letter to the editor of the New York Times, denouncing "counterfeit feminists, who are all for free choice until a woman makes one they don't like.... Having a baby doesn't make a woman an infant!" She and Nathan do not have very much leisure time. But they do take delight in watching the television show MythBusters, each episode of which they record on their DVR for shared future consumption.

Linda, 42, is a social worker in Philadelphia; Bernie, 58, is a professor of political science at the University of Vermont. Linda raised her now 20-year-old daughter (a junior at Temple) as a single parent. She is active in her church (the historic African Episcopal Church of St. Thomas). Bernie has never been married, has no children, and is an atheist. Both describe themselves as “Independents” who “lean Democrat” and as “slightly liberal,” and while they see eye-to-eye on many matters  (such as the low level of danger posed by the fleeing driver in the police-chase video featured in Scott v. Harris), they sharply disagree about certain issues (including legalization of marijuana, which Linda adamantly opposes and Bernie strongly supports).  They both watch Nova, and make annual contributions to their local PBS affiliates.  

Do you have intuitions about these people's beliefs on climate change? The risks and benefits of the HPV vaccine? Whether permitting ordinary citizens to carry concealed handguns in public increases crime—or instead deters it? Is any of them worried about the health effects of consuming GM foods?

None of them knows what synthetic biology is.  Is it possible to predict how they might feel about it once they learn something about it?  Might they all turn out to agree someday that it is very useful (possibly even fascinating!) and count it as one of the things that makes them answer “a lot” (as they all will) when asked, “How much do scientists contribute to the well-being of society?”


Who *are* these guys? Cultural cognition profiling, part 1

Okay, this is the first of what I anticipate will be a series of posts (somewhere between 2 and 376 in number). In them, I want to take up the question of who the people are who are being described by the “cultural worldview” framework that informs cultural cognition. 

The specific occasion for wanting to do this is a query from Jen Briselli, which I reproduce below. In it, you'll see, she asks me to set forth brief sketches of the "typical" egalitarian communitarian, hierarchical individualist, hierarchical communitarian, and egalitarian individualist. This is a reasonable request. In my immediate reply, I say that any such exercise has to be understood as a simplification or a heuristic; the people who have any one of these identities will be multifaceted and complex, and also diverse despite having important shared understandings.

I think that’s a reasonable point to make, too – yet I then beg off on (or at least defer) actually responding to her request. That wasn’t so reasonable of me! 

So I will do as she asks.  

But I thought it would be useful, as well as interesting, to first ask others who are familiar with the "cultural cognition" framework as I and others are elaborating it, how they might answer this question. So that's what I'm doing in this post, which reproduces the exchange between Jen and me.

Below the exchange, I also set forth the sort of exposition of the "cultural worldview" framework, which we adapt from Mary Douglas, that typically appears in our papers. I think this is basically the right way to set things out in the context of this species of writing. But the admitted abstractness of it is what motivates Jen's reasonable request for something more concrete, more accessible, more engaging.

I’ll give my own answer to Jen’s question in the next post or two. I promise!

Jen Briselli:

I have a quick question/exercise for you: 

I am working through the process of creating what are essentially 'personas' (though I'm keeping them abstract) for each of the four quadrants of the group/grid cultural cognition map. While I feel pretty comfortable characterizing some of the high-level concerns and values of each worldview, I would certainly be silly to think my nine months' immersion in your research comes anywhere near the richness of your own mental model for this framework. So, to supplement my own work, I'd love to know how you would describe each worldview, in the most basic and simplified way, to someone unfamiliar with cultural cognition. (Well, maybe not totally unfamiliar, but in the process of learning it). That is, how do you joke about these quadrants? How do you describe them at cocktail parties?

For example, I found the fake book titles that you made up for the [HPV vaccine risk] study to be a great example of personifying a prototypical example of each worldview. And I am interested in walking that line between prototype and stereotype, because that's where good design happens--we can oversimplify and stereotype to get at something's core, then step back up a few levels to find the sweet spot for understanding.

So, if you'd be so kind, what few words or phrases would you use to complete the following phrases, for each of the four worldviews? 

1) I feel most threatened by: 

2) What I value most: 

and optional but would be fun to see your answers:

3) the typical 'bumper sticker' or phrase that embodies each worldview: (for example- egalitarian communitarians might pick something like  "one for all and all for one!" I'm curious if you have any equivalents for the others rattling around in your brain- serious or absurd, or anywhere in between.)


What you are asking about here is complicated; I'm anxious to avoid a simple response that might not be clearly understood as very, very simplified.

The truth is that I don't think people of these types are likely to use bumper stickers to announce their allegiances. Some would, certainly; but they are very extreme, unusual people! If not extreme in their values, extreme in how much they value expressing them. The goal is to understand very ordinary people -- & I hope that is who we are succeeding in modeling. 

I feel reasonably confident that I can find those people by getting them to respond to the sorts of items we use in our worldview scales, or by doing a study that ties their perceptions of source credibility to the cues used in the HPV study.

But I think if I said, "Watch for someone who gets in your face & says 'you should encourage your young boys to be more sensitive and less rough and tough' "-- that would paint an exaggerated picture. 

I think we do have reliable ways to pick out people who have the sorts of dispositions I'm talking about. But we live in a society where people interact w/ all sorts of others & actually are mindful not to force people different from them to engage in debates over issues like this. 

From Kahan, D.M., Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012), pp. 727-28:

The cultural theory of risk asserts that individuals should be expected to form perceptions of risk that reflect and reinforce their commitment to one or another "cultural way of life" (Thompson, Ellis & Wildavsky 1990). The theory uses a scheme that characterizes cultural ways of life and supporting worldviews along two cross-cutting dimensions (Figure 1), which Douglas calls "group" and "grid" (Douglas, 1970; 1982). A "weak" group way of life inclines people toward an individualistic worldview, highly "competitive" in nature, in which people are expected to "fend for themselves" without collective assistance or interference (Rayner, 1992, p. 87). In a "strong" group way of life, in contrast, people "interact frequently in a wide range of activities" in which they "depend on one another" to achieve their joint ends. This mode of social organization "promotes values of solidarity rather than the competitiveness of weak group" (ibid., p. 87).

A  “high” grid way of life organizes itself through pervasive and stratified “role differentiation” (Gross & Rayner 1985, p. 6).  Goods and offices, duties and entitlements, are all “distributed on the basis of explicit public social classifications such as sex, color, . . . a bureaucratic office, descent in a senior clan or lineage, or point of progression through an age-grade system” (ibid, p. 6). It thus conduces to a “hierarchic” worldview that disposes people to “devote a great deal of attention to maintaining” the rank-based “constraints” that underwrite “their own position and interests” (Rayner 1990, p. 87).

Finally, a low grid way of life consists of an “egalitarian state of affairs in which no one is prevented from participation in any social role because he or she is the wrong sex, or is too old, or does not have the right family connections” (Rayner 1990, p. 87). It is supported by a correspondingly egalitarian worldview that emphatically denies that goods and offices, duties and entitlements, should be distributed on the basis of such rankings.

The cultural theory of risk makes two basic claims about the relationship between cultural ways of life so defined and risk perceptions. The first is that recognition of certain societal risks tends to cohere better with one or another way of life. One way of life prospers if people come to recognize that an activity symbolically or instrumentally aligned with a rival way of life is causing societal harm, in which case the activity becomes vulnerable to restriction, and those who benefit from that activity become the targets of authority-diminishing forms of blame (Douglas, 1966; 1992).

The second claim of cultural theory is that individuals gravitate toward perceptions of risk that advance the way of life they adhere to. “[M]oral concern guides not just response to the risk but the basic faculty of [risk] perception” (Douglas, 1985, p. 60). Each way of life and associated worldview “has its own typical risk portfolio,” which “shuts out perception of some dangers and highlights others,” in manners that selectively concentrate censure on activities that subvert its norms and deflect it away from activities integral to sustaining them (Douglas & Wildavsky 1982, pp. 8, 85). Because ways of life dispose their adherents selectively to perceive risks in this fashion, disputes about risk, Douglas and Wildavsky argue, are in essence parts of an “ongoing debate about the ideal society” (ibid, p. 36).

The paradigmatic case, for Douglas and Wildavsky, is environmental risk perception. Persons disposed toward the individualistic worldview supportive of a weak group way of life should, on this account, be disposed to react very dismissively to claims of environmental and technological risk because they recognize (how or why exactly is a matter to consider presently) that the crediting of those claims would lead to restrictions on commerce and industry, forms of behavior they like. The same orientation toward environmental risk should be expected for individuals who adhere to the hierarchical worldview: in concerns with environmental risks, they will apprehend an implicit indictment of the competence and authority of societal elites. Individuals who tend toward the egalitarian and solidaristic worldview characteristic of strong group and low grid, in contrast, dislike commerce and industry, which they see as sources of unjust social disparities, and as symbols of noxious self-seeking. They therefore find it congenial to credit claims that those activities are harmful—a conclusion that does indeed support censure of those who engage in them and restriction of their signature forms of behavior (Wildavsky & Dake 1990; Thompson, Ellis, & Wildavsky 1990).



Effective graphic presentation of climate-change risk information? (Science of science communication course exercise)

In today's session of the Science of Science Communication course, we are discussing readings on effective communication of probabilistic risk information. The topic is actually really cool, with lots of empirical work on the mechanisms that tend to interfere with (indeed, bias) comprehension of such information as well as on communication strategies--including graphic presentation--that help to counteract these dynamics.

The focus (this week & next) is primarily on presentation of risk and other forms of probabilistic information in the context of personal health-care decisionmaking. 

But someone did happen to show me this climate-change risk graphic and ask me if I thought it would be effective.

I passed it on to the students in the class and asked them to answer the question based on several alternative assumptions about the messenger, audience, and goal of the communication. 

a.    A climate change advocacy group, which is considering whether to include the graphic in a USA Today advertisement in hope of generating public support for carbon tax. 

b.    Freelance author considering submitting an article to Mother Jones magazine. 

c.     Freelance author considering submitting an article to the Weekly Standard. 

d.    A local municipal official presenting information to citizens in a coastal state who will be voting on a referendum to authorize a government-bond issuance to finance adaptation-related infrastructure improvements (e.g., building sea-walls and storm surge gates, moving coastal roads inland). 

e.    The author of an article to be submitted for peer review in a scholarly “public policy” journal. 

f.     A teacher of a high school "current affairs" class who is considering distributing the graphic to students.

Curious what you all think, too. (If you can't make it out on your screen, click on it, and then click again on the graphic on the page to which you are directed.)


The relationship of LR ≠1J concept to "adversarial collaboration" & "replication" initiatives

So some interesting off-line responses to my post on the proposed journal LR ≠1J.  

Some commentators mentioned pre-study registration of designs. I agree that's a great practice, and while I mentioned it in my original post I should have linked to the most ambitious program, Open Science Framework, which integrates pre-study design registration into a host of additional repositories aimed at supplementing publication as the focus for exchange of knowledge among researchers.

Others focused on efforts to promote more receptivity to replication studies--another great idea. Indeed, I learned about a really great pre-study design registration program administered by Perspectives on Psychological Science, which commits to publishing results of "approved" replication designs. Social Psychology and Frontiers on Cognition are both dedicating special issues to this approach. 

Finally, a number of folks have called my attention to the practice of "adversary collaboration" (AC), which I didn't discuss at all.

AC consists of a study designed by scholars to test their competing hypotheses relating to some phenomenon. Both Phil Tetlock & Gregory Mitchell (working together, and not as adversaries) and Daniel Kahneman have advocated this idea. Indeed, Kahneman has modeled it by engaging in it himself. Moreover, at least a couple of excellent journals, including Judgment and Decision Making and Perspectives on Psychological Science, have made it clear that they are interested in promoting AC.

AC obviously has the same core objective as LR ≠1J. My sense, though, is that it hasn't generated much activity, in part because "adversaries" are not inclined to work together. This is what one of my correspondents, who is very involved in overcoming various undesirable consequences associated with the existing review process, reports.

It also seems to be what Tetlock & Mitchell have experienced as they have tried to entice others whose work they disagree with to collaborate with them in what I'd call "likelihood ratio ≠1"  studies. See, e.g. Tetlock, P.E. & Mitchell, G. Adversarial collaboration aborted but our offer still stands. Research in Organizational Behavior 29, 77-79 (2009).

LR ≠1J would systematize and magnify the effect of AC, and in a way that avoids the predictable reluctance of "adversaries" -- those who have a stake in competing hypotheses -- to collaborate.

As I indicated, LR ≠1J would (1) publish pre-study designs that (2) reviewers with opposing priors agree would generate evidence -- regardless of the actual results -- that warrants revising assessments of the relative likelihood of competing hypotheses. The journal would then (3) fund the study, and finally, (4) publish the results.

This procedure would generate the same benefits as "adversary collaboration" but without insisting that adversaries collaborate.

It would also create an incentive -- study funding -- for advance registration of designs.

And finally, by publishing regardless of result, it would avoid even the residual "file drawer" bias that persists under registry programs and  "adversary collaborations" that contemplate submission of completed studies only.

Tetlock & Mitchell also discuss the signal that is conveyed when one adversary refuses to collaborate with another.  Exposing that sort of defensive response was the idea I had in mind when I proposed that  LR ≠1J publish reviews of papers "rejected" because referees with opposing priors disagreed on whether the design would furnish evidence, regardless of outcome, that warrants revising estimates of the likelihood of the competing hypotheses.

As I mentioned, a number of journals are also experimenting with pre-study design registration programs that commit to publication, but only for replication studies (or so I gather--still eager to be advised of additional journals doing things along these lines).  Clearly this fills a big hole in existing professional practice.

But the LR ≠1J concept has a somewhat broader ambition. Its motivation is to try to counteract the myriad distortions & biases associated with NHT & p < 0.05 -- a "mindless" practice that lies at the root of many of the evils that thoughtful and concerned psychologists are now trying to combat by increasing the outlets for replication studies. Social scientists should be doing studies validly designed to test the relative likelihood of competing hypotheses & then sharing the results whatever they find. We'd learn more that way. Plus there'd be fewer fluke, goofball, "holy shit!" studies that (unsurprisingly) don't replicate.
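To make the "LR ≠ 1" idea concrete, here is a toy calculation of my own (the numbers and the two hypothetical reviewers are made up for illustration; nothing here comes from the proposal itself). In odds form, Bayes's theorem says posterior odds = prior odds × likelihood ratio. A design whose possible results all carry LR = 1 can't move anyone's beliefs no matter how it comes out; one whose results carry LR ≠ 1 moves the beliefs of reviewers with *opposing* priors in the same direction:

```python
# Toy sketch of likelihood-ratio updating (illustrative numbers only).
# Posterior odds = prior odds * likelihood ratio (Bayes's theorem, odds form).

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Revise odds in favor of hypothesis H1 (vs. H2) in light of a result."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: float) -> float:
    """Convert odds in favor of H1 into a probability."""
    return odds / (1.0 + odds)

# Two hypothetical reviewers with opposing priors on H1:
skeptic_prior = 0.25   # thinks H1 is 4x *less* likely than H2
believer_prior = 4.0   # thinks H1 is 4x *more* likely than H2

# A design both agree would yield LR = 3 for H1 if the predicted effect
# appears. The observed result moves BOTH reviewers toward H1:
for name, prior in [("skeptic", skeptic_prior), ("believer", believer_prior)]:
    posterior = update_odds(prior, 3.0)
    print(f"{name}: P(H1) {odds_to_prob(prior):.2f} -> {odds_to_prob(posterior):.2f}")

# Whereas a result carrying LR = 1 -- the case the journal's name excludes --
# leaves every prior exactly where it started:
assert update_odds(skeptic_prior, 1.0) == skeptic_prior
```

The point of the sketch is just that informativeness (LR ≠ 1, agreed on in advance) is a property of the design, not of which way the result happens to come out.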

But I don't mean to be promoting LR ≠1J over the Tetlock & Mitchell/Kahneman conception of AC, over pre-study design registration, or over greater receptivity to publishing replications/nonreplications.

I would say only that it makes sense to try a variety of things -- since obviously it isn't clear what will work. In the face of multiple plausible conjectures, one experiments rather than debates!

Now if you point out that LR ≠1J is only a "thought experiment," I'll readily concede that, too, and acknowledge the politely muted point that others are actually doing things while I'm just musing & speculating. If there were the kind of interest (including potential funding & commitments on the part of other scholars to contribute labor), I'd certainly feel morally & emotionally impelled to contribute to it.  And in any case, I am definitely impelled to express my gratitude toward & admiration for all the thoughtful scholars who are already trying to improve the professional customs and practices that guide the search for knowledge in the social sciences.