I read a couple of interesting studies of risk and the “white male effect” recently, one by McCright and Dunlap published (advance on-line) in the Journal of Risk Research and another in working paper form by Julie Nelson, an economist at the University of Massachusetts at Boston.
The “white male effect” (WME) refers to the observed tendency of white males to be less concerned with all manner of risk than are women and minorities. The phenomenon was first observed (and the term coined) in a study by Flynn, Slovic & Mertz in 1994 and has been poked and prodded by risk-perception researchers ever since.
WME was the focus of one early Cultural Cognition Project study. Extending findings by Finucane, Slovic, Mertz, Flynn & Satterfield, a CCP research team (which included WME veterans Slovic & Mertz!) found that WME could be largely attributed to the interaction of cultural worldviews with race and gender. The WME was not so much a “white male effect” as a “white hierarchical and individualistic male effect,” reflecting the extreme risk skepticism of men with those worldviews.
The design and hypotheses of the CCP study reflected the surmise that WME was in fact a product of “identity protective cognition.” Identity-protective cognition is a species of motivated reasoning that reflects the tendency of people to form perceptions of risk and other facts that protect the status of, and their standing in, self-defining groups. White hierarchical individualistic males were motivated to resist claims of environmental and certain other risks, we conjectured, because the wide-spread acceptance of those claims would justify restrictions on markets, commerce, and industry—activities important (emotionally and psychically, as well as materially) to the status of white men with those outlooks.
The McCright and Dunlap article corroborates and strengthens this basic account of WME. Using political ideology rather than cultural worldviews to measure the latent motivating disposition, M&D find that the interaction of conservatism with race and gender explains perceptions of a wide range of environmental risks (thus enlarging on an earlier study of their own, in which they focused on climate change).
M&D suggest that WME can be seen as being generated jointly by identity-protective cognition and “system justification,” a psychological dynamic that is said to generate attitudes and beliefs supportive of the political “status quo.” They defend this claim convincingly with the evidence that they collected. But I myself would be interested to see a study that tried to pit these two mechanisms against each other, since I think they are in fact not one and the same and could well be seen as rival accounts of many phenomena featuring public controversy over risks and related policy-consequential facts.
Nelson’s paper presents a comprehensive literature review and re-analysis of various studies—not just from the field of risk perception but from economics and decision theory, too—purporting to find greater “risk aversion” among women than men.
Actually, she pretty much demolishes this claim. The idea that gender has some generic effect on risk perception, she shows, is inconsistent with the disparity in the size of the effects reported across various settings. Even more important, it doesn’t persist in the face of experimental manipulations that are more in keeping with explanations based on a variety of context-specific or culturally grounded dynamics (such as stereotype threat).
Nelson hints that the ubiquity of the “female risk aversion” claim in economics might well reflect the influence of a culturally grounded expectation or prototype on the part of researchers and reviewers—an argument that she in fact explicitly (ruthlessly!) develops in a companion essay to her study.
I got so excited by the papers that I felt like I had to do some data analysis of my own using responses from a nationally representative sample of 800 subjects who participated in a CCP study in late September.
The top figure, which reflects a regression model that includes only gender and race, shows the classic WME for climate change (the outcome variable is the “industrial strength risk perception” measure, which I’ve normalized via z-score).
The bottom figure graphs the outcome once the worldview measures and appropriate race/gender/cultural interaction terms are added. It reveals that WME is in truth a “white hierarchical individualistic male effect”: once the intense risk skepticism associated with being a white, hierarchical individualistic male is taken into account, there’s no meaningful gender or race variance in climate change risk perceptions to speak of.
For fun (and because the risk perception battery in the study also had this item in it), I also ran a model for “the risk that high tax rates for businesses poses to human health, safety, or prosperity” in American society. Relative to those for climate-change risk perceptions, the results are inverted:
In other words, white males are more worried about this particular risk, although again the gender-race difference is an artifact of the intensity of the perceptions of white hierarchical individualistic males.
That these characteristics predict more risk concern here is consistent with the identity-protective cognition thesis: because it burdens an activity connected to status-enhancing roles for individuals with this cultural identity, white hierarchical individualistic males can be expected to form the perception that high tax rates on business will harm society generally.
This finding also bears out Nelson’s most interesting point, in my view, since it confirms that “men are more risk tolerant than women” only if some unexamined premise about what counts as a “risk” excludes from assessment the sorts of things that scare the pants off of white men (or at least hierarchical, individualistic ones).
Cool papers, cool topic!
Kahan, D.M., Braman, D., Gastil, J., Slovic, P. & Mertz, C.K. Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception. Journal of Empirical Legal Studies 4, 465-505 (2007).
Last weekend I attended and made a presentation at a session of the Advanced Science & Technology Adjudication Resource Center (ASTAR) program.
ASTAR is an amazing concept. The goal of the program is to train a cadre of “science & technology resource” judges with the knowledge needed to preside over cases involving highly complex scientific issues.
Prospective ASTAR resource judges are awarded training scholarships after being nominated by the judiciaries in their state (or by one of the two participating federal courts). They then must complete 120 hours of training, including 60 hours of participation in regularly convened sessions that focus on one or another specific area of science. Once they get through that—if, really; there’s a “Ranger School” aspect to this—they are deemed ASTAR Fellows, and play an active role in the conduct of the program in addition to serving as their jurisdictions’ “resource judges.”
As impressive as all this sounds, seeing it in action is even more awe-inspiring.
The topic for this session was “Management Of Complex Cases Involving Environmental Hazards.” There were 22 (I think; I lost count!) 3/4-hour sessions crammed into the weekend. Most of them involved nuclear radiation and were taught—very expertly—by scientists from the Los Alamos National Laboratory (where I got to give a talk on Monday; more on that later).
The judges were dedicated students—bombarding the lecturers with insightful questions (many of which related to the readings the judges were assigned to do before arrival).
In my session, I talked about “risk perception & science communication.” The basic message I tried to impart was that cultural cognition is something that has to be understood by those who manage any process of fact-finding, particularly one involving laypeople (or experts, for that matter, making decisions outside of their own domains of expertise).
Obviously judges fit that description, and in addition to reviewing studies that show the impact of cultural cognition on public risk perceptions generally, I also showed how the same dynamics can affect jurors’ perceptions of trial evidence. (Slides here.)
Actually, courts are in many respects way ahead of other institutions, in & out of government, in preparing themselves to play an intelligent role in managing the impact of cultural cognition on factfinding.
Judges know that valid evidence doesn’t establish its own validity or otherwise ineluctably guide people to the truth.
They know that ordinary people likely can make accurate and defensible factual determinations on matters that turn on scientific and other forms of evidence -- but only if information is presented to them in a form and (just as important) an environment suited to the faculties ordinary people use to identify who knows what about what.
They also know that how to assure that information gets presented in such a manner is not something one can just figure out by personal hunch & speculation. Fitting the presentation of scientific & other evidence to the reasoning faculties of ordinary people is a topic that admits of -- indeed, demands -- scientific investigation.
Accordingly, judges (or at least the best ones, like those who are part of & support ASTAR) want to be sure they keep up with what’s scientifically known about how to promote reliable factfinding.
That’s not the case, I pointed out to the ASTAR judges, for many other actors whose job it is to help ordinary members of the public figure out facts—ones essential to planning their financial futures, to making intelligent decisions as consumers of health care, and to making informed decisions as citizens of a self-governing society.
To illustrate this point, I told the judges about the horrendous and inexcusable science-communication misadventure surrounding the HPV vaccine. Combining our CCP study on HPV vaccine risk perceptions with media reports, I reviewed how the vaccine came to be stigmatized with the divisive cultural meanings that continue to suppress vaccination rates.
Merck polluted the science communication environment on that one. But that happened only because the FDA and CDC didn’t even know that the path the company was urging on them was one that would fill the atmosphere with disorienting partisan meanings. They didn’t know it was actually their job to make sure that didn’t happen.
And the reason they didn’t know those things, I’m sure, is that they were (and I’m worried likely remain) entirely innocent of the science of science communication.
In its monumentally important report on the state of forensic science, the National Academy of Sciences called for courts and legislators, law-enforcement agencies and universities all to combine to bring the “culture of science to law.”
The ASTAR judges are doing that.
What's more, they are doing it in a way that reflects the signature virtues of their own profession—including its insistence that lawyers and judges become familiar with expert or technical forms of knowledge essential to the performance of their own work, and that judges in particular assume responsibility for securing the conditions most conducive to informed decisionmaking in the courtroom.
Bringing these aspects of the culture of law to science would go a long way to remedying the institutional deficits in science communication that prevent our society from making full use of the vast bounty of knowledge at its disposal.
A friend & collaborator asked me,
So...could you send me a quick tip/reference on how to best graph interactions in regression? I'm just thinking of simple line-charts, comparing divergent slopes for two or three different groups after controlling for the other vars in the equation. I'm *sure* this is easily done, but I'm blanking on how. I mean, it's easy enough to draw the slope based on the unstandardized coefficient. And the Y-intercept to start that line from is...what? the B of the constant?
There are excellent papers that reflect general dissatisfaction w/ how social scientists tend to graphically report (or not) the results of multivariate regression models. They include:
- Gelman, A., Pasarica, C. & Dodhia, R. Let's Practice What We Preach: Turning Tables into Graphs. Am. Stat. 56, 121-130 (2002).
- King, G., Tomz, M. & Wittenberg, J. Making the Most of Statistical Analyses: Improving Interpretation and Presentation. Am. J. Pol. Sci. 44, 347-361 (2000).
- Kastellec, J.P. & Leoni, E.L. Using Graphs Instead of Tables in Political Science. Perspectives on Politics 5, 755-771 (2007).
I'll show you some examples below but here are some general tips I'd give:
a. *don't* graph data after splitting the sample (e.g., into "high," "medium" & "low" political sophistication)... Graph the results of the model that includes all the relevant predictors & cross-product interaction terms as applied to the entire sample; those are the results you are trying to display, & splitting the sample will change/bias the parameter estimates.
b. consider z-score normalization for the outcome variable: you won't have to worry about the intercept (it should be zero, of course), and you'll avoid lots of meaningless "white space" if values within +/-1 or +/-2 SDs (the end points for the y-axis) are concentrated within a middling portion of the outcome measure. Also, for most readers, reporting the impact in terms of SDs of the outcome variable will be more intelligible than differences in raw units of some arbitrary scale (the sort you'd get by summing likert items to form a composite likert scale, e.g.)
c. rather than graphing *slopes*, consider plotting regression estimates based on sensibly contrasting values for the predictors (and corresponding values for the cross-product interaction term); the "practical effect" of the interaction is likely to be easier to grasp that way than comparison of visual differences in slopes
d. if you are using OLS to model responses to a likert item, consider using ordered logit instead -- maybe you should be doing this anyway, but in any case, probabilities of responding at a particular level (or maybe a range of levels; say "agree either slightly, moderately, or strongly" vs. "disagree slightly, moderately, or strongly") conditional on levels of the predictor & moderator are graphically more intelligible than estimated values on an arbitrary continuous scale.
e. consider graphing estimated *differences* (& corresponding CIs) in the outcome variable at different levels of the moderator; e.g., if the difference between subjects from different groups (or who vary +/- 1 SD on some continuous predictor) depends on the value of some continuous moderator, then use a bar graph w/ CIs or some such to show how much greater the estimated difference between the two groups is at the two levels of the moderator
f. consider Monte Carlo simulation of the estimated impact of contrasting sets of predictors & moderators (& associated interactions); do kernel-density plots for 1,000 or 2,000 values of each -- it's a *really* good way to show both the contrast in the estimates & the precision of the estimates (much better than standard CIs). See King et al. above
g. usually prefer connected lines to bar graphs to display contrasts; the former are more comprehensible
h. in general, don't use standardized regression coefficients but do center continuous predictors (or convert them to z-scores) so that people who are reading the table can more readily interpret them
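To make tips (c), (e) & (f) concrete, here's a minimal sketch in Python with statsmodels -- simulated data & hypothetical variable names, not any actual CCP model -- that estimates the group difference at contrasting moderator values rather than plotting raw slopes, and then simulates the estimate's uncertainty per King et al.:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
# Simulated data: binary group indicator, z-scored continuous moderator
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),
    "mod": rng.normal(size=n),
})
# True group gap of 0.3 that grows by 0.5 per SD of the moderator
df["y"] = 0.3 * df["group"] + 0.5 * df["group"] * df["mod"] + rng.normal(size=n)

model = smf.ols("y ~ group * mod", data=df).fit()

# Tips (c)/(e): estimate the group difference at sensibly contrasting
# moderator values (-1 SD vs. +1 SD) instead of comparing slopes visually
contrasts = pd.DataFrame({"group": [0, 1, 0, 1], "mod": [-1, -1, 1, 1]})
pred = model.get_prediction(contrasts).summary_frame(alpha=0.05)
gap_low = pred["mean"].iloc[1] - pred["mean"].iloc[0]   # gap at mod = -1 SD
gap_high = pred["mean"].iloc[3] - pred["mean"].iloc[2]  # gap at mod = +1 SD
print(f"group gap at -1 SD: {gap_low:.2f}; at +1 SD: {gap_high:.2f}")

# Tip (f): Monte Carlo draws from the coefficient distribution; the draws
# of the gap at +1 SD could feed a kernel-density plot
draws = rng.multivariate_normal(model.params.values,
                                model.cov_params().values, 2000)
names = list(model.params.index)
sim_gap = draws[:, names.index("group")] + draws[:, names.index("group:mod")]
lo, hi = np.percentile(sim_gap, [2.5, 97.5])
print(f"simulated 95% interval for gap at +1 SD: [{lo:.2f}, {hi:.2f}]")
```

The two printed gaps are what a bar graph w/ CIs (tip e) would display; the simulated draws are what a kernel-density plot (tip f) would display.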
Have attached [reproduced below] a bunch of CCP study examples that reflect one or another of these strategies or related ones. BTW, of course, all of these reflect things that I learned to do from collaborating w/ Don [Braman], who like all great teachers teaches people how to teach themselves.
note: all examples below are clickable thumbnails that expand to larger size for closer inspection
Check out the religiosity & science comprehension interactions here, too.
Judge Mark Kravitz, of the Federal District Court for the District of Connecticut died yesterday from Lou Gehrig’s disease. He was 62.
In my 2011 Harvard Law Review Foreword, I described a style of judicial reasoning and opinion writing that I characterized as “aporetic.”
Aporia is an ancient Greek term referring to a mode of argumentative engagement that evinces comprehension of an issue’s ineradicable complexity.
Aporia is not a state of uncertainty or equivocation (indeed, it’s not really anything that can be described by a single English word I can think of). One can reach a definitive conclusion about a problem and still be aporetic in assessing it.
But if one adopts a position that denies or purports to dispel the difficulty that a truly difficult issue poses, or that fails to recognize the undeniable weight of the opposing considerations on either side, then one isn’t being aporetic. Indeed, in that case, one is actually not getting the issue at hand, no matter how one resolves it. The effacement of real complexity signifies a deficiency in intellectual character.
Judicial reasoning—of the sort that is expressed openly in court opinions—tends not to be aporetic. Of course, most of the issues that courts resolve are not fraught with complexity. But even in those that really are, judges tend to affect a posture of unqualified, untroubled confidence.
This form of comic overstatement is most conspicuous in Supreme Court opinions. Every relevant source of guidance (text, purpose, precedent, policy, tradition, “common sense” etc.) indisputably, undeniably converges on a single conclusion, the Justices emphatically insist. We are supposed to believe this even though the Court’s primary criterion for review is the existence of an issue that has divided lower courts, and even though the Justices themselves often disagree about which outcome in a particular case is supported indisputably, undeniably by every conceivable consideration.
But actually, there’s nothing funny about such puffing. On the contrary, it’s disturbing.
Hyperbolic certitude diminishes the legitimacy of the law by conveying to those who are disappointed by the outcome of a case that the judge who decided it was biased, and intent on deception.
It also denigrates reason. It embodies in the law an attitude that breeds cynicism and dulls reflection.
In my Foreword, I defended an alternative, aporetic idiom of judicial reasoning that recognizes rather than effaces genuine complexity. Aporia in judicial reasoning, I argued, should be seen as a judicial virtue—because in fact it is. Being able to see complexity and being moved to engage it openly are character dispositions, and they conduce to being a good judge. A judge who is committed to being just will experience aporia when he or she must decide a genuinely complex case; and by resort to aporetic reasoning in his or her opinion, that judge assures citizens generally that their rights are being determined by someone committed to judging impartially.
Mark Kravitz had this virtue. In fact, for me, he was and remains the model of it. Before I had occasion to observe him as a judge, I had (despite many years studying and practicing law) only a dim, inchoate sense of judicial aporia; when I try to make the picture as vivid and compelling for others as it now is for me, I try to describe Mark Kravitz.
Last April, Judge Kravitz decided a case—one of his last—in which members of the Occupy protest movement brought a suit to try to halt the imminent, forcible removal of their tent city from the New Haven Green. He denied their motion for an injunction.
No one can read his opinion, though, and escape the conclusion that the issues it presented were difficult. Indeed, in a tone that was rare in his opinions, Judge Kravitz expressed anger at the city’s attorneys for attempting to avoid—and thus for seeking to tempt the court to avoid—acknowledging the seriousness of the Occupy protestors’ position. Dismissing the city attorneys’ argument that the protestors’ encampment did not qualify as “speech” protected by the First Amendment, the judge wrote: “One would have to have lived in a bubble for the past year to accept Defendants' claim that Occupy's tents ‘could simply mean that the plaintiffs enjoy camping.’ ”
The Occupy movement, in New Haven as elsewhere, aims to exemplify its message: to express the desire that the economically disenfranchised become more central to American public life by literally placing the economically disenfranchised in the center of America's public spaces. Defendants need not deny the obvious political expressivity of this act in order to argue that reasonable limits on acts like this may still be necessary and appropriate.
The protestors deserved an opinion that acknowledged their dignity and public spirit. As disappointed, moreover, as they no doubt were to lose the case, I suspect they will be able to make good use of the portion of the opinion I’ve quoted (likely they will see the value, e.g., of including it in a demonstration-permit application, something the New Haven protestors had denied the City the authority to require as a condition of convening a protest on the Green).
Those inclined to distrust the City deserved to know that its stated reasons for ending the protest were being scrutinized by a decisionmaker intent on being fair. They got that, too, from the quoted language, and from the numerous points in the opinion that acknowledged the force and seriousness of the protestors’ arguments even in the course of deciding against them....
We all deserve judges who are unafraid to see, and unafraid to tell us they see, genuine complexity. We have one less judge of this character today than we had yesterday. But by furnishing us such a clear and inspiring picture of what this judicial virtue looks like, Mark Kravitz gave us a resource we can use to assure that there are many, many more aporetic judges in the future than we have ever had in the past.
Giving lecture today at Hampshire College. Here's the summary:
Culture, Rationality, and Risk Perception: the Tragedy of the Science-Communication Commons
From climate change to the HPV vaccine to gun control, public controversy over the nature of policy-relevant science is today a conspicuous feature of democratic politics in America. A common view attributes this phenomenon to the public’s limited comprehension of science, and to its resulting vulnerability to manipulation by economically motivated purveyors of misinformation. In my talk, I will offer an alternative account. The problem, I will suggest, is not a deficit in rationality but a conflict between what’s rational at the individual and collective levels: ordinary members of the public face strong incentives – social, psychological, and economic – to conform their personal beliefs about societal risk to the positions that predominate within their cultural groups; yet when members of diverse cultural groups all form their perceptions of risk in this fashion, democratic institutions are less likely to converge on scientifically informed policies essential to the welfare of all. I will discuss empirical evidence that supports this analysis--and that suggests potential strategies for securing the collective good associated with a science communication environment free of the conflict between knowing what is known and being who we are.
The talk will feature data from our study of science-comprehension and cultural polarization on climate change and our experimental examination of how using geoengineering as a framing device can promote more open-minded engagement with climate science.
I haven't had a chance to read this really interesting-looking book yet (I just ordered it) but I find the simple existence of it fascinating.
As a national political issue, the issue of global climate change has much more relevance to people as a focus for conveying to themselves & others that they belong to a certain cultural "team" than it does as any sort of thing that might affect their or anyone else's health, welfare, etc. (now or in the future). As individual consumers, voters, advocates, etc., ordinary members of the public just don't matter enough to have any effect on the risks posed by global climate change (or by ill-considered responses to it). On the other hand, how their beliefs relate to those of others in their community matters a ton. We judge people's characters by the positions they take on whether climate change is "a global crisis" or a "massive hoax." Being out of sync with those on whom we depend for support -- materially, emotionally, psychically -- can be devastating. So people tend to extract from the "evidence" on climate change the information that really matters: what is someone like me supposed to think?
But this sort of dynamic is peculiar, really, to the framing of "climate change" as a national or global policy issue. When people engage issues of climate-change impact in other settings, the consequences and meanings can be very different.
This is a point I have been stressing recently in advocating more attention to political decisionmaking surrounding local adaptation (here & here, e.g.), where people engage the issue as property owners or scarce-resource consumers, where the people they are engaging are their neighbors, and where the language they have for sorting these issues out fits comfortably with their cultural identities. Those are conditions much more hospitable to open-minded, constructive engagement with climate science.
Well, the "business risk" setting is another that has advantages like these. Here people are again engaging the issue not as a symbolic one of significance to their identities as members of tribes or teams but as a financial one that could affect them in their capacity as economic actors. Here too there is a language for addressing the matter that all interested parties share and that doesn't evince hostility to or contempt for the identities of some.
What's more, the very appearance of this sort of engagement with the issue might arguably be expected to have a positive impact on discussion of climate science in other places. It's tangible evidence that people w/ a dollars-and-cents stake, & not just a political-ideological one, are taking the science very seriously. That in itself supplies a resource that can be used, I think, to help counteract the suspicion and distrust that have poisoned the discussion environment at the national level.
Having said all this, I do think the length of the hair of the guy on the book cover might arouse suspicion in the minds of typical hierarchical individualists...
What should we teach kids (& others) about cultural conflict over science? And should science education aim to "overcome" cultural cognition?
I got into an interesting email exchange with my friend Mark McCaffrey, the Programs and Policy Director at the National Center for Science Education. (One of the many things I'm grateful to Mark for is disabusing anyone of the misconception that our Nature Climate Change study on science literacy and cultural polarization implies that science literacy is irrelevant to enlightened democratic decisionmaking.)
In the course of the correspondence, Mark noted
[I]n our arena of education, one of the top three issues for teachers [is] "what's going to happen and what can we do about it locally?", along with "how do scientists know what they know?" and "how do I deal with controversy in the classroom about climate change?" Bringing local context (geography and culture) obviously is imperative.
He also stated,
I'm curious, in part due to a conversation [after the recent University of Colorado conference on culture and climate change], on what role education has in shaping and helping transcend cultural frames.
As Mark’s points often do, these ones provoked a chain of reflections on my part, which included these:
A. What to teach kids (and other curious people) about the nature of cultural conflict over science
The report on the experience & interest of the teachers is fascinating. It makes me think of what a climate scientist told me recently. He reported that when he talks to members of the public, including student audiences, one thing they want to know is why there is so much controversy; that's not what they expect to see, not what they associate with scientific understanding of an issue, & they find it mysterious & puzzling.
I had two reactions:
(1) It's amazing -- inspiring, even -- to see that citizens are curious about this phenomenon, that they want to understand it; they (or some at least) notice and have the same reaction to this peculiar social phenomenon that they (or some) have to an intricate or surprising natural one. That sensibility is one of the most distinctive and admirable characteristics of our species; the commitment to giving people the resources to satisfy this sort of interest -- the education in science, certainly, to be able to comprehend the sort of knowledge that exists, but also ready & reliable access to whatever knowledge has been amassed -- is one of the signature qualities of a good society.
If in fact people -- including high school students (or maybe even younger ones) -- have this reaction to public conflict over science, then I think it would be very very worthwhile to figure out how to give them the resources a curious and intelligent citizen could use to participate in whatever collective knowledge we have about it. Certainly, I'd be happy to give advice to any science educator who thought this was a worthy objective. That's not the sort of education I do, really, but if someone who does do it wanted to have someone to work with who could try to help him or her identify what to try to make comprehensible to people, I'd be delighted to help.
(2) When I heard this report -- of the citizens (including, again, high school students) who were confused about this issue, it made me think too that the chance to answer the question is itself a sort of civic opportunity to contribute to a climate for discussion that itself helps, if not to dispel the confusion, at least to ameliorate the negative impact it has on common engagement with contested science issues.
In response to the scientist who reported on the curiosity of the public to make sense of the climate-change controversy, another person in the conversation noted that there are people strongly committed to misleading members of the public and who were filling the media with misinformation.
I don't deny that, but I don't think it is the aspect of the problem that is most important or useful for these curious people to understand. What is most important is that the conflict about climate change is the signature of a kind of degradation of the science communication environment, the quality of which is essential to the interest we all have in being able reliably to ascertain what is collectively known.
There will always be more that is collectively known (through science) than we can meaningfully comprehend (life is short, and the world complex enough to demand specialization). As a result, we have to make use of our ability to identify and properly interpret the signs around us about who knows what about what.
Ordinarily we are great at that; but sometimes something goes wrong -- maybe from deliberate efforts to confuse but often simply as a result of misadventure and miscalculation -- that creates conflict and chaos in those signs. That sort of state is something that inevitably confuses all of us; it is something we are all vulnerable to; and it is something the avoidance of which is critical to our common interests--however we feel about climate change, and however we feel about moral and social issues of the sort that inevitably divide people who enjoy freedom of thought.
We have to try to figure out how to respond to climate change as natural phenomenon, and as an issue that divides us. But we need to think more generally about what we can do to protect our science communication environment from the sort of contamination that accounts for this peculiar and pernicious form of conflict over what we know....
B. Does knowing what is known by science require teaching students to "overcome" cultural cognition?
On overcoming cultural frames with education ... My reaction is "yes & no."
Yes in the sense that I think the sort of influences associated with cultural cognition are not "all there is" -- by any means! -- to engaging scientific information, and are qualified in particular by "professional habits of mind." That is, I think part of the nature of professionalization is that it imbues in those who are subject to it a set of conceptual frameworks, a collection of reasoning skills, and also a cluster of dispositions (some reflective & conscious but others more or less automatic and even emotional) that help them reliably to engage information in the manner suited to accomplishment of the expert reasoning task at hand.
This is so for scientists; but it is true for doctors, lawyers, journalists etc. These habits of mind will usually steer professionals away from the sorts of errors they might make were they to engage information through the mechanisms distinctive of cultural cognition.
But I don't think that it is feasible for everyone to attain the professional habits of mind of the expert with respect to every domain in which they will need to participate in or have access to expert knowledge. Even those who have experts' professional habits of mind in one domain will need to make use of information outside of it, in ones in which the habits of mind most suited to engaging information are different from the ones they use in theirs.
Here, then, is where I come to the "no" part of the answer. Because we must in fact participate in or apprehend what is known in domains in which we lack the substantive knowledge and habits of mind distinctive of those who produce it, we will -- all of us-- need to exercise a distinct faculty suited to ascertaining what is collectively known (one that often involves being able to identify who knows what they are talking about).
This is conjecture on my part, but I am of the view that the dynamics associated with cultural cognition are integral to the operation of that faculty. We figure out what is known by accessing cues of certification that are native to affinity groups within which we are comfortable and socially competent. The groups are diverse (how could they not be in a pluralistic society?); but they are all generally *reliable* in guiding their members to an accurate understanding of what is collectively known -- by science and by other expert ways of knowing (what groups could possibly persist if they failed to put their members in touch with such knowledge, which is, in fact, critical to individual well-being?).
So "cultural frames" are not something to be overcome in the interest of making us able to know things; they are vital pieces of equipment that we need in order to participate in what is collectively known. The most one could do, I suppose, is replace them with something else -- but the other thing would not be professional habits of mind, since those will always be out of reach for most and in any case domain specific -- but rather some other regime of social certification.
I don't see cultural cognition as a bias, or even as a "heuristic." It is an intrinsic component of human rationality. But its reliable operation presupposes certain conditions -- what I would characterize, have characterized already in this msg, as an uncontaminated or clean "science communication environment." The goal is not to "overcome culture"; it is to protect the conditions in which culture can make the valuable -- and amazing -- contribution that it does to our being rational beings capable of acquiring knowledge through aggregated, cumulative inquiry into the workings of nature.
Mark responded, predictably and characteristically, with additional thoughtful comments relating to whether these sorts of ideas (which I think he himself might qualify or revise; he is the one with the professional habits of mind suited to educating people, including science educators) might be turned into concrete directives and materials relevant to science education. That would be fantastic in my view. I'd certainly be willing to help him or other science-education experts explore this possibility!
I got a thoughtful email from a natural scientist who said that he and some colleagues had been discussing the Sunstein NYT op-ed as well as the reflections I posted on it the other day and had some questions:
I apologize in advance if my questions are too basic or clumsy, but I’m a little out of comfort zone as a physical scientist. What you’ve described in various places as to what’s driving polarization makes perfect sense to me, however, the primary question I have is if this is somehow a uniquely American phenomenon. The reason I ask is with exceptions of course the rest of the world as I’ve been told and witnessed at times does not experience the “questioning of the science” to the degree we seem to enjoy. I’m approaching it through the lens of climate, but it may be true of other contentious scientific issues as well. In your sampling in various studies have they been international samples or just American? So is this effect somehow tied to our current American societal system or is it more general for all of humanity? And has this effect been increasing or becoming more pronounced and moving into new spheres of science over time? I know some of the history of previous “debates” such as evolution and cigarette smoke, but is it becoming more pervasive? This is really a fascinating, yet crucially important topic for me. And frankly it’s been humbling as a scientist that my word is not sacrosanct and a small business owner or a minister may actually be a more effective communicator of the science than I am.
Here is my response:
Nothing at all simplistic about your questions! I'll try my best to answer...
1. The science of science communication is large, diverse, growing, and provisional. First point to realize is that there is a pretty decent-sized literature on science communication & public risk perception. It's impossible -- in a good way -- to advance concrete points w/o making judgments about which findings strike one as the most supported or the most pertinent to the issue at hand. I'll do that in responding to your inquiries. But I don't want, in doing that, to give the impression that "this is all there is to say" or "any other response has got to be wrong" etc. Actually, it's clear to me that you are already familiar with good portions of this work, so this sort of boilerplate proviso is likely completely unnecessary here; but I do feel it's important to recognize both that there are lots of live conjectures & hypotheses in play, and also lots of hard-working, smart empirical researchers whose work is well worth consulting!
2. The climate-change conflict is not a singular phenomenon. Ok... It's understandable when viewing the phenomenon "through the lens of climate" to form the impression that the sort of conflict we see over climate change is singular in all kinds of ways -- that it applies only to that issue, e.g., or is a "strange US thing," or reflects "new & emerging skepticism about science." I actually don't think any of these things is true, and that's why it is important to widen the lens, as it were.
3. The emergence of the study of public risk perceptions and science communication -- over three decades ago! The study of disconnects between public opinion and scientific opinion on environmental and technological risks has been around for at least 35 yrs. Moreover, the early impetus for it was the public's resistance to the predominant -- I think it is fair to say "consensus" -- view among scientists that nuclear power (particularly storage in deep geologic isolation) involved low risks fully amenable to effective regulation. Paul Slovic, Baruch Fischhoff & others formulated the "psychometric theory" of risk, which emphasized various dynamics neglected by the then-prevailing frameworks in decision science -- from cognitive biases of one sort or another, to distinctive qualitative valuations of risk that are independent of the sorts of things that figure in policymaking "cost-benefit" analysis. E.g., Fischhoff, B., Slovic, P., Lichtenstein, S., Read, S. & Combs, B. How Safe Is Safe Enough? A Psychometric Study of Attitudes Toward Technological Risks and Benefits. Policy Sci. 9, 127 (1978); Slovic, P., Fischhoff, B. & Lichtenstein, S. Facts versus Fears, in Judgment Under Uncertainty: Heuristics and Biases. (eds. D. Kahneman, P. Slovic & A. Tversky), pp. 163-78; Slovic, P. Perception of Risk. Science 236, 280-285 (1987). This work also looked at public concerns over risks involving food additives, water pollution, air pollution and the like.
4. Cultural theory and cultural cognition. The cultural theory of risk, associated with Mary Douglas & Aaron Wildavsky (Douglas, M. & Wildavsky, A.B. Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers. (University of California Press, Berkeley; 1982)), dates from the nuclear and clean-air debates, too. It was at that time an alternative to the psychometric theory. But "cultural cognition theory," with which Slovic has been prominently involved, essentially marries the two. See Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012). The idea is that the mechanisms featured in the psychometric theory can help to fill in why there are the sorts of relationships that Douglas posits between cultural outlooks and risk perceptions; in addition, Douglas's framework, which emphasizes systematic differences in perception of risk, and conflict over them, between groups, furnishes a basis for understanding how one and the same set of mechanisms from the psychometric theory can produce division and controversy in public debates.
5. Cross-cultural cultural cognition. These dynamics are *not* confined to the U.S. There have been plenty of studies using methods associated with the cultural theory of risk to examine conflicts over risk perception in Europe. Recently, the Cultural Cognition Project research group has been using its measures to examine conflicts over climate change in other countries, including Australia and the UK. There is also recent work emerging in Canada using measures similar to ours (I'm going to post a blog essay on this soon).
6. We aren't culturally divided over the value of what scientists have to say; we are divided over what scientists are saying. It is also a mistake, in my view, to associate any of these dynamics with skepticism about or hostility toward science. In the US in particular -- as you likely know -- there is widespread public confidence and trust in scientists. Members of the public who are culturally divided over risks like climate change & nuclear power are not divided over whether scientific consensus should be normative for risk regulation and policymaking; they are divided over what scientific consensus is. This happens because determining what "most scientists believe" is no more amenable to direct observation for ordinary people than melting glaciers are; people have to get the information by observing who is saying what in public discussion, and in that process, all the mechanisms that push groups apart are going to skew impressions of what the truth is about the state of scientific opinion.
CCP has studied this very issue, finding that groups divided over climate change process evidence of scientific consensus on that issue and various others in biased ways and thus form systematically opposed, and very unreliable, perceptions of what the state of scientific consensus is. See Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011). In my view, the perception that "climate skeptics are anti-science" is itself a product of culturally motivated reasoning, and the persistence of this view distracts us from addressing the real issue and likely even magnifies the problem by needlessly insulting a large segment of the public.
7. The existing science of science communication doesn't tell us what to do; rather it furnishes us with models and methods that we can and must use to figure that out. On "what to do": The literature is filled with potentially helpful strategies. But I really think that it's a mistake to believe it useful to just sift through the literature & boil it down into lists of "dos & don'ts," e.g., "use ideologically/culturally congenial communicators!"; "know your audience!"; "use vivid images to get attention but beware vivid images b/c they scare people & numb them!" This is a mistake, first, because in fact these sorts of admonitions can easily cause people to blunder -- e.g., to make a ham-handed effort to line up some sock-puppet advocate, whose appearance in the debate is such an obvious set up that it drives people the wrong way.
It's also a mistake b/c the "dos & don'ts," even when they are exactly right, are just too general to be of real use. They reflect conclusions drawn from studies that are highly stylized and aimed at figuring out the real mechanisms of communication. That sort of work is really important -- b/c if you don't start out with the mechanisms of consequence, you'll get nowhere. But they don't in themselves tell you what to do in any particular situation b/c they are too general, too (deliberately) remote from the details of particular communication environments.
In other words, they are *models* that those who *are* involved in communication, and who know all about the particulars of the situation they are involved in, should be guided by as they come up with strategies fitted to their communication needs. And when they do that -- when they try to reproduce in their real-world settings the effects that social scientists captured in their laboratory models -- the social scientists should be there to help them test their conjectures by observing and measuring and by collecting information. They should also collect information on how that particular field experiment in science communication worked and memorialize it and share it w/ other communicators -- for whom it will be another, even richer model of what to try in their own particular situation (where again, they should use evidence-based approaches)...
You see what I'm getting at, I'm sure. What I've just described, btw, is the process by which medicine made the transition from an experience-based craft to a science- and evidence-based one. That's got to be the next step in the science of science communication. I really urge scientists, science communicators, and scholars of science communication to take this step; and I'm happy to contribute in any way I can!
I’ve been MIA for a while – but with an emphasis on “IA.” Over the last couple of weeks I’ve had the chance, in a variety of public & semi-public fora, to advocate making local, adaptation-focused political decisionmaking the focus of evidence-based science communication initiatives.
That setting, I believe, offers tremendous opportunities for simultaneously using the science of science communication to promote enlightened self-government and acquiring even more scientific knowledge about how science communication works in democratic societies.
At the same time, the cost of failing to “go local, and bring our empirical toolkit” could be substantial.
To recap, here’s the core argument: Essentially every one of the pollutants that makes the science communication environment toxic for engaged public deliberation at the national level is absent or largely attenuated at the local level.
At the national level, positions on climate change have become indelibly suffused with meanings distinctive of rival cultural factions.
The language in which competing positions are advanced—the pious scolding of the (unworldly, vulgar) members of the public who care more about the fate of “Paris Hilton and Anna Nicole Smith” than about the fate of the planet; the denunciation of climate scientists as agents of “global socialism” and enemies of “global free markets”—corroborates that the climate debate is “in reality,” just as its protagonists insist, “a struggle for the soul of America.”
Because being out of step with one’s cultural group in struggles over the nation’s "soul" can carry devastating personal consequences, and because nothing a person believes or does as an individual voter or consumer can affect the risks that climate change pose for him or anyone else, it is perfectly predictable—perfectly rational even—for people to engage the issue of climate change as a purely symbolic or expressive issue.
In contrast, from Florida to Arizona, from New York to Colorado and California, ongoing political deliberations over adaptation are affecting people not as members of warring cultural factions but as property owners, resource consumers, insurance policy holders, and tax payers—identities they all share. The people who are furnishing them with pertinent scientific evidence about the risks they face and how to abate them are not the national representatives of competing political brands but rather their municipal representatives, their neighbors, and even their local utility companies.
What’s more, the sorts of issues they are addressing—damage to property and infrastructure from flooding, reduced access to scarce water supplies, diminished farming yields as a result of drought—are matters they deal with all the time. They are the issues they have always dealt with as members of the regions in which they live; they have a natural shared vocabulary for thinking and talking about these issues, the use of which reinforces their sense of linked fate and reassures them they are working with others whose interests are aligned with theirs.
In this communication environment, people of diverse values are much more likely to converge on, rather than become confused about, the scientific evidence most relevant to securing the welfare of all.
That’s exactly why in places like Arizona and Florida—where no one, Democrat or Republican, is campaigning for Congress or the Senate on a platform to “fix global climate change”—state officials have initiated networks of stakeholder and related decisionmaking processes aimed at addressing climate adaptation at a local level. In those deliberations, moreover, the same forms of scientific evidence that are nationally disparaged as part of a “hoax” are central to planning on projects ranging from the building of flood gates to the design of off-shore nuclear power facilities.
That’s an account of the opportunity that local, adaptation-focused engagement creates to restore the value of science as the currency of enlightened democratic decisionmaking. But it would be a huge mistake to assume that the opportunity will naturally or inevitably be taken advantage of.
Indeed, there is a considerable risk that the pollution that has contaminated the national science-communication environment will spill over and contaminate the local one as well.
We saw this happen in North Carolina, e.g., where the state legislature enacted a provision that bars use of anything but “historical data” on sea-levels in state planning. This happened because proponents of adaptation there failed to do what those in the neighboring state of Virginia were able to do in creating a rhetorical separation between the issue of local flood planning and “global climate change.” Polarizing forms of engagement have bogged down municipal planning in some parts of Florida—at the same time as progress is being made elsewhere.
Actors committed to the effective use of valid science—including municipal planners, farmers, utility companies, and conservation groups-- need science communication help and in fact are asking for it. If those interested in formulating and implementing effective “communication strategies” focus only on national public opinion, they will be effectively turning their back on these people.
At the same time, if we respond by sending them nothing more than “best practice” manuals filled with generalities (“know your audience!”; “gain attention with emotionally compelling images—but beware numbing people with emotionally compelling images!”), we’ll be offering them little that can actually help them.
By use of stylized lab studies, the science of science communication has generated critical insights about valid psychological mechanisms. Such work remains necessary and valuable.
But in order for the value associated with it to be realized, social scientists must become experts on how to translate these lab models into real, useable, successful communication strategies fitted to the particulars of real-world problems. To do that, they will have to set up labs in the field, where informed conjectures based on indispensable situation sense of local actors can form the basis for continued hypothesizing and testing.
Not only do those committed to enlightened policymaking need the science of science communication to succeed. The science of science communication needs to put itself at the disposal of those actors in order for it to continue to generate knowledge.
Many thanks to all the people who sent me emails asking if I saw Cass Sunstein's op-ed on "biased assimilation" today in the NYT: they made sure I didn't miss a good read!
Sunstein's basic argument is that inundating people with "balanced information" doesn't promote convergence on sound conclusions about policy because of "biased assimilation." For this, he cites (via the magic of hyperlinked text) the classic 1979 Lord, Ross & Lepper study on capital punishment.
Sunstein's proposal for counteracting this dynamic is to recruit ideologically congenial advocates to challenge people's preexisting views: "The lesson for all those who provide information," he concludes, is "[w]hat matters most may be not what is said, but who, exactly, is saying it."
Op-ed word limits and the aversion of editors to even modest subtlety make simplification inevitable. Given those constraints, what Sunstein manages in 800 words is a nice feat.
But being free of such constraints here, I'd say the growing "science of science communication" literature suggests a picture of public conflict over science that is simultaneously tighter and richer than the one Cass was able to present.
To begin, "biased assimilation" doesn't itself predict that identity-congruent messengers should be able to change minds. LR&L find only that people will construe information on controversial issues to reinforce what they already believe--"confirmation bias," essentially.
I believe the phenomenon at work in polarized science debates is something more general: identity-protective motivated reasoning. This refers to the tendency of people to conform their processing of information -- whether scientific evidence, policy arguments, the credibility of experts, or even what they see with their own eyes -- to conclusions that reinforce the status of, and their standing in, important social groups.
"Biased assimilation" might sometimes be involved (or appear to be involved) when identity-protective motivated reasoning is at work. But because sticking to what one believes doesn't always promote one’s status in one’s group, people will often be motivated to construe information in ways that have no relation to what they already believe.
E.g., in a study that CCP did of nanotechnology risk perceptions, we did find that individuals exposed to "balanced information" became culturally polarized relative to ones who hadn't received balanced information. But those in the "no-information" condition, most of whom knew little about nanotechnology, were not themselves culturally divided; they had priors that were random with respect to their cultural views. Thus, the subjects exposed to balanced information selectively assimilated it not to their existing beliefs but to their cultural predispositions--which were attuned to affective resonances that either threatened or affirmed their groups' way of life.
Or consider a framing experiment we did involving "geoengineering." In it, we found that individuals culturally predisposed to be dismissive toward climate-change science were much more open-minded in their assessment of such science when they were first advised that scientists were proposing research into geoengineering and not only stricter CO2 limits as a response to climate change.
Biased assimilation -- the selective crediting or discrediting of information based on one's prior beliefs -- can't explain that result, but identity-protective motivated reasoning can. The congeniality of geoengineering, which resonates with pro-technology, pro-market, pro-commerce values, reduced the psychic cost of considering information to which individuals otherwise would have attached value-threatening implications--such as restrictions on commerce, technology, and markets.
Identity-protective motivated reasoning also explains the persuasiveness of ideologically congenial advocates that Sunstein alluded to at the end of his column. The group values of the advocate are a cue about what position is predominant in a person's cultural group. If that cue is strong and credible enough, then people will go with the argument of the culturally congenial advocate even if the information he is presenting is contrary to their existing beliefs.
We examined this in a study of HPV-vaccine risk perceptions. In that experiment, we found that "balanced information" did polarize subjects along lines that reflected positions (and thus existing beliefs) predominant within their cultural groups. But when arguments were attributed to "culturally identifiable experts" – fictional public health experts to whom we knew subjects would impute particular cultural values -- individuals consistently adopted the position advocated by the expert whose values they (tacitly) sensed were most like theirs.
This study shows not only that the influence of culturally congenial experts is distinct from, and stronger than, biased assimilation. It also helps to deepen our understanding of why.
Indeed, reliable understandings of “why” -- and not merely analytical clarity -- are what's at stake here. As I'm sure Cass would agree, one needs to do more than reach into the grab bag of effects and mechanisms if one wants to explain, predict, and formulate prescriptions. One has to formulate a theoretical framework that integrates the dynamics in question and supplies reliable insights into how they are likely to interact. Identity-protective cognition (of which cultural cognition is one conception or, really, operationalization) is a theory of that sort, whereas "biased assimilation" is (at most) one of the mechanisms that theory connects to others.
If I'm right (I might not be; show me the evidence that suggests an alternative view) to see identity-protective cognition as the more general and consequential dynamic in disputes about policy-relevant science, moreover, then it becomes important to identify what the operative group identities are and the means through which they affect cognition. Sunstein suggests ideological affinity is important for the credibility of advocates. Well, sure, ideological affinity is okay if one is trying to measure identity-protective motivated reasoning. But for reasons I’ve set forth previously, I’d say cultural affinity is generally better -- if we are trying to explain, predict and formulate prescriptions that improve science communication.
As for whether recruiting ideologically congenial advocates is the "lesson" for those trying to persuade "climate skeptics," that's a suggestion that I'm sure Cass would urge real-world communicators to consult Bob Inglis about before trying. Or Rick Perry and Merck.
These two cases, of course, are entirely different from one another: Inglis took a brave stance based on how he read the science, whereas Perry took a payment to become a corporate sock-puppet. But both cases illustrate that deploying culturally congenial advocates to spread counter-attitudinal messages isn't a prescription that emerges from the literature in nearly as uncomplicated a manner as Sunstein might be seen to be suggesting.
The point generalizes. It's important to attend to the wider literature in the science of science communication because the lessons one might distill by picking out one or another study in social psychology risk colliding head on with opposing lessons that could be drawn from others examining alternative mechanisms.
Actually, I'm 100% positive Sunstein would agree with this. Again, one can't possibly be expected to address something as complex as reconciling off-setting cognitive mechanisms (here: "trust the guy with my values," on one hand, vs. "excommunicate the heretic" & the "Orwell effect," on the other) in the cramped confines of an op-ed.
Okay, enough of that. Going beyond the op-ed, I'm curious what Sunstein now thinks about the relationship between "biased assimilation" --and identity-protective motivated reasoning generally -- and Kahneman's "system 1/system 2" & like frameworks of dual process reasoning.
This was something on which a number of CCP researchers including Paul Slovic, Don Braman, John Gastil & myself, debated Cass in a lively exchange in the Harvard Law Review before he took on his post in the Obama Administration. Sunstein's position then was that cultural cognition was essentially just another member of the system 1 inventory of "cognitive biases."
But research we've done since supports the hypothesis that culturally motivated reasoning isn't an artifact of “bounded rationality,” as Sunstein puts it. On the contrary, cultural cognition recruits systematic reasoning, and as a result generates even greater polarization among people disposed to use what Kahneman calls “system 2” processing.
Indeed, in our Nature Climate Change paper, we argued that this effect reflects the contribution that identity-protective cognition makes (or can make) to individual rationality. It's in the interest of individuals to conform their positions on climate change to ones that predominate within their group: whether an individual gets the science "right" or "wrong" on climate change doesn't affect the risk that climate change poses to him or to anyone else-- nothing he does based on his beliefs has any discernable impact on the climate; but being "wrong" in relation to the view that predominates in one's group can do an individual a lot of harm, psychically, emotionally, and materially.
The heuristic mechanisms of cultural cognition (including biased assimilation, cultural-affinity credibility judgments) steer a person into conformity with his or her cultural group and thus help to make that person's life go better. And being adept at system 2 only gives such a person an even greater capacity to "home in" on & defend the view that predominates in that person's group.
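The individual-rationality argument can be put in toy expected-value terms. All numbers below are hypothetical, chosen only to illustrate the asymmetry, not drawn from the Nature Climate Change paper:

```python
# Toy illustration (hypothetical numbers): why conforming one's beliefs to
# one's cultural group can be individually rational even when the group is
# wrong about the science.
p_pivotal = 1e-9          # chance one person's belief/behavior changes the climate outcome
climate_stakes = 1e6      # personal value of averting climate harm (shared by everyone)
dissent_cost = 100.0      # psychic, social & material cost of deviating from one's group

# Expected payoff of holding the scientifically "correct" but group-deviant
# view, relative to simply conforming:
ev_deviate = p_pivotal * climate_stakes - dissent_cost
ev_conform = 0.0

print(ev_deviate)               # -99.999
print(ev_conform > ev_deviate)  # True: conformity is individually rational
```

Nothing rides on the particular numbers; for any realistic chance of being pivotal, the expected benefit of being "right" about the climate is swamped by the certain cost of group dissent.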
Of course, when we all do this at once, we are screwed. This is what we call the "tragedy of the risk perception commons.” Fixing the problem will require a focused effort to protect the science communication environment from the sort of toxic cultural meanings that create a conflict between perceiving what is known to science and being who we are as individuals with diverse cultural styles and commitments.
I’m glad Cass is now back from his tour of public service (and grateful to him for having taken it on), because I am eager to hear what he has to say about the issues and questions that risk-perception scholars have been debating since he’s been gone!
Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).
Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, advance on line publication, http://www.nature.com/doifinder/10.1038/nclimate1547 (2012).
Lord, C.G., Ross, L. & Lepper, M.R. Biased Assimilation and Attitude Polarization - Effects of Prior Theories on Subsequently Considered Evidence. Journal of Personality and Social Psychology 37, 2098-2109 (1979).
Culturally polarized Australia: Cross-cultural cultural cognition, Part 3 (and a short diatribe about ugly regression outputs)
In a couple of previous posts (here & here), I have discussed the idea of "cross-cultural cultural cognition" (C4) in general and in connection with data collected in the U.K. in particular. In this one, I'll give a glimpse of some cultural cognition data from Australia.
The data come from a survey of a large, diverse general population sample. It was administered by a team of social scientists led by Steven Hatfield-Dodds, a researcher at the Australian National University. I consulted with the Hatfield-Dodds team on adaptation of the cultural cognition measures for use with Australian survey respondents.
It was a pretty easy job! Although we experimented with versions of various items from the "long form" cultural cognition battery, and with a diverse set of items distinct from those, the best performing set consisted of the two six-item sets that make up the "short form" versions of the CC scales. The items were reworded in a couple of minor ways to conform to Australian idioms.
Scale performance was pretty good. The items loaded appropriately on two distinct factors corresponding to "hierarchy-egalitarianism" and "individualism-communitarianism," which had decent scale-reliability scores. I discussed these elements of scale performance more in the first couple of posts in the C4 series.
The Hatfield-Dodds team included the CC scales in a wide-ranging survey of beliefs about and attitudes toward various aspects of climate change. Based on the results, I think it's fair to say that Australia is at least as culturally polarized as the U.S.
The complexion of the cultural division is the same there as here. People whose values are more egalitarian and communitarian tend to see the risk of climate change as high, while those whose values are more hierarchical and individualistic see it as low. This figure reflects the size of the difference as measured on a "climate change risk" scale that was formed by aggregating five separate survey items (Cronbach’s α = 0.90):
Looking at individual items helps to illustrate the meaning of this sort of division -- its magnitude, the sorts of issues it comprehends, etc.
Asked whether they "believe in climate change," e.g., about 50% of the sample said "yes." Sounds like Australians are ambivalent, right? Well, in fact, most of them are pretty sure -- they just aren't, culturally speaking, of one mind. There's about an 80% chance that a "typical" egalitarian communitarian, e.g., will say that climate change is definitely happening; the likelihood that a hierarchical individualist will, in contrast, is closer to 20%.
There's about a 25% chance the hierarchical individualist will instead say, "NO!" in response to this same question. There's only a 1% chance that an egalitarian communitarian in Australia will give that response!
BTW, to formulate these estimates, I fit a multinomial logistic regression model to the responses for the entire sample, and then used the parameter estimates (the logit coefficients and the standard errors) to run Monte Carlo simulations for the indicated "culture types." You can think of the simulation as creating 1,000 "hierarchical individualists" and 1,000 "egalitarian communitarians" and asking them what they think. By plotting these simulated values, anyone, literally, can see the estimated means and the precision of those estimates associated with the logit model. No one -- not even someone well versed in statistics -- can see such a result in a bare regression output like this:
Yet this sort of table is exactly the kind of uninformative reporting that most social scientists (particularly economists) use, and use exclusively. There's no friggin' excuse for this, either, given that public-spirited stats geniuses like Gary King have not only been lambasting this practice for years, but also producing free high-quality software like Clarify, which is what I used to run the Monte Carlo simulations here (the graphic reporting technique I used--plotting the density distributions of the simulated values to illustrate the size and precision of contrasting estimates--is something I learned from King's work too).
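For anyone who wants to try this at home, here's a minimal Python sketch of the Clarify-style procedure for a simple (binomial) logit. The coefficients, covariance matrix, and worldview profiles below are made-up placeholders for illustration -- they are not the actual estimates from the Australian data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimates for:
# logit(Pr("climate change is definitely happening")) =
#     b0 + b1*hierarchy + b2*individualism
# (invented numbers, purely illustrative)
beta_hat = np.array([0.0, -0.7, -0.7])
vcov = np.diag([0.02, 0.02, 0.02])  # assume uncorrelated errors for the sketch

# King-style step: draw 1,000 simulated coefficient vectors from the
# estimated sampling distribution of the parameters
sims = rng.multivariate_normal(beta_hat, vcov, size=1000)

def predicted_prob(profile, sims):
    """Simulated predicted probabilities for a given covariate profile."""
    x = np.array(profile)
    return 1.0 / (1.0 + np.exp(-(sims @ x)))

# "Typical" culture types: +/- 1 SD on each worldview scale
hi = predicted_prob([1.0,  1.0,  1.0], sims)   # hierarchical individualist
ec = predicted_prob([1.0, -1.0, -1.0], sims)   # egalitarian communitarian

print(hi.mean(), ec.mean())
```

Plotting the density of `hi` and `ec` (with matplotlib, say) gives exactly the kind of figure described above: two humps whose locations show the estimated means and whose spreads show the precision of those estimates.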
So don't be awed the next time someone puts a mindless table like this in a paper or on a PowerPoint slide; complain!
Oh .... There are tons of cool things in the Hatfield-Dodds et al. survey, and I'm sure we'll write them all up in the near future. But for now here's one more result from the Australia C4 study:
Around 20% of the survey respondents indicated that climate change was caused either "entirely" or "mainly" by "nature" rather than by "human activity." But the likelihood that a typical hierarchical individualist would view climate change that way was around 48% (+/-, oh, 7% at 0.95 confidence, by the looks of the graphic). There's only about a 5% chance an egalitarian communitarian would treat humans as an unimportant contributor to climate change.
You might wonder how it is that around 50% of the hierarchical individualists one might find in Australia would tell you that "nature" is causing climate change when fewer than 25% are likely to say "yes" if you ask them whether climate change is happening.
But you really shouldn't. You see, the answers people give to individual questions on a survey on climate change aren't really answers to those questions. They are just expressions of a global pro-con attitude toward the issue. Psychometrically, the answers are observable "indicators" of a "latent" variable. As I've explained before, in these situations, it's useful to ask a bunch of different questions and aggregate them-- the resulting scale (which will be one or another way of measuring the covariance of the responses) will be a more reliable (i.e., less noisy) measure of the latent attitude than any one item. Although if you are in a pinch -- and don't want to spend a lot of money or time asking questions -- just one item, "the industrial strength risk perception measure," will work pretty well!
The one thing you shouldn't do, though, is get all excited about responses to specific items or differences among them. Pollsters will do that because they don't really have much of a clue about psychometrics.
Hmmm... maybe I'll do another post on "pollster" fallacies -- and how fixation on particular questions, variations in the responses between them, and fluctuations in them over time mislead people on public opinion on climate change.
No truism is nearly so elegant as, or responsible for so many deep insights as, Bayes's Theorem.
The second is a graphic rendering of a particular Bayesian problem. I adapted it from an article by Spiegelhalter et al. in Science.
In my view, the "prior odds x likelihood ratio = posterior odds" rendering of Bayes is definitely the most intuitive and tractable. It's really hard to figure out what people who use other renderings are trying to do besides frustrate their audience or make them feel dumb, at least if they are communicating with those who aren't used to manipulating abstract mathematical formulae. As the graphic illustrates, the "odds" or "likelihood ratio" formalization, in addition to being simple, is the one that best fits with the heuristic of converting the elements of Bayes into natural frequencies, which is an empirically proven method for teaching anyone -- from elementary school children (or at least law students!) to national security intelligence analysts -- how to handle conditional probability.
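A toy natural-frequency walk-through shows why the odds form is so tractable -- the posterior odds fall straight out of the frequencies. (The prevalence and test-accuracy numbers below are invented purely for illustration.)

```python
# "prior odds x likelihood ratio = posterior odds", via natural frequencies.
# Illustrative setup: a condition with 1% prevalence; a test with 90%
# sensitivity and a 9% false-positive rate.
population = 10_000
with_condition = population * 1 // 100             # 100 people
without_condition = population - with_condition    # 9,900 people

true_positives = with_condition * 90 // 100        # 90 test positive
false_positives = without_condition * 9 // 100     # 891 test positive

prior_odds = with_condition / without_condition    # 100:9,900
likelihood_ratio = (90 / 100) / (9 / 100)          # 0.90 / 0.09 = 10
posterior_odds = prior_odds * likelihood_ratio

# Same answer read directly off the natural frequencies:
posterior_odds_freq = true_positives / false_positives
```

Given a positive test, the posterior odds are just "true positives : false positives" -- 90:891 -- exactly what the prior-odds-times-likelihood-ratio formula delivers, and something one can literally count on one's fingers.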
If you don't get Bayes, it's not your fault. It's the fault of whoever was using it to communicate an idea to you.
Spiegelhalter, D., Pearson, M. & Short, I. Visualizing Uncertainty About the Future. Science 333, 1393-1400 (2011).
Wow—super great comments on the “Motivated consequentialist reading” post. Definitely worth checking out!
- MW & Jason Hahn question whether I’m right to read L&D as raising doubts about Haidt & Graham’s characterization of the dispositions, particularly the “liberal” one, that generate motivated reasoning of “harms” & like consequences.
- Peter Ditto offers a very generous and instructive response, in which he indicates he thinks L&D is “perfectly consistent” with H&G but agrees that it “generally challenges” the equation of consequentialism with systematic reasoning in Greene’s distinctive & provocative dual-process theory of moral judgment.
- A diabolical genius calling himself “Nick” asks whether the “likelihood ratio” I assigned to L&D on the “asymmetry thesis” has been contaminated by my “priors.” I answer him in a separate post.
I am persuaded, based on MW’s, Jason’s, and Peter’s various points, that I was simply overeager in reading the L&D results as offering any particular reason to question H&G’s characterization of “liberals.” (BTW, the reason I keep using quotes for “liberals” is that I think people who self-identify as “liberals” on the 5- or 7-point “liberal-conservative” survey measure are only imperfect Liberals, philosophically speaking; the ones who self-identify as “conservatives,” moreover, are also imperfect Liberals—they aren’t even close enough to being anti-liberals to be characterized as “imperfect” versions of that; we are all Liberals, we are all small “r” republicans—here…)
The basis of my doubt is that I find it unpersuasive to suggest that intuitive perceptions of “harm” unconsciously motivate liberals or anyone else to formulate conscious, confabulatory “harm-avoidance” arguments. I don’t get this conceptually; if it’s intuitive perceptions of harm that drive the conscious moral reasoning of liberals about harm, where is the motivated reasoning? Where does confabulation come in? I also think the evidence is weak for the idea that perceptions of “harm” (or “unfairness,” for that matter) are what explain "liberals'" positions, at least on issues like climate change & gun control & the HPV vaccine. I think “liberals” are motivated to see “harm” by unconscious commitments to some cultural, and essentially anti-Liberal, perfectionist morality. That is, they are the same as “conservatives” in this regard, except that the cultural understanding of “purity” that motivates "liberals" is different from the one that motivates “conservatives.”
But I concede, on reflection, that L&D don’t furnish any meaningful support for this view.
Here’s my consolation, however, for being publicly mistaken. Ditto directs me and others to the work of Kurt Gray, who, Peter advises, has advanced a more systematic version of the claim that everyone’s morality is “harm” based but also infused with motivated perceptions of one or another view of “purity” or the like (a position that would make Mary Douglas smile, or at least stop scowling for 10 or 15 seconds).
Well, as it turns out, Gray himself wrote to me, too, off-line. He not only identified work that he & collaborators have done that engages H&G & also Greene in ways consistent with the position I am taking; he was also very intent on furnishing me with references to responses from scholars who take issue with him. So I plan to read up. And now you can too:
- Gray & Schein, Two Minds Vs. Two Philosophies: Mind Perception Defines Morality and Dissolves the Debate Between Deontology and Utilitarianism, Rev. Phil & Psych. (in press).
- Gray, K., Young, L. & Waytz, A. Mind Perception Is the Essence of Morality. Psychol Inq 23, 101-124 (2012).
There are some 16 responses to the latter -- from the likes of Alicke; Ditto, Liu & Wojcik; Graham & Iyer; and Koleva & Haidt -- in the Psychol. Inq. issue. Sadly, those, unlike the Gray papers, are pay-walled. :(
In a previous post, I acknowledged that a very excellent study by Liu & Ditto had some findings in it that were supportive of the “asymmetry thesis”—the idea that motivated reasoning and like processes more heavily skew the factual judgments of “conservatives” than “liberals.” Still, I said that “there's just [so] much more valid & compelling evidence in support of the 'symmetry' thesis—that ideologically motivated reasoning is uniform ... across ideologies—” that I saw no reason to “substantially revise my view of the likelihood” that the asymmetry position is actually correct.
An evil genius named Nick asks:
So what (~) likelihood ratio would you ascribe to this study for the hypothesis that the asymmetry thesis does not exist? And how can we be sure that you aren't using your prior to influence that assessment? ….
You acknowledge Liu & Ditto’s findings do support the asymmetry thesis, yet you state, without much explanation, that you “don't view the Liu and Ditto finding of "asymmetry" as a reason to substantially revise my view of the likelihood that that position is correct.”
… One way to think about it is that your LR for the Liu & Ditto study as it relates to the asymmetry hypothesis should be ~ equal to the LR from a person who is completely ignorant (in an E.T. Jaynes sense) about the Cultural Cognition findings that bear on the hypothesis. It is, of course, silly to think this way, and certainly no reader of this blog would be in this position, but such ignorance would provide an ‘unbiased’ estimate of the LR associated with the study. [note that this is amenable to empirical testing.]
You may have simply have been stating that your prior on the asymmetry hypothesis is so low that the LR for this study does not change your posterior very much. That is perfectly coherent but I would still be interested in what’s happening to your LR (even if its effect on the posterior is trivial).
Well, of course, readers can’t be sure that my priors (1,000:1 that the “asymmetry thesis” is false) didn’t contaminate the likelihood ratio I assigned to L&D’s finding of asymmetry in their 2nd study (0.75; resulting in revised odds that "asymmetry thesis is false" = 750:1).
Worse still, I can’t.
Obviously, to avoid confirmation bias, I must make an assessment of the LR based on grounds unrelated to my priors. That’s clear enough—although it’s surprising how often people get this wrong when they characterize instances of motivated reasoning as “perfectly consistent with Bayesianism” since a person who attaches a low prior to some hypothesis can “rationally” discount evidence to the contrary. Folks: that way of thinking is confirmation bias--of the conscious variety.
The problem is that nothing in Bayes tells me how to determine the likelihood ratio to attach to the new evidence. I have to “feed” Bayes some independent assessment of how much more consistent the new evidence is with one hypothesis than another. ("How much more consistent,” formally speaking, is “how many times more likely." In assigning an LR of 0.75 to L&D, I’m saying that it is 1.33 x more consistent with “asymmetry” than “symmetry”; and of course, I’m just picking such a number arbitrarily—I’m using Bayes heuristically here and picking numbers that help to convey my attitude about the weight of the evidence in question).
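For the record, here is that bookkeeping as bare arithmetic -- the 1,000:1 prior and 0.75 LR are the (admittedly heuristic) numbers from the post, and Bayes does the rest:

```python
# Prior odds that the "asymmetry thesis" is FALSE, stated as odds in
# favor of "false": 1,000:1. An LR below 1.0 means the new evidence
# cuts (modestly) against that hypothesis.
prior_odds = 1000 / 1
lr = 0.75                            # L&D's asymmetry finding, as weighted in the post
posterior_odds = prior_odds * lr     # 750:1 that asymmetry is false

# The same LR, flipped: the evidence is 1/0.75 = 1.33x more consistent
# with "asymmetry" than with "symmetry"
support_for_asymmetry = 1 / lr
```

Note how little work the LR does here against so lopsided a prior: the posterior barely budges, which is precisely why the interesting question is whether the LR itself was assigned on grounds independent of the prior.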
So even if I think I am using independent criteria to assess the new information, how do I know that I’m not unconsciously selecting a likelihood ratio that reflects my priors (the sort of confirmation bias that psychology usually worries about)? The question would be even more pointed in this instance if I had assigned L&D a likelihood ratio of 1.0—equally consistent with asymmetry and symmetry—because then I wouldn’t have had to revise my prior estimation in the direction of crediting asymmetry a tad more. But maybe I’m still assigning an LR to the study (only that one small aspect of it, btw) that is not as substantially below 1.0 as I should because it would just be too devastating a blow to my self-esteem to give up the view that the asymmetry thesis is false.
Nick proposes that I go out and find someone who is utterly innocent of the entire "asymmetry" issue and ask her to think about all this and get back to me with her own LR so I can compare. Sure, that’s a nice idea in theory. But where is the person willing to do this? And if she doesn’t have any knowledge of this entire issue, why should I think she knows enough to make a reliable estimate of the LR?
To try to protect myself from confirmation bias—and I really really should try if I care about forming beliefs that fit the best available evidence—I follow a different procedure but one that has the same spirit as evil Nick’s.
I spell out my reasoning in some public place & try to entice other thoughtful and reflective people to tell me what they think. If they tell me they think my LR has been contaminated in that way, or simply respond in a way that suggests as much, then I have reason to worry—not only that I’m wrong but that I may be biased.
Obviously this strategy depends (among other things) on my being able to recognize thoughtful and reflective people being thoughtful and reflective even when they disagree with me. I think I can. Indeed, I make a point of trying to find thoughtful and reflective people with different priors all the time-- to be sure their judgment is not being influenced by confirmation bias when they assure me that my LR is “just right.”
Moreover, if I get people with a good enough mix of priors to weigh in, I can "simulate" the ideally "ignorant observer" that Nick conjures (that ignorant observer looks a lot like Maxwell's Demon, to me; the idea of doing Bayesian reasoning w/o priors would probably be a feat akin to violating the 2nd Law of Thermodynamics).
Nick the evil genius—and others who weighed in on the post to say I was wrong (not about this point but about another: whether L&D’s findings were at odds with Haidt & Graham’s account of the dispositions that motivate “liberals” and “conservatives”; I have relented and repented on that)—are helping me out in this respect!
But Nick points out that I didn’t say anything interesting about why I assigned such a modest LR to L&D on this particular point. That itself, I think, made him anxious enough to tell me that he was concerned that I might be suffering from confirmation bias. That makes me anxious.
So, thank you, evil Nick! I will say more. Not because I really feel impelled to tussle about how much weight to assign L&D on the asymmetry point; I think and suspect they agree that it would be nice simply to have more evidence that speaks more directly to the point. But now that Nick is helping me out, I do want to say enough so that he (and any other friendly person out there) can tell me if they think that my prior has snuck through and inserted itself into my LR.
In the study in question, L&D report that subjects' “deontological” positions—that is, the positions they held on nonconsequentialist moral grounds—tended to correlate with their view of the consequences of various disputed policies (viz., “forceful interrogation,” “condom promotion” to limit STDs, “capital punishment,” and “stem cell research”).
They also found that this correlation—this tendency to conclude that what one values intrinsically just happens to correlate with the course of action that will produce the best state of affairs—increases as one becomes more “conservative” (although they also found that the correlation was still significant even for self-described “liberals”). In other words, on the policies in question, liberals were more likely to hold positions that they were willing to concede might not have desirable consequences.
Well, that’s evidence, I agree, that is more consistent with the asymmetry thesis—that conservatives are more prone than liberals to motivated reasoning. But here's why I say it's not super strong evidence of that.
Imagine you and I are talking, Nick, and I say, "I think it is right to execute murderers, and in addition the death penalty deters." You say, "You know, I agree that the death penalty deters, but to me it is intrinsically wrong to execute people, so I’m against it."
I then say, "For crying out loud--let's talk about something else. I think torture can be useful in extracting information, & although it is not a good thing generally, it is morally permissible in extreme situations when there is reason to think it will save many lives. Agree?" You reply, "Nope. I do indeed accept that torture might be effective in extracting information but it's always wrong, no matter what, even in a case in which it would save an entire city or even a civilization from annihilation."
We go on like this through every single issue studied in the L&D study.
Now, if at that point, Nick, you say to me, "You know, you are a conservative & I’m a liberal, and based on our conversation, I'd have to say that conservatives are more prone than liberals to fit the facts to their ideology," I think I’m going to be a bit puzzled (and not just b/c of the small N).
"Didn’t you just agree with me on the facts of every policy we just discussed?" I ask. "I see we have different values; but given our agreement about the facts, what evidence is there even to suspect that my view of them is based on anything different from what your view is based on -- presumably the most defensible assessment of the evidence?"
But suppose you say to me instead, “Say, don't you find it puzzling that you never experience any sort of moral conflict -- that what's intrinsically 'good' or 'permissible' for you, ideologically speaking, always produces good consequences? Do you think it's possible that you might be fitting your empirical judgments to your values?" Then I think I might say, "well, that's possible, I suppose. Is there an experiment we can do to test this?"
I was thinking of experiments that do show that when I said, in my post, that the balance of the evidence is more in keeping w/ symmetry than asymmetry. Those experiments show that people who think the death penalty is intrinsically wrong tend to reject evidence that it deters -- just as people who think it's "right" tend to find evidence that it doesn't deter unpersuasive. There are experiments, too, like the ones we've done ("Cultural Cognition of Scientific Consensus"; "They Saw a Protest"), in which we manipulate the valence of one and the same piece of evidence & find that people of opposing ideologies both opportunistically adjust the weight they assign that evidence. There are also many experiments connecting motivated reasoning to identity-protective cognition of all sorts (e.g., "They Saw a Game") -- and if identity-protective cognition is the source of ideologically motivated reasoning, too, it would be odd to find asymmetry.
So I think the L&D study-- an excellent study -- is relevant evidence & more consistent with asymmetry than symmetry. But it's not super strong evidence in that respect—and not strong enough to warrant “changing one’s mind” if one believes that the weight of the evidence otherwise is strongly in support of symmetry rather than asymmetry in motivated reasoning.
So tell me, Dr. Nick—is my LR infected?
Do the (dumbass) comments of (dumbass) Todd Akin supply evidence of the antagonism between conservative ideology and science? Predictably, it is being depicted as such all over the internet.
In truth, it's hard to believe that anyone who makes the mistake of treating a single individual's comments as evidence of anything (or who tries to entice others to make such a mistake) really understands (or is committed to) the disciplined form of observation and measurement that is the signature of science's way of knowing.
But if one wanted to try to explore in a defensibly empirical way how the general belief Akin expressed might be entangled in a cultural identity, one might start by considering the considerable body of evidence that social scientists have collected about who believes what and why about both abortion and date rape. It's pretty interesting.
The Republican position on abortion, this evidence suggests, might be part of a war against women, but if so, it's a civil war. As Kristin Luker shows (through masterful ethnography; definitely counts as "empirical," in my book), women occupy front-line positions on both sides of this cultural conflict.
The social group most opposed to abortion, according to Luker's research, consists of women with traditional, hierarchical values. Within a hierarchical way of life, women acquire status by successfully occupying domestic roles. "Motherhood" as a selfless--or essentially self-abnegating--state of commitment to the welfare of one's children reflects the highest form of female virtue.
This understanding is threatened by an alternative, egalitarian (and individualistic) outlook that measures the status of women and men in a unitary currency--viz., their success in markets, professions, and other institutions of civil society. The concept of a "right to choose" or "right to abortion" is linked -- through social practices but also through cultural meanings -- to this alternative outlook, and its alternative conception of female virtue. Hierarchical women are the ones who have the most status to lose should this outlook become dominant. Thus, Luker concludes, they are the group most impelled to resist abortion rights.
The same status-protective logic, a large literature in women's studies suggests, informs the position of hierarchical women in the "no means ...?" debate in rape law. A hierarchical way of life features norms that forbid women, in particular, from engaging in casual sex, or sex outside of marriage or committed relationships. "Token resistance" -- the initial feigning of a lack of consent by a woman who in fact desires sex -- is thought to be a form of strategic behavior engaged in by women who want to defy these norms while conveying to their partners that they can still be expected to abide by hierarchical sexual mores generally (it's just that you are so irresistible!). Hierarchical men and women take a dim view of such behavior. But the ones who resent "token resistance" the most are hierarchical women -- whose status is being misappropriated by women who are trying to conceal their own lack of virtue.
Women strongly committed to traditional, hierarchical gender norms are thus the most likely to believe that women who have acted contrary to traditional hierarchical norms--by, say, engaging in consensual sex outside of committed relationships on other occasions, or by wearing suggestive clothes, or by agreeing to be alone with a man in a room, or by drinking, etc.--really meant "yes" when they said "no." They are also the most quick, the women's studies literature suggests, to morally condemn such behavior.
These accounts are ones I've synthesized from various studies using sociological methods. But if they are right, we should expect these dynamics to generate motivated cognition. To protect their identities, women who subscribe to hierarchical norms should form factual perceptions that reflect the stake they have in opposing abortion and in conserving the law's attentiveness to "token resistance." We can test this conjecture by methods associated with social psychology.
CCP has in fact carried out studies with this goal. In one, we found that hierarchical, communitarian women were the group most disposed to see abortion as threatening to the health of women, a claim that is now one of the central justifications for a new generation of abortion restrictions in the U.S.
In another study, members of a large, diverse national sample reviewed facts from an actual rape case in which there was a dispute about whether a female college student who said "no" really meant it. Women with hierarchical values -- particularly older ones -- were more likely than others to see the woman as "really" consenting despite her words. In addition to corroborating the women's studies position I described, this finding comports with the practical experience of attorneys who specialize in rape defense, and who report that the best juror in a "no means ...?" case is likely to be a middle-aged woman with traditionalist outlooks (someone like Roy Black, who successfully defended William Kennedy Smith, wouldn't put it exactly this way; he wouldn't put it in any particular way -- because he has professional situation sense, he'd just know it when he sees it).
The cultural outlines of the dispute over "no means ...?" are very much at odds, though, with the prevailing view in legal scholarship, which depicts disputes about date rape as reflecting a conflict between men and women generally. In the study, there was no meaningful difference between men and women generally, considered apart from the interaction of cultural worldviews with gender that motivates hierarchical women to be particularly pro-defense in such date rape cases. Being a "liberal" or a "conservative," or a Democrat or Republican, also made no meaningful difference on its own.
So-- is there a connection between Akin's comments and the culturally motivated cognition of facts relating to abortion and date rape?
Again, no one who takes a scientific view of the matter would try to draw from the sociological evidence I've described, and the sort of data CCP collected, an inference about what (if anything) was going on in Akin's brain.
But anyone who actually goes to the trouble of looking at relevant empirical evidence will find in it a plausible answer to how someone who forms and expresses beliefs like Akin's might fare pretty well in democratic politics. He is the beneficiary of the resentment and anxiety of
women who think that they have in some ways become less liberated in recent decades, not more; who think that easy abortion, easy birth control and a tawdry popular culture have degraded their stature, not elevated it. Though the women [at an Akin rally a couple days ago] here were of varying faiths and economic backgrounds, they were white and bound by a shared unease with Obama in particular and liberals in general, who seemed so often to hold them in contempt.
With their support, Akin might still win. And if you really want to know why they'd support him, the answer is much more complicated, much more interesting, and in many ways much more troubling than some kind of antagonism between "conservatism" as a personality trait and science as a way of knowing.
Kahan, D.M., Braman, D., Gastil, J., Slovic, P. & Mertz, C.K. Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception. Journal of Empirical Legal Studies 4, 465-505 (2007).
Monson, C.M., Langhinrichsen-Rohling, J. & Binderup, T. Does "No" Really Mean "No" After You Say "Yes"? Attributions About Date and Marital Rape. Journal of Interpersonal Violence 15, 1156-1174 (2000).
Muehlenhard, C.L. & Hollabaugh, L.C. Do Women Sometimes Say No When They Mean Yes? The Prevalence and Correlates of Women's Token Resistance to Sex. Journal of Personality & Social Psychology 54, 872-879 (1988).
Sprecher, S., Hatfield, E., Cortese, A., Potapova, E. & Levitskaya, A. Token Resistance to Sexual Intercourse and Consent to Unwanted Sexual Intercourse: College Students' Dating Experiences in Three Countries. The Journal of Sex Research 31, 125-132 (1994).
Nice paper by Liu & Ditto just published (advance on-line) in Social Psychology and Personality Science ("What Dilemma? Moral Evaluation Shapes Factual Belief," doi: 10.1177/1948550612456045). It presents a series of studies-- from variants of the "trolley problem" to ones involving evidence on stem cell research--supporting the hypothesis that people will conform their assessments of an action or policy's consequences to their appraisals of its intrinsic moral worth.
As Liu & Ditto acknowledge, their findings are in keeping with those of other researchers who have been studying the influence of culturally or ideologically motivated cognition. The design of their studies, however, was specifically geared to detecting how readily disposed their subjects were to resort to consequentialist justifications for nonconsequentialist positions. In one cool experiment, e.g., they found that exposure to compelling nonconsequentialist arguments generated changes in the perceived deterrent efficacy of capital punishment!
This feature of their paper enables the motivated-reasoning position to square off directly against two other important positions in contemporary moral psychology.
The first, associated most conspicuously with Jonathan Haidt, is that ideological or partisan conflicts over policy reflect a fundamental difference in "liberal" and "conservative" moral styles. Conservatives, Haidt argues, focus on nonconsequentialist evaluations of "purity" or "sanctity," whereas liberals focus on "harm."
But as Liu & Ditto note, conservatives, every bit as much as liberals (if not more; more on that in a second!), adopt a default utilitarian perspective. What divides contemporary Americans who identify as "liberals" and "conservatives" is not the normative authority of Mill's "harm" principle. It's a set of disputed factual claims about whether forms of behavior symbolically associated with one or the other's cultural style cause harms of the sort that any Millian Liberal would agree warrant legal redress.
That people are impelled to impute harm to behavior that denigrates their cultural norms is, of course, the nerve of Mary Douglas's work, in particular Purity and Danger. I very much agree with Douglas's view. Indeed, I think the view that public policy debate can be characterized as a contest between philosophical Liberals and Antiliberals -- i.e., between those who believe that law should be confined to the promotion of secular ends and those who believe that law is also a proper instrument for propagating a moral orthodoxy -- is one that only those who spend far too much time in university moral philosophy seminars are likely to form.
The second position with which Liu & Ditto join issue is the dual process theory of moral psychology. I view Josh Greene as the leading exponent of this perspective. Greene is a subtle thinker; like Haidt, he is both a first-rate philosopher and an amazing psychologist. But he has not been shy about equating nonconsequentialist (or "deontological") reasoning with emotion-driven, unconscious "system 1" reasoning (in Kahneman's terms) and consequentialism with conscious, reflective "system 2."
I don't buy it. Indeed, cultural cognition -- the tendency of people to fit their assessments of risk and related facts to their group values -- is all about the distorting force that motivated reasoning exerts over consequentialist judgments. Greene depicts "deontological" reasoning as a form of confabulation. But precisely because consequentialist frameworks so often rest on contentious behavioral conjectures and contested forms of empirical proof, they furnish a notoriously pliable set of resources for those who feel impelled to reason, as opposed to intuit, their way out of policy conclusions they find ideologically noncongenial.
If anything, it seems likely that those who are adept at system 2 reasoning will be more vulnerable to motivated cognition. They will be better than those who are less reflective and more intuitive at manipulating the various bendable empirical bits and pieces out of which utilitarian arguments tend to be formed. This was the premise of our Nature Climate Change study, which presented evidence that greater science comprehension magnifies cultural cognition.
But like any other proposition worth discussing, the claim that consequentialist reasoning is more hospitable to motivated cognition than other sorts is open to empirical testing. I count Liu & Ditto's studies as evidence in support of that conclusion.
Now, there is one other issue to discuss.
As I said, Liu & Ditto find that conservatives, as well as liberals, resort to consequentialist reasoning. Conservatives don't naturally frame their position in nonconsequentialist terms, much less confine themselves to such justifications. Indeed, in one of the studies they feature in their paper, Liu & Ditto observe "the tendency to perceive morally distasteful acts as also being practically disadvantageous was significantly more pronounced ... for political conservatives."
So this raises the perennial (for me, in this blog; I am getting treatment, but still can't shake my obsession) issue of the "asymmetry thesis"-- the claim (ably advanced in Chris Mooney's Republican Brain) that motivated consequentialist reasoning is more characteristic of conservatives than liberals. Is the Liu & Ditto paper evidence in "favor" of the asymmetry thesis?
Sure. In fact, in one of their studies Liu & Ditto present a statistical analysis showing that subjects' tendency to adopt empirical positions supportive of their intrinsic moral assessments increased as subjects became more conservative. As I've noted before, proponents of the "asymmetry thesis" usually don't try to assess whether any differences observed in the force of motivated reasoning across the ideological spectrum (or cultural spectra) are statistically, much less practically, significant. Liu & Ditto did make such an assessment.
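To make concrete what that kind of assessment involves, here is a minimal simulated sketch (all data and coefficient values are invented for illustration; nothing here comes from Liu & Ditto's actual dataset). The "asymmetry" claim corresponds to a nonzero interaction term in a regression of factual belief on moral evaluation, ideology, and their product:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical simulated survey measures (standardized scores):
moral = rng.normal(size=n)      # intrinsic moral disapproval of a policy
conserv = rng.normal(size=n)    # ideology (higher = more conservative)

# Simulate "asymmetric" motivated reasoning: factual belief tracks moral
# evaluation for everyone (0.5), but more strongly as conservatism rises (0.3).
belief = 0.5 * moral + 0.3 * moral * conserv + rng.normal(scale=0.5, size=n)

# OLS with an interaction term: belief ~ moral + conserv + moral*conserv.
# The last coefficient estimates how much moral-factual coordination
# changes per unit of conservatism.
X = np.column_stack([np.ones(n), moral, conserv, moral * conserv])
coef, *_ = np.linalg.lstsq(X, belief, rcond=None)
print(coef)  # intercept, moral, conserv, interaction
```

If the interaction coefficient is reliably distinguishable from zero, moral-factual coordination differs across the ideological spectrum; whether it is also *large* is the separate, practical-significance question the post goes on to raise.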
But does that mean the asymmetry thesis is "true" after all?
It's a mistake (a sadly common one) to view scientific studies as "proving" or "disproving" claims in some binary fashion. Valid studies supply evidence that gives us more reason than we otherwise would have had to credit one hypothesis relative to some alternative one. If one wants to form a provisional judgment -- and all judgments must always be viewed as provisional if one is taking a scientific attitude toward empirical proof -- then one has to aggregate all the available pieces of evidence, assigning to each the weight it is due in light of how much more consistent it is with one hypothesis than another.
There's just much more valid & compelling evidence in support of the "symmetry" thesis -- that ideologically motivated reasoning is uniform, for all practical purposes, across ideologies--than there is in support of the "asymmetry" position. I myself don't view the Liu and Ditto finding of "asymmetry" as a reason to substantially revise my view of the likelihood that that position is correct.
Indeed, I don't think Liu and Ditto themselves view their results as particularly strong proof in favor of the asymmetry thesis. They note that the "associations between moral and factual beliefs" they observed -- on issues like the death penalty, promotion of condoms to fight STDs, stem cell research, and forceful interrogations -- were stronger for conservatives but "still significant for ... political liberals." "[W]hile our political psychology results can be taken as consistent with the body of work associating conservatism with heuristic and motivated thinking," they conclude, "it is important to also note the modest size of these interaction effects and that significant moral-factual coordination was found across the political spectrum."
The paper is not a "show stopper" on the "asymmetry" question. On the contrary, it is, in this respect like the others, something much better than that: a pertinent, informative, and indeed elegant addition to an ongoing scholarly conversation.
Gave a talk last Friday in Washington DC for members of the US Global Change Research Program. The statutory mandate of USGCRP, an inter-agency office within the Executive Branch, is to supervise "a comprehensive and integrated United States research program which will assist the Nation and the world to understand, assess, predict, and respond to human-induced and natural processes of global change."
I was one of a series of researchers who have been invited to make presentations on the science of science communication (SSC) to USGCRP. It's heartening to see policymakers taking steps to integrate SSC into science-informed decisionmaking. This is exactly the sort of development that the National Academy of Sciences has been trying to promote with the efforts that culminated in its Science of Science Communication colloquium last spring. Of course, it was also a personal honor to me to be one of the researchers consulted by USGCRP, as it was to be one of those invited to participate in the NAS symposium.
In my talk to USGCRP, I stressed three points:
1. What the problem isn't, and what it really is. The first was the need to conceive of the controversy that surrounds climate change (and a number of other risk issues) as rooted not in generic constraints on human rationality but rather in the species of motivated reasoning associated with cultural cognition. Ordinary members of the public react to issues of disputed fact in the national climate debate in much the same way that sports fans do to disputed officiating calls, and people who are high in cognitive reasoning ability do it all the more aggressively.
2. Go local. The second point concerned the value in exploiting local decisionmaking settings as venues for promoting open-minded engagement with scientific evidence relating to climate change. When members of a community address issues of climate-change adaptation -- ones relating to rising sea levels, increased incidence of hurricanes, and depletion of water and other natural resources -- their decisionmaking is much more consequential for their individual lives. They also talk with others (neighbors, local businesses, regional utilities and like providers) who are comparably situated, whom they know and are comfortable with, and with whom they share a common idiom.
For these reasons, the group rivalries that fuel culturally motivated reasoning when "climate change" is framed as a national issue tend to dissipate. At the local level, people are more likely to see themselves as members of the same team.
Evidence for this phenomenon consists in the rich array of state-sponsored local adaptation initiatives going on in places like Florida, Arizona, and California. The need for informed science communication strategies to guide such initiatives to constructive outcomes and steer them away from nonconstructive, conflictual ones is reflected in the contrasting experiences of Virginia (good) and North Carolina (bad) in addressing how to assess the potential impact of rising sea levels on their states.
3. Use genuinely evidence-based communication strategies. The aim of SSC, of course, is to harness empirical observation and measurement to promote our collective interest in policies informed by the best available scientific evidence. But it's a mistake to think empirical observation and measurement end in social scientists' labs.
Surveys and stylized lab experiments are distinctively suited for identifying the general mechanisms that shape cognitive engagement with policy-relevant science. But they rarely generate meaningful, determinate instructions on what to say, to whom, and how. (Social scientists who don't acknowledge this risk lapsing into story-telling.)
Translating SSC insights of that type into usable guides for action is something that will happen in the field -- at the site of actual communication. But there too the process must be evidence based. As field communicators use their judgment to adapt their efforts to the insights generated in surveys and lab experiments, they must employ the same forms of disciplined observation and measurement, both so they can calibrate their efforts to achieve maximum effect and so that the evidence their efforts generate isn't wasted but instead preserved and added to the growing stock of information available on what works, what doesn't, and why.
So go local. And bring your empirical-study toolkit with you!
I know my USGCRP talk, which was "open" to the public by telephone link and by internet simulcast of my slides, was also recorded. Maybe it will be put on-line at some point.
But for now, here are slides.
Last February I got to be part of a great panel discussion at the annual Ocean Sciences Meeting with science communication scholar Max Boykoff, science journalist Richard Harris from NPR, and oceanography scientist Jonathan Sharp. Now the event is available for viewing on the internet. The video reflects super high production values, too!
Honest, constructive & ethically approved response template for science communication researchers replying to "what do I do?" inquiries from science communicators
Dear [fill in name]:
I'd be happy to discuss this [select one: super interesting; interesting; particular] issue with you. I have to warn you, though, that I won't be able to offer you a set of "how to" instructions or guidelines about what you should say, how, or to whom.
As a matter of principle, I won't give that sort of "do's & don't's" advice to you or any other real-world communicator, b/c I think those who use empirical methods to study the general dynamics of science communication shouldn't mislead anyone about the nature of their insights. Studies aimed at identifying general mechanisms of science communication utilize surveys & lab experiments. Those forms of study involve deliberately stripped-down models that abstract from the cacophony of real-world influences that would otherwise confound observation and measurement and compromise control of the particular influences of interest to the researcher.
This method is extremely valuable. It is what warrants that the insights such studies generate about mechanisms of consequence to real-world communication are real and can be relied on. The number of conjectures about how science communication works that are plausible far exceeds the number that are actually true. Pristine models are the best method for plucking the latter out of the vast sea of the former and thus for steering the discipline of science communication toward profitable roads of engagement and away from alluring dead-ends.
Nevertheless, precisely because this method demands abstracting from the particulars of real-world communication settings, it won't produce determinate and meaningfully specific prescriptions for any real-world communication problem.
Full realization of the utility of this critical research thus depends on field studies that test informed conjectures about how the general mechanisms identified in lab experiments and surveys can be brought to bear on particular communication problems. Design of those types of field studies, in turn, demands the participation of individuals like you, who have situation-specific knowledge relating to the field-communication task at hand.
Social scientists who specialize in acquiring general knowledge of the mechanisms of cognition that shape science communication can play a vital role in field research too because they know what is required for valid observation and measurement of the results that such studies will produce. But for them to carry on as if the bridge of intelligent field study was unnecessary to connect the mechanisms they have observed in lab experiments and surveys to realistic, concrete, meaningful prescriptions about what to do in particular situations will at best only delay the necessary work that needs to be done, and at worst degrade their findings by making them the fodder of just-so stories--one of the signal abuses of decision science.
Bottom line, then, is that I'm happy to help you think about designing field studies informed by established mechanisms of science communication, or at least about making the communication efforts you are already engaged in amenable to empirical observation & measurement. In fact, to be perfectly candid, the possibility of helping you design such studies & then collecting data that could be shared with others is also something that I am likely to try to sneak into our discussion.
Would this sort of advice be useful to you? If so, perhaps we could talk [select one: right this second; later today; at your earliest convenience].
[fill in name]