Saturday
Oct 20, 2012

Outline of position on (attitude about) how to improve policy-supportive science communication 

Had a conversation w/ a really smart scholarly friend who shares my basic orientation toward science communication & who is doing cool things to advance it. For his benefit, after we were done I reduced my thoughts to a small annotated outline. Figured I might as well put the memo up on the blog. It's the internet equivalent, I suppose, of a guy on a desert island putting a message in a bottle & tossing it into the ocean--the nice thing being that there are *so many* other islands out there on the net that the hope the bottle will end up washing onto the shore of someone who finds its contents useful is not nearly so farfetched or desperate!

0. Polarization does not stem from a deficit in the public's comprehension of science (or the exploitation of any such deficit by self-interested actors)

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Misinformation and climate change conflict

1. On how to make sense of cultural cognition, science comprehension, and cultural polarization:

The problem isn’t the mode of comprehending science; it’s the contamination of the “science communication environment” in which cultural cognition (or like mechanisms) can be expected to & usually do reliably lead diverse, ordinary people to converge on best science. The contamination consists in the attachment of antagonistic cultural meanings to facts that admit of scientific investigation.

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Nullius in verba? Surely you are joking, Mr. Hooke! (or Why cultural cognition is not a bias, part 1) 

The cultural certification of truth in the Liberal Republic of Science (or part 2 of why cultural cognition is not a bias)

2. On what to do

a. Protect science communication environment: We need to perfect the knowledge we have for forecasting potential contamination—on, say, novel issues like nanotechnology, synbio, or GMOs—and implement procedures (say, govt review of “science communication impact” of govt-funded science research & of regulatory decisionmaking) to use that knowledge to preempt such contamination.

The science of science communication: an environmental protection conception (Lecture at National Academy of Sciences Sackler Colloquium, May 22, 2012)

b.  Decontaminate already polluted environments: Hard to do but not impossible. Involves figuring out how, through conscious reorientation of meaning cues—identity of advocates, narrative frames for conveying info, etc.—toxic associations can be broken down.

Kahan D.M., Jenkins-Smith, J., Tarantola, T., Silva C., & Braman, D., Geoengineering and the Science Communication Environment: a Cross-cultural Study, CCP Working Paper No. 92 (Jan. 9, 2012).

c.  Select policy/engagement locations in manner that exploits relative quality of scicom environments. The cues that determine what issues mean are highly sensitive to context, including what the policy question is, who is involved in the discussion, & where it is occurring. If one context is bad, then see if you can find another.

E.g., climate: The national-level “mitigation” discussion is highly polluted; the local, adaptation focused one is not.

The "local-adaptation science communication environment": the precarious opportunity

Go local, and bring empirical toolkit: Presentation to US Global Change Research Program

3. How to do it: scientifically

We have knowledge on these dynamics.  So just guessing what will work to promote constructive, nonpolarized public engagement with scientific information—without looking at & trying to make informed conjectures based on that knowledge—is a huge mistake (an ironic one, too, since it is an utterly unscientific way to do things).

An even bigger mistake is to do scicom w/o collecting information. Disciplined observation & measurement can be used to calibrate & improve knowledge-informed strategies as a communication effort (say, an attempt to build support for sensible use of climate science in an adaptation setting) unfolds. But just as important, the collection of information generated by these means is critical to extending practical knowledge of how to do effective communication in field settings. What’s learned every time people engage in scientifically informed science communication is more information that can be used to help improve the conduct of such activity in the future.

Thus, people who engage in policy-supportive science communication efforts w/o systematic information collection protocols—including ones that test the effectiveness of their methods in promoting open-minded engagement—are casually dissipating & wasting a knowledge resource of tremendous value. They are in fact unwittingly aiding & abetting entropy--an act of treason in the Liberal Republic of Science!

Wild wild horses couldn't drag me away: four "principles" for science communication and policymaking 

Honest, constructive & ethically approved response template for science communication researchers replying to "what do I do?" inquiries from science communicators

 

Wednesday
Oct 17, 2012

Wanna see more data? Just ask! Episode 1: another helping of GM food

Okay, here's a new feature for the blog: "Wanna see more data? Just ask!"  

The way it works is that if anyone sees interesting data in one of my posts, or in any of our studies (assuming it's one I worked on; for others, I'll pass on requests but don't necessarily expect an answer; some of my colleagues have actual lives), and has an interesting question that could be addressed by additional analyses, that person can post a request (in the comments section or by email to me) & I'll do the analyses and post the results.

Now notice I said the question has to be "interesting." Whether it meets that standard is something I'll decide, using personal judgement, etc. But here are some general, overlapping, related criteria:

1.  The request has to be motivated by some conjecture or question.  Basically, you have to have some sort of theoretically grounded hypothesis in mind that can be tested by the analysis you'd like to see. The most obvious candidate would be a conjecture/question/hypothesis that's in the spirit of a plausible alternative explanation for whatever conclusion it was that I reached (or the study did) in presenting the data in the first place. But in any case, give some indication (can be brief; should be!) of what the question/hypothesis/conjecture that you are curious about is & why.

2. Tell me how I can do the analysis and why doing it that way can be expected to generate some result that gives us more reason to accept a particular answer to the motivating question, or more reason to accept or reject the motivating hypothesis, than we would have had without the analysis.  The "how to do" part obviously will be constrained by what sorts of variables are in the dataset. Usually we have lots of demographic data as well as our cultural outlook variables and so forth. The "why" question requires specifying the nature of the causal inference that you think can be drawn from the analysis.  It's gotta make sense to be interesting.

3. No friggin' fishin' trips! Don't ask me to correlate global warming with the price of cottage cheese just because you think that would be an amusing thing to do.

4. Don't even think of asking me to plug every conceivable variable into the right-hand side of a regression and see what sort of gibberish pops out. Of course, I'm happy to do multivariate analyses, but each variable has to be justified as part of a model that relates in a specifiable way to the interesting conjecture motivating the request and to the nature of the inference that can be drawn from the analysis. Or to put it another way, the analysis has to reflect a cogent modelling strategy. Overspecified regression analyses are usually a signature of the lack of a hypothesis -- people just see what turns out to be significant (something always will with enough variables) & then construct a post-hoc, just-so story for the result. In addition, the coefficients for overspecified models are often meaningless phantoms-- the impact of influences "holding constant" influences that in the real world are never "constant" in relation to those influences.... I'll write another post on why "over-controlling" is such a pernicious, mindless practice....
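For the curious, here's a minimal sketch of the "something always turns out significant" point, in Python with purely synthetic data (nothing here is an actual CCP dataset): regress pure noise on twenty unrelated predictors, and on average about one coefficient will clear p < .05 anyway.

```python
# Minimal sketch (synthetic data, not a CCP dataset): regress pure noise on
# 20 unrelated predictors and count how many coefficients come out
# "significant" anyway (~1 in 20 on average at p < .05).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # 20 predictors, all pure noise
y = rng.normal(size=500)                # outcome unrelated to all of them
fit = sm.OLS(y, sm.add_constant(X)).fit()
n_sig = (fit.pvalues[1:] < 0.05).sum()  # skip the intercept
print(f"{n_sig} of 20 noise predictors are 'significant' at p < .05")
```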

Okay. This first installment is responsive to questions posed in response to "part 3" of the GM food risk series. Discussants there were curious about whether the "middling" mean score for the GM food risk item was best understood as "not sure; huh?," as I proposed, or as a genuine, mid-level of concern. One suggested seeing some more raw data might help, and on reflection I can think of some ways to look at them that might, at least a bit.

Consider these histograms, which reflect the distribution of responses to the 8-point industrial-strength risk perception item for "Global warming" (left) and "Genetically modified foods" (right):

Here are some things to note. First, the GM food distribution is much more "normal" -- bell shaped -- than the global warming distribution. Indeed, if you compare the black line -- the "normal" density distribution implied by the mean & SD of the global warming data -- with the red one -- the kernel density plot, which locally smooths the observed data to estimate its actual distribution -- you can see that the distribution for global warming risk perceptions is closer to bimodal, meaning that the subjects are actually pretty divided between those who see "low risk" and those who see "high."  There's not so much division for GM foods.

Second, the GM foods distribution has a kind of fat mid-point (low kurtosis). That's because a lot of survey respondents picked "3," "4," & "5." Because an excess of "middle choices" is a signature of "umm, not sure" for risk perception measures of this sort, I am now even more persuaded that the 800 members of this nationally representative sample didn't really have strong views about GM foods in relation to the other risks, all of which displayed substantial cultural polarization.

But my confidence in this conclusion is only modest.  The cases in which a middling mean signifies generalized "don't know" often have much more dramatic concentrations of responses toward the middle of the scale (high kurtosis); indeed, the labels that were assigned to each point on the Likert risk-perception measure were designed to mitigate the middle/don't-know effect, which is usually associated with scales that ask respondents to estimate a probability for some contingency (in which case people who "don't know" tend to convey that with "50%").
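For readers who want to reproduce this kind of comparison on their own data, here's a minimal sketch using simulated stand-ins for the 0-7 survey responses (the variable names and the data-generating assumptions are mine, not the actual CCP data): plot each item's histogram, overlay the normal density implied by the sample mean & SD (black), and overlay a kernel density estimate of the observed responses (red).

```python
# Minimal sketch (Python/matplotlib) of the histogram/density comparison,
# using simulated stand-ins for the 0-7 survey responses.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
# divided (roughly bimodal) ratings for global warming; middling for GM foods
gw = np.clip(np.round(np.concatenate([rng.normal(1.5, 1.2, 400),
                                      rng.normal(5.5, 1.2, 400)])), 0, 7)
gm = np.clip(np.round(rng.normal(4.0, 1.6, 800)), 0, 7)

grid = np.linspace(0, 7, 200)
fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, x, title in [(axes[0], gw, "Global warming"),
                     (axes[1], gm, "Genetically modified foods")]:
    ax.hist(x, bins=np.arange(-0.5, 8.5), density=True, alpha=0.4)
    # black: normal density implied by the sample mean & SD
    ax.plot(grid, stats.norm.pdf(grid, x.mean(), x.std()), "k-")
    # red: kernel density estimate of the observed responses
    ax.plot(grid, stats.gaussian_kde(x)(grid), "r-")
    ax.set_title(title)
plt.show()
```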

Now consider these two figures:

These are the kernel density estimates for responses to these two risk-perception items when the sample is split at the mean of the "individualism-communitarianism" scale. Basically, the figures allow us to compare how "individualists" and "communitarians" are divided on global warming (left) and GM foods (right).

Do you see what I do? The individualists and communitarians are starkly divided on climate change: the latter is skewed strongly toward high risk, and the former toward low (although perhaps a bit less so; if I looked at "hierarch individualists," you'd really see skewing). That division (which, again, is compounded when the hierarchical disposition of the subjects is taken into account as well) is the source of the semi-bimodal distribution of responses to the global warming item. 

Now look at individualists & communitarians on GM foods. They see more or less eye-to-eye. This is corroboration of my conclusion in the last post that there isn't, at least not yet, any meaningful cultural division over GM foods. (BTW, the pictures would look the same if I had divided the subjects into "hierarchs" and "egalitarians"; I picked one of the two worldview dimensions for the sake of convenience and clarity).
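Here's a companion sketch of the split-sample comparison, again with illustrative synthetic data rather than the actual CCP survey variables: split respondents at the mean of an "individualism" score and overlay kernel density estimates of each group's ratings for the two items.

```python
# Companion sketch for the split-sample figures: divide respondents at the
# mean of an "individualism" score and overlay each group's kernel density.
# Synthetic illustrative data, not the CCP survey variables.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
n = 800
indiv = rng.normal(size=n)  # stand-in for the individualism z-score
climate = np.clip(3.9 - 1.5 * indiv + rng.normal(size=n), 0, 7)  # polarized
gm = np.clip(4.3 + rng.normal(size=n), 0, 7)                     # not

grid = np.linspace(0, 7, 200)
fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, x, title in [(axes[0], climate, "Global warming"),
                     (axes[1], gm, "GM foods")]:
    for mask, label in [(indiv >= indiv.mean(), "individualists"),
                        (indiv < indiv.mean(), "communitarians")]:
        ax.plot(grid, gaussian_kde(x[mask])(grid), label=label)
    ax.set_title(title)
    ax.legend()
plt.show()
```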

Whaddya think? Wanna see some more? Just ask!

Reference

de Bruin, W.B., Fischhoff, B., Millstein, S.G. & Halpern-Felsher, B.L. Verbal and Numerical Expressions of Probability: “It's a Fifty–Fifty Chance”. Organizational Behav. & Human Decision Processes 81, 115-131 (2000).

 

Monday
Oct 15, 2012

Timely resistance to pollution of the science communication environment: Genetically modified foods in the US, part 3

Okay: some data already!

As explained in parts one & two of this series, I’ve been interested in the intensifying campaign to raise the public profile of—and raise the state of public concern over—GM foods in the US.

That campaign, in my view, reflects a calculated effort to infuse the issue of GM-food risks with the same types of antagonistic meanings that have generated persistent states of cultural polarization on issues like climate change, nuclear power, the HPV vaccine, and gun control.  To me, that counts as pollution of the science communication environment, because it creates conditions that systematically disable the faculty that culturally diverse citizens use (ordinarily with great reliability) to figure out what is known to science.

But as I commented in the last post, the campaign has provoked articulate and spirited resistance from professional science communicators in the media. I view that as an extremely heartening development, because it furnishes us with what amounts to a model of how professional norms might contribute to protecting the science communication environment from toxic cultural meanings. Democratic societies need both scientific insight into how the science communication environment works and institutional mechanisms for protecting it if they are to make effective use of the immense knowledge at their disposal for advancing their citizens' common welfare.

But where exactly do things stand now in the US?  Historically, at least, the issue of GM-food risks has aroused much less attention, much less concern, than it has in Europe. That could change as a result of culturally partisan communications of the sort we are now observing, but has it changed yet or even started to?

John Timmer, the science editor for Ars Technica, actually posed more or less this question to me in a Twitter exchange, asking whether there really is “anything like” the sort of cultural conflict toward GM-food risks that we see toward climate-change risks in this country. Questions like that deserve data-informed answers.

So here’s some data from a recent (end of September) survey. The sample was a nationally representative one of 800 individuals. One part of the survey asked them to rank on a scale of 0-7 “how serious” they viewed a diverse set of risks (I call this the “industrial strength risk perception measure”). 

The question, essentially, is whether GM foods are at risk of acquiring the sorts of cultural meanings that divide “hierarchical individualists” and “egalitarian communitarians” on various issues. Accordingly, I have constructed statistical models that permit us to see not only how GM-food risks rank in relation to others for the American population as a whole but also whether and how strongly GM-food risks divide those two segments of the population.
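To give a concrete sense of what such a model might look like, here's a simplified sketch under assumed variable names and synthetic data (not the actual models or data used): regress each 0-7 risk item on z-scored worldview scales, then contrast predicted ratings at opposing +/-1 SD combinations of the scales.

```python
# Simplified sketch of a model measuring cultural division over a risk item:
# regress the 0-7 rating on z-scored worldview scales, then contrast predicted
# ratings for "hierarch individualists" vs. "egalitarian communitarians".
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({"hierarchy": rng.normal(size=n),
                   "individualism": rng.normal(size=n)})
# hypothetical pattern: polarized climate ratings, flat GM-food ratings
df["climate_risk"] = np.clip(3.9 - 1.2 * df.hierarchy
                             - 0.8 * df.individualism
                             + rng.normal(size=n), 0, 7)
df["gm_risk"] = np.clip(4.3 + rng.normal(size=n), 0, 7)

def cultural_contrast(item):
    m = smf.ols(f"{item} ~ hierarchy * individualism", data=df).fit()
    newx = pd.DataFrame({"hierarchy": [1.0, -1.0],      # hierarch vs. egalitarian
                         "individualism": [1.0, -1.0]})  # individualist vs. communitarian
    hi_ind, eg_com = m.predict(newx)
    return hi_ind - eg_com  # negative: hierarch individualists see less risk

print(cultural_contrast("climate_risk"))  # large negative gap
print(cultural_contrast("gm_risk"))       # near zero: no cultural division
```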

 

There are a number of things one could say here.

One is—holy smokes, the US public is apparently more worried about GM-food risks than they are about global warming, nuclear power, and guns! The “average American” would assign a ranking of 4.3 to GM foods (just above “moderately risky”) but only 3.9 for global warming (just below), 4.0 (spot on) for nuclear, and 2.9 (between “low” and “moderate”) for guns.

But that wouldn’t be the way I’d read these results. First of all, while it’s true that GM foods are apparently more scary for the “average” American than guns, nuclear power, and climate change, the striking thing is just how unconcerned that “person” is with any of those risks. “High rates of taxation for businesses” are apparently much more worrisome for the "mean" member of the American population than the earth overheating or people being shot. Given how unconcerned this guy/gal is with all these other risks, should we get all that excited that he/she is a bit more concerned about GM foods?

Notice too that the "mean" member of the population isn't as concerned with GM foods as with high business tax rates (4.5)—or with illegal immigration (4.7) or government spending (5.3). What to make of that?...

But second and more important, look at the cultural variance on these risks.  Global warming turns out to be the most serious risk for egalitarian communitarians. Indeed, that group sees nuclear power as much riskier, too, than either business tax rates, illegal immigration, or “government spending,” which are about as scary for that group as gun risks.  Hierarchical individualists have diametrically opposed perceptions of the dangers posed by all of these particular risk sources.

Bear in mind, hierarch individualists and egalitarian communitarians aren’t rare or unusual people. They are pretty recognizable in lots of respects—including their political affiliations, which amount to “Independent leans Republican” and “Independent leans Democrat,” respectively.

Given this, it’s not clear that it makes much sense to assign meaning to the “average” or “population mean” scores on these risks. Because real people have particular rather than "mean" cultural outlooks, we should ask not how the "average" person perceives culturally contested risks, but how someone like this sees those risks as opposed to someone like that?

Yet note, the risks posed by GM foods are not culturally contested. We are all, in effect, "average" there.  Moreover, for both hierarchical individualists and egalitarian communitarians, GM-food risks are in the “middle” of the range of risk sources they evaluated.

So what I’d say, first, is that there is definitely no cultural conflict for GM foods in the US—at least not of the sort that we see for climate change, nuclear power, guns, etc.

Second, I’d say that I don’t think there’s very much concern about GM foods generally. The “middling” score likely just means that members of the sample didn’t feel nearly as strongly about GM foods as they felt—one way or the other—about the other risks. So they assigned a middling rating.

But third, and most important, I’d say that this is exactly the time to be worried about cultural polarization over GM foods.

As I said at the outset of this series, putative risk sources aren’t born with antagonistic cultural meanings. They acquire them.

But once they have them, they are very very very hard to get rid of. 

In both parts, I likened culturally antagonistic meanings to “pollution” of the “science communication environment.”  Given how hard it is to change cultural meanings, it’s got to be a lot easier and more effective to keep that sort of contamination out—to deflect antagonistic meanings away from novel technologies or ones that otherwise haven’t acquired such resonances—than it is to “clean it up” once an issue has become saturated with such meanings.

Consider the debate over climate change, which is highly resistant to simple “reframing” strategies. Perhaps it would have worked to have put Nancy Pelosi and Newt Gingrich on a couch together before 2006. But today, the simple recommendation “use ideologically diverse messengers!” is not particularly helpful.

So I believe the data-informed answer to John Timmer's question is, no, GM foods don't provoke anything like the sort of antagonistic meanings that climate change expresses.

And for that reason, I'd argue, the efforts of reflective science journalists and others to resist the release of such contaminants into the science communication environment are as timely as they are commendable.

Part one in this series.

Part two.

Sunday
Oct 14, 2012

Resisting (watching) pollution of the science communication environment in real time: genetically modified foods in the US, part 2

Just as the health of individual human beings depends on the quality of the natural environment, the well-being of a democratic society depends on the quality of the science communication environment.

The science communication environment is the sum total of cues, influences, and processes that ordinary members of the public rely on to participate in the collective knowledge society enjoys by virtue of science.

No one (not even scientists) can personally comprehend nearly as much of what is known to science as it makes sense for them—as consumers, as health-care recipients, as democratic citizens—to accept as known by science. To participate in that knowledge, then, they must accurately identify who knows what about what.

When the science communication environment is in good working order, even people who have only rudimentary understandings of science will be able to make judgments of that kind with remarkable accuracy. When it is not, even citizens with high levels of scientific knowledge will be disabled from reliably identifying who knows what about what, and will thus form conflicting perceptions of what is known by science—to their individual and collective detriment.

Among the most toxic threats to the quality of a society’s science communication environment are antagonistic cultural meanings: emotional resonances that become attached to risks or other policy-relevant facts and that selectively affirm and denigrate the commitments of opposing cultural groups.

Ordinary individuals are accustomed to exercising the faculties required to determine who knows what about what within such groups, whose members, by virtue of their common outlooks and experiences, interact comfortably with one another and share information without misunderstanding or conflict. Because antagonistic cultural meanings create strong psychic pressures for members of opposing groups to form and persist in conflicting sets of factual beliefs, such resonances enfeeble the reliable functioning of the faculties ordinary people (including highly science literate ones) use to participate in what is known by science.

Antagonistic cultural meanings are thus a form of pollution in the science communication environment. Their propagation accounts for myriad divisive and counterproductive policy conflicts—including ones over climate change, nuclear power, and private gun ownership.

In part one of this series, I described the complex of economic and political forces that have infused the issue of genetically modified (GM) foods with culturally antagonistic meanings in Europe.

I also noted the signs, including the campaign behind the pending GM food-labeling referendum in California, that suggest the potential spread of this contaminant to the US science-communication environment.

What makes the campaign a pollutant in this regard has nothing to do with whether GM foods are in fact a health hazard (there’s a wealth of scientific data on that; readers who are interested in them should check out David Tribe’s blog). Rather, it has to do with the deliberate use of narrative-framing devices—stock characters, dramatic themes, allusions to already familiar conflicts, and the like—calculated to tap into exactly the culturally inflected resonances that pervade climate change, nuclear power, guns, and various other issues that polarize more egalitarian and communitarian citizens, on the one hand, and more hierarchical and individualistic ones, on the other.

But as I adverted to, there is at least one countervailing influence that didn’t exist in Europe before it became a site of political controversy over GM foods but that does exist today in the US: consciousness of the way in which dynamics such as these can distort constructive democratic engagement with valid science, and a strong degree of resolve on the part of many science communicators to counteract them.

Science commentators like Keith Kloor and David Ropeik, e.g., have conspicuously criticized the propagation of what they view as unsubstantiated claims about GM-food health risks.

Both of these writers have been outspoken in criticizing ungrounded attacks on the validity of science on climate change, too. Indeed, Kloor recently blasted GM food opponents as the “climate skeptics of the Left.”

Precisely because they have conspicuously criticized distortions of science aimed at discounting environmental risks in the past, their denunciation of those whom they see as distorting science to exaggerate environmental risks here reduces the likelihood that GM foods risks will become culturally branded.

Science journalists, too, have been quick to respond to what they see as the complicity of some of their own in disseminating questionable science claims on GM foods.

In one still-simmering controversy, a large number of journalists accepted an offer of advance access to an alarming study on GM-food risks in return for refraining from seeking the opinion of other scientists before publishing their “scoop” stories. Timed for release in conjunction with a popular book and a TV documentary, the study, conducted by a scientist with a high profile as a supporter of GM-food regulation, was in fact thereafter dismissed as non-credible by expert toxicologists—although not before the alarming headlines were seized on by proponents of the California labeling proposition as well as European regulators.

Writing about the controversy, NY Times writer Carl Zimmer blasted the affair as a “rancid, corrupt way to report about science.” It was clear to the participating reporters, Zimmer observed, that the authors of the study were seeking to “exclude any critical appraisal from the initial burst of attention” in the media, thereby “reinforcing opposition to genetically modified foods.” “We need to live up to our principles, and we need to do a better job of calling out bad behavior…. [Y]ou all agreed to do bad journalism, just to get your hands on a paper. For shame.”

Ars Technica editor John Timmer amplified Zimmer’s response. “Very little of the public gets their information directly from scientists or the publications they write,” Timmer pointed out. “Instead, most of us rely on accounts in the media, which means reporters play a key role in highlighting and filtering science for the public.” In this case, Timmer objected, “the press wasn't reporting about science at all. It was simply being used as a tool for political ends.”

One reason to be impressed by these sorts of reactions to GM foods is that they suggest the possibility of using professional norms as a more general device for protecting the quality of the science communication environment.

As I indicated in my last post, there is nothing inevitable about the process by which a risk issue becomes suffused with antagonistic cultural meanings.  Those kinds of toxic associations are made, not born.

It follows that we should make protection of the science communication environment a matter of self-conscious study and self-conscious action.  The natural environment cannot be expected to protect itself from pollution without scientifically informed action on our part. And the same goes for the quality of the science communication environment.

I’m of the view that the sorts of collective action that protection of the science communication environment requires will have to come from various sources, including government, universities, and NGOs.

But clearly one of the sources will have to be professional science communicators. Timmer is clearly right about the critical role the press plays—not just in translating what’s known by science into terms that enable curious people to experience the thrill of sharing in the wondrous insights acquired through our collective intelligence (I myself am so, so grateful to them for that!), but in certifying who knows what about what so that as democratic citizens people can reliably gain access to the knowledge they need to contribute to intelligent collective decisionmaking.

Animated by diverse motivations—commercial and ideological—actors intent on disabling the faculty culturally diverse citizens use to discern who knows what about what can thus be expected to strategically target the media. Strong professional norms are a kind of warning system that can help science journalists recognize and repel efforts to use them as instruments for polluting the science communication environment.

Unlike centrally administered rules or codes, norms operate as informal, spontaneous guides for collective behavior. They get their force from internalized emotional commitments both to abide by shared standards of conduct and to contribute to enforcement of them by censure and condemnation of violators. Norms are propagated as members of a community observe examples of behavior that express those commitments and see others responding with admiration and reciprocation. That all seems to be happening here.

This unusual opportunity to watch an attempt to inject a new toxic meaning into the science communication environment also furnishes a unique opportunity to learn something about who can protect that environment from pollution and how.

Oh!  I said I would share some data on cultural perceptions of GM food risks in the US in this installment of the series. But don’t you agree that I’ve already gone on more than long enough? So I’ll just have to present the data next time—in the third, and I promise final, post in this (now) 3-part (I actually imagined only one when I started) series.  (But here's a sneak preview.)

Part one in this series.

Part three.

Friday
Oct 12, 2012

Watching (resisting) pollution of the science communication environment in real time: genetically modified foods in the US, part 1

Putative risk sources are not born with culturally divisive meanings. They acquire them.

Something happens—as a result perhaps of strategic manipulation but also possibly of accident or misadventure—that imbues some technology with resonances that selectively affirm and denigrate the outlooks of opposing groups. At that point, how to regard the risks the technology poses—not only what to do to ameliorate them, but whether even to acknowledge them as real—becomes a predictable occasion for disagreement between the groups’ respective members.

By highlighting the association between competing positions and competing cultural identities, such disagreement itself sharpens the antagonistic meanings that the technology bears—thereby magnifying its potential to generate conflict. And so forth & so on.

But the thing that imbues a technology (or a form of behavior or a type of policy) with culturally antagonistic meanings doesn’t have to happen. There’s nothing necessary or inevitable about this process.

It’s this contingency that explains why one putative risk source (say, the HPV vaccine) can provoke intense cultural conflict while another, seemingly equivalent source (say, the H1N1 vaccine) doesn’t. It explains, too, why one and the same risk (nuclear power, e.g., or “mad cow disease”) can provoke division in one society but indifference in another, seemingly similar society.

Consider genetically modified (GM) foods. Historically, GM foods—e.g., soybeans altered to resist diseases that otherwise would be controlled by chemical pesticides, or potatoes engineered to withstand early frosts—have not provoked nearly as much concern in the US as in Europe.  Such products can be found in upwards of 70% of US manufactured foods.  In Europe, the figure is supposedly closer to 5%, due to enactment of progressively stricter regulations over the last decade and a half.

But it’s certainly possible that something could happen to make US public attitudes and regulations move in the direction of Europe’s. Indeed, it could be happening.

There is now a concerted effort underway to raise the risk profile of GM foods. The most conspicuous manifestation of it is a California ballot proposition to mandate that all foodstuffs containing GM foods bear a label that advises (warns) consumers of this fact.

The proposition is supported by organic food producers and sellers, who funded the effort to get the initiative on the ballot and are now funding the campaign to secure its approval, as well as by certain environmental groups, which are playing a conspicuous public advocacy role.

A label is not a ban. But it can definitely be a precursor to something more restrictive.

Consumers logically infer from “advisory” labels that there is a reason for them to be concerned.

Psychologically they tend to greatly overreact to any information about chemical risks—including information that tries to prevent them from overreacting by characterizing the risks in question as small or uncertain.  It’s thus in the nature of modest, informational regulations to breed concerns capable of supporting stronger, substantive regulation.

Dynamics of cultural cognition, moreover, can fuel this sort of escalation. If the source of initial concern is transmitted in a manner that conveys antagonistic resonances, then the resulting division of opinion among members of different groups can feed on itself.

The movement to promote concern with GM foods seems ripe with antagonistic meanings of this sort. The information being disseminated to promote attention to the risks of GM foods in general, and to promote support for the California initiative in particular, is suffused with cues (stock characters, distinctive idioms, links to already familiar sources of conflict such as nuclear power & climate change) that are likely to resonate with those who harbor an egalitarian-communitarian cultural style and antagonize those with a more hierarchical and individualistic one.

This framing of the issue could thus end up pitting members of these two groups—already at odds over climate change, nuclear power, gun control, and various other risks—against one another.

In that case, the US will have arrived at a state of cultural conflict over GM foods by what seems to be the same path European nations followed. There, small, local farmers took the lead in proclaiming the health risks of GM food products, which were being supplied by their larger, transnational industrial-farm rivals.

Egalitarian environmental activists enthusiastically assimilated this charge into their broader indictment of industry for putting profits ahead of human welfare. Among the ironies here was the impetus such political activity imparted to blocking the production of so-called “golden rice,” a nutritionally enhanced GM grain that public health advocates hailed for the contribution it could make to combating afflictions (including preventable blindness) in malnourished children in the developing world.

I don’t know or even have a particularly strong intuition on what risks GM foods pose.

But I do have a very strong opinion that a state of cultural polarization over GM food risks would be a bad thing. As myriad controversies--from the nuclear power debate of the 1980s to the climate change debate of today--have made clear, when risk issues become infused with antagonistic cultural meanings, democratic societies are less likely to enact policies that reflect the best scientific evidence, whatever it might be.

Okay. Enough for now. In the next in this two-part series, I will identify one important countervailing influence that didn’t exist in Europe before it became a site of conflict over GM food risks but that does exist now. I’ll also report some data that bear on the current degree of cultural polarization that exists in the US over the risks that GM foods present.

Parts two & three in this series.

References

Anderson, K., Damania, R. & Jackson, L.A. World Bank Policy Research Working Paper No. 3395 (World Bank, Sept. 2004). Available at SSRN: http://ssrn.com/abstract=625272.

Ferrari, M. Risk perception, culture, and legal change: a comparative study on food safety in the wake of the mad cow crisis. (Ashgate Pub., Farnham, Surrey, England; Burlington, VT; 2009).

Finucane, M.L. & Holup, J.L. Psychosocial and cultural factors affecting the perceived risk of genetically modified food: an overview of the literature. Soc Sci Med 60, 1603-1612 (2005).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D.M. Gentle Nudges vs. Hard Shoves: Solving the Sticky Norms Problem. U. Chi. L. Rev. 67, 607-45 (2000).

Kurzer, P. & Cooper, A. What's for Dinner? Comparative Political Studies 40, 1035-1058 (2007).

Slovic, P., Flynn, J., Mertz, C.K., Poumadere, M. & Mays, C. in Cross-Cultural Risk Perception: A Survey of Empirical Studies. (eds. O. Renn & B. Rohrmann) 55-102 (Kluwer Academic, Dordrecht, The Netherlands; 2000).

Sunstein, C.R. Laws of Fear: Beyond the Precautionary Principle. (Cambridge University Press, Cambridge, UK; New York; 2005).

Monday
Oct 8, 2012

More R^2 envy!

To add to your "it's-not-how-big-but-what-you-do-with-it" R^2 file, this from Andrew Gelman:

 

Sunday
Oct 7, 2012

Checking in on the "white male effect" for risk perception 

I read a couple of interesting studies of risk and the “white male effect” recently, one by McCright and Dunlap published (advance on-line) in the Journal of Risk Research and another in working paper form by Julie Nelson, an economist at the University of Massachusetts at Boston.

The “white male effect” (WME) refers to the observed tendency of white males to be less concerned with all manner of risk than are women and minorities.  The phenomenon was first observed (and the term coined) in a study by Flynn, Slovic & Mertz in 1994 and has been poked and prodded by risk-perception researchers ever since.

WME was the focus of one early Cultural Cognition Project study. Extending study findings by Finucane, Slovic, Mertz, Flynn & Satterfield, a CCP research team (which included WME veterans Slovic & Mertz!) found that WME could be largely attributed to the interaction of cultural worldviews with race and gender. The WME was not so much a “white male effect” as a “white hierarchical and individualistic male effect” reflecting the extreme risk skepticism of men with those worldviews.

The design and hypotheses of the CCP study reflected the surmise that WME was in fact a product of “identity protective cognition.” Identity-protective cognition is a species of motivated reasoning that reflects the tendency of people to form perceptions of risk and other facts that protect the status of, and their standing in, self-defining groups.  White hierarchical individualistic males were motivated to resist claims of environmental and certain other risks, we conjectured, because the wide-spread acceptance of those claims would justify restrictions on markets, commerce, and industry—activities important (emotionally and psychically, as well as materially) to the status of white men with those outlooks.

The McCright and Dunlap article corroborates and strengthens this basic account of WME. Using political ideology rather than cultural worldviews to measure the latent motivating disposition, M&D find that the interaction of conservatism with race and gender explains variance in perceptions of a wide range of environmental risks (thus enlarging on an earlier study of their own, too, in which they focused on climate change).

M&D suggest that WME can be seen as being generated jointly by identity-protective cognition and “system justification,” a psychological dynamic that is said to generate attitudes and beliefs supportive of the political “status quo.” They defend this claim convincingly with the evidence that they collected. But I myself would be interested to see a study that tried to pit these two mechanisms against each other, since I think they are in fact not one and the same and could well be seen as rival accounts of many phenomena featuring public controversy over risks and related policy-consequential facts.

Nelson’s paper presents a comprehensive literature review and re-analysis of various studies—not just from the field of risk perception but from economics and decision theory, too—purporting to find greater “risk aversion” among women than men.

Actually, she pretty much demolishes this claim. The idea that gender has some generic effect on risk perception, she shows, is inconsistent with the disparity in the size of the effects reported across various settings. Even more important, it doesn’t persist in the face of experimental manipulations that are more in keeping with explanations based on a variety of context-specific or culturally grounded dynamics (such as stereotype threat).

Nelson hints that the ubiquity of the “female risk aversion” claim in economics might well reflect the influence of a culturally grounded expectation or prototype on the part of researchers and reviewers—an argument that she in fact explicitly (ruthlessly!) develops in a companion essay to her study.

I got so excited by the papers that I felt like I had to do some data analysis of my own using responses from a nationally representative sample of 800 subjects who participated in a CCP study in late September.

The top figure, which reflects a regression model that includes only gender and race, shows the classic WME for climate change (the outcome variable is the “industrial strength risk perception” measure, which I’ve normalized via z-score). 

The bottom figure graphs the outcome once the worldview measures and appropriate race/gender/cultural interaction terms are added.  It reveals that WME is in truth a “white hierarchical individualistic male effect”: once the intense risk skepticism associated with being a white, hierarchical individualistic male is taken into account, there’s no meaningful gender or race variance in climate change risk perceptions to speak of.
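Here's a rough sketch of that two-step modeling strategy, with synthetic data and assumed column names (not the actual CCP code or data): fit the outcome on gender and race alone, then add worldviews plus the race/gender/worldview interactions and watch the plain gender and race coefficients shrink.

```python
# Sketch: model risk_z on gender & race alone, then add worldviews and the
# race x gender x worldview interactions. Synthetic data, assumed names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 800
df = pd.DataFrame({"male": rng.integers(0, 2, n),
                   "white": rng.integers(0, 2, n),
                   "hierarchy": rng.normal(size=n),
                   "individualism": rng.normal(size=n)})
# build in the hypothesized pattern: extra risk *skepticism* only among
# white hierarchical-individualist males
skeptic = df.male * df.white * ((df.hierarchy > 0) & (df.individualism > 0))
raw = -0.8 * skeptic + rng.normal(size=n)
df["risk_z"] = (raw - raw.mean()) / raw.std()  # z-scored outcome

base = smf.ols("risk_z ~ male + white", data=df).fit()
full = smf.ols("risk_z ~ male * white * (hierarchy + individualism)",
               data=df).fit()
# if WME is really a white hierarchical-individualist male effect, the plain
# gender/race coefficients should shrink toward zero in the full model
print(base.params[["male", "white"]])
print(full.params[["male", "white"]])
```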

For fun (and because the risk perception battery in the study also had this item in it), I also ran a model  for “the risk that high tax rates for businesses poses to human health, safety, or prosperity” in American society. Relative to the ones displayed in climate-change risk perceptions, the results are inverted:

In other words, white males are more worried about this particular risk, although again the gender-race difference is an artifact of the intensity of the perceptions of white hierarchical individualistic males.

That these characteristics predict more risk concern here is consistent with the identity-protective cognition thesis: because it burdens an activity connected to status-enhancing roles for individuals with this cultural identity, white hierarchical individualistic males can be expected to form the perception that high tax rates on business will harm society generally.

This finding also bears out Nelson’s most interesting point, in my view, since it confirms that “men are more risk tolerant than women” only if some unexamined premise about what counts as a "risk" excludes from assessment the sorts of things that scare the pants off of white men (or at least hierarchical, individualistic ones).

Cool papers, cool topic!

References

Finucane, M., Slovic, P., Mertz, C.K., Flynn, J. & Satterfield, T.A. Gender, Race, and Perceived Risk: The "White Male" Effect. Health, Risk, & Soc'y 3, 159-172 (2000).

Flynn, J., Slovic, P. & Mertz, C.K. Gender, Race, and Perception of Environmental Health Risk. Risk Analysis 14, 1101-1108 (1994).

Kahan, D.M., Braman, D., Gastil, J., Slovic, P. & Mertz, C.K. Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception. Journal of Empirical Legal Studies 4, 465-505 (2007).

McCright, A.M. & Dunlap, R.E. Bringing ideology in: the conservative white male effect on worry about environmental problems in the USA. J Risk Res (2012), advance online publication.

McCright, A.M. & Dunlap, R.E. Cool dudes: The denial of climate change among conservative white males in the United States. Global Environmental Change 21, 1163-1172 (2011).

Nelson, Julie. Are Women Really More Risk-Averse than Men?, INET Research Note (Sept. 2012)

Nelson, Julie.  Is Dismissing the Precautionary Principle the Manly Thing to Do? Gender and the Economics of Climate Change, INET Research Note (Sept. 2012)

 

Friday
Oct 5, 2012

ASTAR: bringing the culture of science to law--and the culture of law to science

Last weekend I attended and made a presentation at the Advanced Science & Technology Adjudication Resource Center (ASTAR).

ASTAR is an amazing concept. The goal of the program is to train a cadre of “science & technology resource” judges with the knowledge needed to preside over cases involving highly complex scientific issues.

Prospective ASTAR resource judges are awarded training scholarships after being nominated by the judiciaries in their state (or by one of the two participating federal courts). They then must complete 120 hours of training, including 60 hours of participation in regularly convened sessions that focus on one or another specific area of science. Once they get through that—if, really; there’s a “Ranger School” aspect to this—they are deemed ASTAR Fellows, and play an active role in the conduct of the program in addition to serving as their jurisdictions’ “resource judges.”

As impressive as all this sounds, seeing it in action is even more awe-inspiring.

The topic for this session was “Management Of Complex Cases Involving Environmental Hazards.” There were 22 (I think; I lost count!) 3/4-hour sessions crammed into the weekend. Most of them involved nuclear radiation and were taught—very expertly—by scientists from the Los Alamos National Laboratory (where I got to give a talk on Monday; more on that later).

The judges were dedicated students—bombarding the lecturers with insightful questions (many of which related to the readings the judges were assigned to do before arrival).

In my session, I talked about “risk perception & science communication.” The basic message I tried to impart was that cultural cognition is something that has to be understood by those who manage any process of fact-finding, particularly one involving laypeople (or experts, for that matter, making decisions outside of their own domains of expertise).

Obviously judges fit that description, and in addition to reviewing studies that show the impact of cultural cognition on public risk perceptions generally, I also showed how the same dynamics can affect jurors’ perceptions of trial evidence. (Slides here.)

Actually, courts are in many respects way ahead of other institutions, in & out of government, in preparing themselves to play an intelligent role in managing the impact of cultural cognition on factfinding.

Judges know that valid evidence doesn’t establish its own validity or otherwise ineluctably guide people to the truth.

They know that ordinary people likely can make accurate and defensible factual determinations on matters that turn on scientific and other forms of evidence-- but only if information is presented to them in a form and (just as important) an environment suited to the faculties ordinary people use to identify who knows what about what.

They also know that how to assure information gets presented in such a manner is not something that one can just figure out by personal hunch & speculation. Fitting the presentation of scientific & other evidence to the reasoning faculties of ordinary people is a topic that admits of--indeed, demands-- scientific investigation.

Accordingly, judges (or at least the best ones, like those who are part of & support ASTAR) want to be sure they keep up with what’s scientifically known about how to promote reliable factfinding.

That’s not the case, I pointed out to the ASTAR judges, for many other actors whose job it is to help ordinary members of the public figure out facts—ones essential to planning their financial futures, to making intelligent decisions as consumers of health care, and to making informed decisions as citizens of a self-governing society.

To illustrate this point, I told the judges about the horrendous and inexcusable science-communication misadventure surrounding the HPV vaccine. Combining our CCP study on HPV vaccine risk perceptions with media reports, I reviewed how the vaccine came to be stigmatized with the divisive cultural meanings that continue to suppress vaccination rates.

Merck polluted the science communication environment on that one. But that happened only because the FDA and CDC didn’t even know that the path the company was urging on them was one that would fill the atmosphere with disorienting partisan meanings. They didn’t know it was actually their job to make sure that didn’t happen.

And the reason they didn’t know those things, I’m sure, is that they were (and I’m worried likely remain) entirely innocent of the science of science communication.

In its monumentally important report on the state of forensic science, the National Academy of Sciences called for courts and legislators, law-enforcement agencies and universities all to combine to bring the “culture of science to law.”

The ASTAR judges are doing that.

What's more, they are doing it in a way that reflects the signature virtues of their own profession—including its insistence that lawyers and judges become familiar with expert or technical forms of knowledge essential to the performance of their own work, and that judges in particular assume responsibility for securing the conditions most conducive to informed decisionmaking in the courtroom.

Bringing these aspects of the culture of law to science would go a long way to remedying the institutional deficits in science communication that prevent our society from making full use of the vast bounty of knowledge at its disposal.

Thursday
Oct 4, 2012

Graphing interactions so that curious people can actually *understand* them

A friend & collaborator asked me,

So...could you send me a quick tip/reference on how to best graph interactions in regression? I'm just thinking of simple line-charts, comparing divergent slopes for two or three different groups after controlling for the other vars in the equation. I'm *sure* this is easily done, but I'm blanking on how. I mean, it's easy enough to draw the slope based on the unstandardized coefficient. And the Y-intercept to start that line from is...what? the B of the constant?

My response:

I'm sure you are asking b/c you are unsatisfied, understandably in my view, w/ the graphing recommendations that appear in references like Aiken, L.S., West, S.G. & Reno, R.R. Multiple Regression: Testing and Interpreting Interactions. (Sage Publications, Newbury Park, Calif.; 1991) &  Jaccard, J. & Turrisi, R. Interaction Effects in Multiple Regression, Edn. 2nd. (Sage Publications, Thousand Oaks, Calif.; 2003) -- even though those are definitely the best references for understanding the statistical logic of interactions & making intelligent modeling choices.

There are excellent papers that reflect general dissatisfaction w/ how social scientists tend to graphically report (or not) the results of multivariate regression models. They include:
  • Gelman, A., Pasarica, C. & Dodhia, R. Let's Practice What We Preach: Turning Tables into Graphs. Am Stat 56, 121-130 (2002).
  • King, G., Tomz, M. & Wittenberg., J. Making the Most of Statistical Analyses: Improving Interpretation and Presentation. Am. J. Pol. Sci 44, 347-361 (2000).
  • Kastellec, J.P. & Leoni, E.L. Using Graphs Instead of Tables in Political Science. Perspectives on Politics 5, 755-771 (2007).
They don't deal w/ interactions per se, but b/c they address the objective of how to make regression model results intelligible in general, you can easily derive from them ideas about strategies that work w/ models that include cross-product interaction terms.

I'll show you some examples below but here are some general tips I'd give: 

a. *don't* graph data after splitting sample (e.g., into "high," "medium" & "low" in political sophistication)... Graph the results of the model that includes all the relevant predictors & cross-product interaction terms as applied to the entire sample; those are the results you are trying to display & splitting sample will change/bias the parameter estimates.

b. consider z-score normalization for the outcome variable: you won't have to worry about the intercept (it should be zero, of course), and you'll avoid lots of meaningless "white space" if values within +/-1 or +/-2 SDs (the end points for the y-axis) are concentrated within a middling portion of the outcome measure. Also, for most readers, reporting the impact in terms of SDs of the outcome variable will be more intelligible than differences in raw units of some arbitrary scale (the sort you'd get by summing the likert items to form a composite likert scale, e.g.)

c. rather than graphing *slopes*, consider plotting regression estimates based on sensibly contrasting values for the predictors (and corresponding values for the cross-product interaction term); the "practical effect" of the interaction is likely to be easier to grasp that way than comparison of visual differences in slopes

d. if you are using OLS to model responses to a likert item, consider using ordered logit instead -- maybe you should be doing this anyway, but in any case, probabilities of responding at a particular level (or maybe a range of levels; say "agree either slightly, moderately, or strongly" vs. "disagree slightly, moderately, or strongly") conditional on levels of the predictor & moderator are graphically more intelligible than estimated values on an arbitrary continuous scale.

e. consider graphing estimated *differences* (& corresponding CIs) in the outcome variable at different levels of the moderator; e.g., if the difference between subjects who are from different groups (or who vary +/- 1 SD on some continuous predictor) increases conditional on the value of some continuous moderator, then use a bar graph w/ CIs or some such to show how much greater the estimated difference between the two groups is at the two levels of the moderator

f. consider Monte Carlo simulation of estimated impact of contrasting sets of predictors & moderators (& associated interactions); do kernel-density plots for 1,000 or 2,000 values of each -- it's a *really* good way to show both the contrast in the estimates & the precision of the estimates (much better than standard CIs). See King et al. above (and the sketch just after this list)

g. usually prefer connected lines to bar graphs to display contrasts; former are more comprehensible

h. in general, don't use standardized regression coefficients but do center continuous predictors (or convert them to z-scores) so that people who are reading the table can more readily interpret them
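Here's the promised sketch of strategy (f), per King, Tomz & Wittenberg: draw simulated parameter vectors from the fitted model's estimated sampling distribution, compute the quantity of interest for two contrasting predictor profiles, and compare kernel densities of the simulated estimates. Function and variable names are illustrative, not any particular CCP analysis.

```python
# Sketch of tip (f): simulate parameter vectors from a fitted OLS model's
# estimated sampling distribution, compute the estimate for two contrasting
# predictor profiles, and compare kernel densities of the simulations.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def simulate_profiles(fit, x_a, x_b, sims=2000, seed=0):
    """fit: statsmodels OLS results; x_a, x_b: design rows (including the
    constant and any cross-product interaction terms)."""
    rng = np.random.default_rng(seed)
    betas = rng.multivariate_normal(fit.params, fit.cov_params(), sims)
    return betas @ np.asarray(x_a), betas @ np.asarray(x_b)

def plot_simulated_densities(est_a, est_b, labels=("profile A", "profile B")):
    grid = np.linspace(min(est_a.min(), est_b.min()),
                       max(est_a.max(), est_b.max()), 200)
    plt.plot(grid, gaussian_kde(est_a)(grid), label=labels[0])
    plt.plot(grid, gaussian_kde(est_b)(grid), label=labels[1])
    plt.legend()
    plt.show()
```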

Have attached [reproduced below] a bunch of CCP study examples that reflect one or another of these strategies or related ones. BTW, of course, all of these reflect things that I learned  to do from collaborating w/ Don [Braman], who like all great teachers teaches people how to teach themselves.

(A set of CCP study graphics illustrating these strategies appeared here as expandable thumbnails.)

Tuesday
Oct 2, 2012

The aporetic judge

Judge Mark Kravitz, of the Federal District Court for the District of Connecticut, died yesterday from Lou Gehrig’s disease. He was 62.

In my 2011 Harvard Law Review Foreword, I described a style of judicial reasoning and opinion writing that I characterized as “aporetic.”

Aporia is an ancient Greek term referring to a mode of argumentative engagement that evinces comprehension of an issue’s ineradicable complexity.

Aporia is not a state of uncertainty or equivocation (indeed, it’s not really anything that can be described by a single English word I can think of). One can reach a definitive conclusion about a problem and still be aporetic in assessing it.

But if one adopts a position that denies or purports to dispel the difficulty that a truly difficult issue poses, or that fails to recognize the undeniable weight of the opposing considerations on either side, then one isn’t being aporetic. Indeed, in that case, one is actually not getting the issue at hand, no matter how one resolves it.  The effacement of real complexity signifies a deficiency in intellectual character.

Judicial reasoning—of the sort that is expressed openly in court opinions—tends not to be aporetic. Of course, most of the issues that courts resolve are not fraught with complexity. But even in those that really are, judges tend to affect a posture of unqualified, untroubled confidence.

This form of comic overstatement is most conspicuous in Supreme Court opinions. Every relevant source of guidance (text, purpose, precedent, policy, tradition, “common sense” etc.) indisputably, undeniably converges on a single conclusion, the Justices emphatically insist. We are supposed to believe this even though the Court’s primary criterion for review is the existence of an issue that has divided lower courts, and even though the Justices themselves often disagree about which outcome in a particular case is supported indisputably, undeniably by every conceivable consideration.

But actually, there’s nothing funny about such puffing. On the contrary, it’s disturbing.

Hyperbolic certitude diminishes the legitimacy of the law by conveying to those who are disappointed by the outcome of a case that the judge who decided it was biased, and intent on deception.

It also denigrates reason. It embodies in the law an attitude that breeds cynicism and dulls reflection.

In my Foreword, I defended an alternative, aporetic idiom of judicial reasoning that recognizes rather than effaces genuine complexity. Aporia in judicial reasoning, I argued, should be seen as a judicial virtue—because in fact it is. Being able to see complexity and being moved to engage it openly are character dispositions, and they conduce to being a good judge.  A judge who is committed to being just will experience aporia when he or she must decide a genuinely complex case; and by resort to aporetic reasoning in his or her opinion, that judge assures citizens generally that their rights are being determined by someone committed to judging impartially.

Mark Kravitz had this virtue. In fact, for me, he was and remains the model of it.  Before I had occasion to observe him as a judge, I had (despite many years studying and practicing law) only a dim, inchoate sense of judicial aporia; when I try to make the picture as vivid and compelling for others as it now is for me, I try to describe Mark Kravitz.

Last April, Judge Kravitz decided a case—one of his last—in which members of the Occupy protest movement brought a suit to try to halt the imminent, forcible removal of their tent city from the New Haven Green. He denied their motion for an injunction.

No one can read his opinion, though, and escape the conclusion that the issues it presented were difficult.  Indeed, in a tone that was rare in his opinions, Judge Kravitz expressed anger at the city’s attorneys for attempting to avoid—and thus for seeking to tempt the court to avoid—acknowledging the seriousness of the Occupy protestors’ position. Dismissing the city attorneys’ argument that the protestors’ encampment did not qualify as “speech” protected by the First Amendment, the judge wrote: “One would have to have lived in a bubble for the past year to accept Defendants' claim that Occupy's tents ‘could simply mean that the plaintiffs enjoy camping.’ ”

The Occupy movement, in New Haven as elsewhere, aims to exemplify its message: to express the desire that the economically disenfranchised become more central to American public life by literally placing the economically disenfranchised in the center of America's public spaces. Defendants need not deny the obvious political expressivity of this act in order to argue that reasonable limits on acts like this may still be necessary and appropriate.

The protestors deserved an opinion that acknowledged their dignity and public spirit. As disappointed, moreover, as they no doubt were to lose the case, I suspect they will be able to make good use of the portion of the opinion I’ve quoted (they will likely see the value, e.g., of including it in a demonstration-permit application, something the New Haven protestors denied the City had the authority to require as a condition of convening a protest on the Green).

Those inclined to distrust the City deserved to know that its stated reasons for ending the protest were being scrutinized by a decisionmaker intent on being fair. They got that, too, from the quoted language, and from the numerous points in the opinion that acknowledged the force and seriousness of the protestors’ arguments even in the course of deciding against them....

We all deserve judges who are unafraid to see, and unafraid to tell us they see, genuine complexity. We have one less judge of this character today than we had yesterday. But by furnishing us such a clear and inspiring picture of what this judicial virtue looks like, Mark Kravitz gave us a resource we can use to assure that there are many, many more aporetic judges in the future than we have ever had in the past.

 

Tuesday
Sep252012

Tragedy of the Science-Communications Commons

Giving lecture today at Hampshire College. Here's the summary:

Culture, Rationality, and Risk Perception: the Tragedy of the Science-Communication Commons

From climate change to the HPV vaccine to gun control, public controversy over the nature of policy-relevant science is today a conspicuous feature of democratic politics in America. A common view attributes this phenomenon to the public’s limited comprehension of science, and to its resulting vulnerability to manipulation by economically motivated purveyors of misinformation. In my talk, I will offer an alternative account. The problem, I will suggest, is not a deficit in rationality but a conflict between what’s rational at the individual and collective levels: ordinary members of the public face strong incentives – social, psychological, and economic – to conform their personal beliefs about societal risk to the positions that predominate within their cultural groups; yet when members of diverse cultural groups all form their perceptions of risk in this fashion, democratic institutions are less likely to converge on scientifically informed policies essential to the welfare of all. I will discuss empirical evidence that supports this analysis--and that suggests potential strategies for securing the collective good associated with a science communication environment free of the conflict between knowing what is known and being who we are.

The talk will feature data from our study of science-comprehension and cultural polarization on climate change and our experimental examination of how using geoengineering as a framing device can promote more open-minded engagement with climate science.

Slides here.

Monday
Sep242012

Climate-change risk communication and risk management for businesses

I haven't had a chance to read this really interesting-looking book yet (I just ordered it) but I find the simple existence of it fascinating.

As a national political issue, global climate change has much more relevance to people as a focus for conveying to themselves & others that they belong to a certain cultural "team" than it does as any sort of thing that might affect their or anyone else's health, welfare, etc. (now or in the future). As individual consumers, voters, advocates, etc., ordinary members of the public just don't matter enough to have any effect on the risks posed by global climate change (or by ill-considered responses to it). On the other hand, how their beliefs relate to those of others in their community matters a ton. We judge people's characters by the positions they take on whether climate change is "a global crisis" or a "massive hoax." Being out of sync with those on whom we depend for support -- materially, emotionally, psychically -- can be devastating. So people tend to extract from the "evidence" on climate change the information that really matters: what is someone like me supposed to think?

But this sort of dynamic is peculiar, really, to the framing of "climate change" as a national or global policy issue. When people engage issues of climate-change impact in other settings, the consequences and meanings can be very different.

This is a point I have been stressing recently in advocating more attention to political decisionmaking surrounding local adaptation (here & here, e.g.), where people engage the issue as property owners or scarce-resource consumers, where the people they are engaging are their neighbors, and where the language they have for sorting these issues out fits comfortably with their cultural identities. Those are conditions much more hospitable to open-minded, constructive engagement with climate science.

Well, the "business risk" setting is another that has advantages like these. Here people are again engaging the issue not as a symbolic one of significance to their identities as members of tribes or teams but as a financial one that could affect them in their capacity as economic actors. Here too there is a language for addressing the matter that all interested parties share and that doesn't evince hostility to or contempt for the identities of some.

What's more, the very appearance of this sort of engagement with the issue taking place might arguably be expected to have a positive impact on discussion of climate science in other places. It's tangible evidence that people w/ a dollars-and-cents stake & not just a political-ideological one are taking the science very seriously. That in itself supplies a resource that can be used, I think, to help counteract the suspicion and distrust that have poisoned the discussion environment at the national level.

Having said all this, I do think the length of the hair of the guy on the book cover might arouse suspicion in the minds of typical hierarchical individualists...

Saturday
Sep222012

What should we teach kids (& others) about cultural conflict over science? And should science education aim to "overcome" cultural cognition?

I got into an interesting email exchange with my friend Mark McCaffrey, the Programs and Policy Director at the National Center for Science Education. (One of the many things I'm grateful to Mark for is disabusing anyone of the misconception that our Nature Climate Change study on science literacy and cultural polarization implies that science literacy is irrelevant to enlightened democratic decisionmaking.)

In the course of the correspondence, Mark noted

[I]n our arena of education, one of the top three issues for teachers is "what's going to happen and what can we do about it locally?", along with "how do scientists know what they know?" and "how do I deal with controversy in the classroom about climate change?"  Bringing local context (geography and culture) obviously is imperative.

He also stated,

I'm curious, in part due to a conversation [after the recent University of Colorado conference on culture and climate change], on what role education has in shaping and helping transcend cultural frames.

As Mark’s points often do, these ones provoked a chain of reflections on my part, which included these:

A. What to teach kids (and other curious people) about the nature of cultural conflict over science

The report on the experience & interest of the teachers is fascinating. It makes me think of what a climate scientist told me recently. He reported that when he talks to members of the public, including student audiences, one thing they want to know is why there is so much controversy; that's not what they expect to see, not what they associate with scientific understanding of an issue, & they find it mysterious & puzzling.

I had two reactions:

(1) It's amazing -- inspiring, even -- to see that citizens are curious about this phenomenon, that they want to understand it; they (or some at least) notice and have the same reaction to this peculiar social phenomenon that they (or some) have to an intricate or surprising natural one. That sensibility is one of the most distinctive and admirable characteristics of our species; the commitment to giving people the resources to satisfy this sort of interest -- the education in science, certainly, to be able to comprehend the sort of knowledge that exists, but also ready & reliable access to whatever knowledge has been amassed -- is one of the signature qualities of a good society.

If in fact people -- including high school students (or maybe even younger ones) -- have this reaction to public conflict over science, then I think it would be very very worthwhile to figure out how to give them the resources a curious and intelligent citizen could use to participate in whatever collective knowledge we have about it. Certainly, I'd be happy to give advice to any science educator who thought this was a worthy objective. That's not the sort of education I do, really, but if someone who does do it wanted to have someone to work with who could try to help him or her identify what to try to make comprehensible to people, I'd be delighted to help. 

(2) When I heard this report -- of the citizens (including, again, high school students) who were confused about this issue -- it made me think too that the chance to answer the question is itself a sort of civic opportunity to contribute to a climate for discussion that itself helps, if not to dispel the confusion, at least to ameliorate the negative impact it has on common engagement with contested science issues.

In response to the scientist who reported on the curiosity of the public to make sense of the climate-change controversy, another person in the conversation noted that there are people strongly committed to misleading members of the public and who were filling the media with misinformation.

I don't deny that, but I don't think it is the aspect of the problem that is most important or useful for these curious people to understand. What is most important is that the conflict about climate change is the signature of a kind of degradation of the science communication environment, the quality of which is essential to the interest we all have in being able reliably to ascertain what is collectively known.

There will always be more that is collectively known (through science) than we can meaningfully comprehend (life is short, and the world complex enough to demand specialization). As a result, we have to make use of our ability to identify and properly interpret the signs around us about who knows what about what.

Ordinarily we are great at that; but sometimes something goes wrong -- maybe from deliberate efforts to confuse but often simply as a result of misadventure and miscalculation -- that creates conflict and chaos in those signs. That sort of state is something that inevitably confuses all of us; it is something we are all vulnerable to; and it is something the avoidance of which is critical to our common interests--however we feel about climate change, and however we feel about moral and social issues of the sort that inevitably divide people who enjoy freedom of thought.

We have to try to figure out how to respond to climate change as natural phenomenon, and as an issue that divides us. But we need to think more generally  about what we can do to protect our science communication environment from the sort of contamination that accounts for this peculiar and pernicious form of conflict over what we know.... 

B. Does knowing what is known by science require teaching students to "overcome" cultural cognition?

On overcoming cultural frames with education ... My reaction is "yes & no."  

Yes in the sense that I think the sort of influences associated with cultural cognition are not "all there is" -- by any means! -- to engaging scientific information, and are qualified in particular by "professional habits of mind." That is, I think part of the nature of professionalization is that it imbues in those who are subject to it a set of conceptual frameworks, a collection of reasoning skills, and also a cluster of dispositions (some reflective & conscious but others more or less automatic and even emotional) that help them reliably to engage information in the manner suited to accomplishment of the expert reasoning task at hand.

This is so for scientists; but it is true for doctors, lawyers, journalists, etc. These habits of mind will usually steer professionals away from the sorts of errors they might make were they to engage information through the mechanisms distinctive of cultural cognition.

But I don't think that it is feasible for everyone to attain the professional habits of mind of the expert with respect to every domain in which they will need to participate in or have access to expert knowledge. Even those who have experts' professional habits of mind in one domain will need to make use of information outside of it, in ones in which the habits of mind most suited to engaging information are different from the ones they use in theirs.

Here, then, is where I come to the "no" part of the answer. Because we must in fact participate in or apprehend what is known in domains in which we lack the substantive knowledge and habits of mind distinctive of those who produce it, we will -- all of us-- need to exercise a distinct faculty suited to ascertaining what is collectively known (one that often involves being able to identify who knows what they are talking about).

This is conjecture on my part but I am of the view that the dynamics associated with cultural cognition are integral to the operation of that faculty. We figure out what is known by accessing cues of certification that are native to affinity groups within which we are comfortable and socially competent. The groups are diverse (how could they not be in a pluralistic society?); but they are all generally *reliable* in guiding their members to an accurate understanding of what is collectively known -- by science and by other expert ways of knowing (what groups could possibly persist that failed to put their members in touch with such knowledge, which is critical, in fact, to individual well-being?).

So "cultural frames" are not something to be overcome in the interest of making us able to know things; they are vital pieces of equipment that we need in order to participate in what is collectively known. The most one could do, I suppose, is replace them with something else -- but the other thing would not be professional habits of mind, since those will always be out of reach for most and in any case domain specific -- but rather some other regime of social certification.

I don't see cultural cognition as a bias, or even as a "heuristic." It is an intrinsic component of human rationality. But its reliable operation presupposes certain conditions -- what I would characterize, have characterized already in this msg, as an uncontaminated or clean "science communication environment." The goal is not to "overcome culture"; it is to protect the conditions in which culture can make the valuable -- and amazing -- contribution that it does to our being rational beings capable of acquiring knowledge through aggregated, cumulative inquiry into the workings of nature.

****

Mark responded, predictably and characteristically, with additional thoughtful comments relating to whether these sorts of ideas (which I think he himself might qualify or revise; he is the one with the professional habits of mind suited to educating people, including science educators) might be turned into concrete directives and materials relevant to science education. That would be fantastic in my view. I'd certainly be willing to help him or other science-education experts explore this possibility!

Friday
Sep212012

Followup exchange on Sunstein op-ed & science communication

I got a thoughtful email from a natural scientist who said that he and some colleagues had been discussing the Sunstein NYT op-ed as well as the reflections I posted on it the other day and had some questions:

I apologize in advance if my questions are too basic or clumsy, but I’m a little out of comfort zone as a physical scientist.  What you’ve described in various places as to what’s driving polarization makes perfect sense to me, however, the primary question I have is if this is somehow a uniquely American phenomenon.  The reason I ask is with exceptions of course the rest of the world as I’ve been told and witnessed at times does not experience the “questioning of the science” to the degree we seem to enjoy.  I’m approaching it through the lens of climate, but it may be true of other contentious scientific issues as well.  In your sampling in various studies have they been international samples or just American?  So is this effect somehow tied to our current American societal system or is it more general for all of humanity?  And has this effect been increasing or becoming more pronounced and moving into new spheres of science over time?  I know some of the history of previous “debates” such as evolution and cigarette smoke, but is it becoming more pervasive?  This is really a fascinating, yet crucially important topic for me.  And frankly it’s been humbling as a scientist that my word is not sacrosanct and a small business owner or a minister may actually be a more effective communicator of the science than I am.

Here is my response:

Nothing at all simplistic about your questions! I'll try my best to answer...

1. The science of science communication is large, diverse, growing, and provisional. The first point to realize is that there is a pretty decent-sized literature on science communication & public risk perception. It's impossible -- in a good way -- to advance concrete points w/o making judgments about which findings strike one as the most supported or the most pertinent to the issue at hand. I'll do that in responding to your inquiries. But I don't want, in doing that, to give the impression that "this is all there is to say" or "any other response has got to be wrong," etc. Actually, it's clear to me that you are already familiar with good portions of this work, so this sort of boilerplate proviso is likely completely unnecessary here; but I do feel it's important to recognize both that there are lots of live conjectures & hypotheses in play, and also lots of hard-working, smart empirical researchers whose work is well worth consulting!

2. The climate-change conflict is not a singular phenomenon. Ok... It's understandable, when viewing the phenomenon "through the lens of climate," to form the impression that the sort of conflict we see over climate change is singular in all kinds of ways -- that it applies only to that issue, e.g., or is a "strange US thing," or reflects "new & emerging skepticism about science." I actually don't think any of these things is true, and that's why it is important to widen the lens, as it were.

3. The emergence of the study of public risk perceptions and science communication -- over three decades ago!  The study of disconnects between public and expert opinion on environmental and technological risks has been around for at least 35 yrs. Moreover, the early impetus for it was the public's resistance to the predominant -- I think it's fair to say "consensus" -- view among scientists that nuclear power (particularly waste storage in deep geologic isolation) involved low risks fully amenable to effective regulation. Paul Slovic, Baruch Fischhoff & others formulated the "psychometric theory" of risk, which emphasized various dynamics neglected by the then-prevailing frameworks in decision science -- from cognitive biases of one sort or another, to distinctive qualitative valuations of risk that are independent of the sorts of things that figure in policymaking "cost-benefit" analysis. E.g., Fischhoff, B., Slovic, P., Lichtenstein, S., Read, S. & Combs, B. How Safe Is Safe Enough? A Psychometric Study of Attitudes Toward Technological Risks and Benefits. Policy Sci. 9, 127 (1978); Slovic, P., Fischhoff, B. & Lichtenstein, S. Facts versus Fears, in Judgment Under Uncertainty: Heuristics and Biases. (eds. D. Kahneman, P. Slovic & A. Tversky) pp. 163-78; Slovic, P. Perception of risk. Science 236, 280-285 (1987). This work also looked at public concerns over risks involving food additives, water pollution, air pollution and the like.

4. Cultural theory and cultural cognition.  The cultural theory of risk, associated with Mary Douglas & Aaron Wildavsky (Douglas, M. & Wildavsky, A.B. Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers. (University of California Press, Berkeley; 1982)), dates from the nuclear and clean-air debates, too. It was at that time an alternative to the psychometric theory. But "cultural cognition theory," with which Slovic has been prominently involved, essentially marries the two. See Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012). The idea is that the mechanisms featured in the psychometric theory can help to fill in why there are the sorts of relationships that Douglas posits between cultural outlooks and risk perceptions; in addition, Douglas's framework, which emphasizes systematic differences in perceptions of risk and conflict over them between groups, furnishes a basis for understanding how one and the same set of mechanisms from the psychometric theory can produce division and controversy in public debates.

5. Cross-cultural cultural cognition. These dynamics are *not* confined to the U.S. There have been plenty of studies using methods associated with the cultural theory of risk to examine conflicts over risk perception in Europe. Recently, the Cultural Cognition Project research group has been using its measures to examine conflicts over climate change in other countries, including Australia and the UK. There is also recent work emerging in Canada using measures similar to ours (I'm going to post a blog essay on this soon). 

6. We aren't culturally divided over the value of what scientists have to say; we are divided over what scientists are saying.  It is also a mistake, in my view, to associate any of these dynamics with skepticism about or hostility toward science. In the US in particular -- as you likely know -- there is widespread public confidence and trust in scientists. Members of the public who are culturally divided over risks like climate change & nuclear power are not divided over whether scientific consensus should be normative for risk regulation and policymaking; they are divided over what scientific consensus is. This happens because determining what "most scientists believe" is no more amenable to direct observation for ordinary people than melting glaciers are; people have to get the information by observing who is saying what in public discussion, and in that process, all the mechanisms that push groups apart are going to skew impressions of what the truth is about the state of scientific opinion.

CCP has studied this very issue, finding that groups divided over climate change process evidence of scientific consensus on that issue and various others in biased ways and thus form systematically opposed, and very unreliable, perceptions of what the state of scientific consensus is. See Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011). In my view, the perception that "climate skeptics are anti-science" is itself a product of culturally motivated reasoning, and the persistence of this view distracts us from addressing the real issue and likely even magnifies the problem by needlessly insulting a large segment of the public.

7. The existing science of science communication doesn't tell us what to do; rather it furnishes us with models and methods that we can and must use to figure that out. On "what to do": The literature is filled with potentially helpful strategies. But I really think that it's a mistake to think that it's useful to just sift through the literature & boil it down into lists of "dos & don'ts," e.g., "use ideologically/culturally congenial communicators!"; "know your audience!"; "use vivid images to get attention but beware vivid images b/c they scare people & numb them!"  This is a mistake, first, because in fact these sorts of admonitions can easily cause people to blunder -- e.g., to make a ham-handed effort to line up some sock-puppet advocate, whose appearance in the debate is such an obvious set-up that it drives people the wrong way.

It's also a mistake b/c the "dos & don'ts," even when they are exactly right, are just too general to be of real use. They reflect conclusions drawn from studies that are highly stylized and aimed at figuring out the real mechanisms of communication. That sort of work is really important -- b/c if you don't start out with the mechanisms of consequence, you'll get nowhere. But the studies don't in themselves tell you what to do in any particular situation b/c they are too general, too (deliberately) remote from the details of particular communication environments.

In other words, they are *models* that those who *are* involved in communication, and who know all about the particulars of the situation they are involved in, should be guided by as they come up with strategies fitted to their communication needs. And when they do that -- when they try to reproduce in their real-world settings the effects that social scientists captured in their laboratory models -- the social scientists should be there to help them test their conjectures by observing and measuring and by collecting information. They should also collect information on how that particular field experiment in science communication worked, and memorialize it and share it w/ other communicators -- for whom it will be another, even richer model of what to try in their own particular situation (where again, they should use evidence-based approaches)...

You see what I'm getting at, I'm sure.  What I've just described, btw, is the process by which medicine made the transition from an experience-based craft to a science- and evidence-based one. That's got to be the next step in the science of science communication. I really urge scientists, science communicators, and scholars of science communication to take this step; and I'm happy to contribute in any way I can!

Wednesday
Sep192012

The "local-adaptation science communication environment": the precarious opportunity  

I’ve been MIA for a while – but with an emphasis on “IA.” Over the last couple of weeks I’ve had the chance, in a variety of public & semi-public fora, to advocate making local, adaptation-focused political decisionmaking the focus of evidence-based science communication initiatives.

That setting, I believe, offers tremendous opportunities for simultaneously using the science of science communication to promote enlightened self-government and acquiring even more scientific knowledge about how science communication works in democratic societies.

At the same time, the cost of failing to “go local, and bring our empirical toolkit” could be substantial.

To recap, here’s the core argument: Essentially every one of the pollutants that make the science communication environment toxic for engaged public deliberation at the national level is absent or largely attenuated at the local level.

At the national level, positions on climate change have become indelibly suffused with meanings distinctive of rival cultural factions.

The language in which competing positions are advanced—the  pious scolding of the (unworldly, vulgar) members of the public who care more about the fate of “Paris Hilton and Anna Nicole Smith” than about the fate of the planet; the denunciation of climate scientists as agents of “global socialism” and enemies of “global free markets”—corroborates that the climate debate is “in reality,” just as its protagonists insist, “a struggle for the soul of America.”

Because being out of step with one’s cultural group in struggles over the nation’s "soul" can carry devastating personal consequences, and because nothing a person believes or does as an individual voter or consumer can affect the risks that climate change pose for him or anyone else, it is perfectly predictable—perfectly rational even—for people to engage the issue of climate change as a purely symbolic or expressive issue.

In contrast, from Florida to Arizona, from New York to Colorado and California, ongoing political deliberations over adaptation are affecting people not as members of warring cultural factions but as property owners, resource consumers, insurance policy holders, and taxpayers—identities they all share. The people who are furnishing them with pertinent scientific evidence about the risks they face and how to abate them are not the national representatives of competing political brands but rather their municipal representatives, their neighbors, and even their local utility companies.

What’s more, the sorts of issues they are addressing—damage to property and infrastructure from flooding, reduced access to scarce water supplies, diminished farming yields as a result of drought—are matters they deal with all the time. They are the issues they have always dealt with as members of the regions in which they live; they have a natural shared vocabulary for thinking and talking about these issues, the use of which reinforces their sense of linked fate and reassures them they are working with others whose interests are aligned with theirs.

In this communication environment, people of diverse values are much more likely to converge on, rather than become confused about, the scientific evidence most relevant to securing the welfare of all.

That’s exactly why in places like Arizona and Florida—where no one, Democrat or Republican, is campaigning for Congress or the Senate on a platform to “fix global climate change”—state officials have initiated networks of stakeholder and related decisionmaking processes aimed at addressing climate adaptation at a local level. In those deliberations, moreover, the same forms of scientific evidence that are disparaged as part of a “hoax” are central to planning on projects ranging from the building of flood gates to the design of off-shore nuclear power facilities.

That’s an account of the opportunity that local adaptation creates to restore the value of science as the currency of enlightened democratic decisionmaking. But it would be a huge mistake to assume that the opportunity will naturally or inevitably be taken advantage of.

Indeed, there is a considerable risk that the pollution that has contaminated the national science-communication environment will spill over and contaminate the local one as well.

We saw this happen in North Carolina, e.g., where the state legislature enacted a provision that bars use of anything but “historical data” on sea-levels in state planning. This happened because proponents of adaptation there failed to do what those in the neighboring state of Virginia were able to do in creating a rhetorical separation between the issue of local flood planning and “global climate change.” Polarizing forms of engagement have bogged down municipal planning in some parts of Florida—at the same time as progress is being made elsewhere.

Actors committed to the effective use of valid science—including municipal planners, farmers, utility companies, and conservation groups—need science communication help and in fact are asking for it. If those interested in formulating and implementing effective “communication strategies” focus only on national public opinion, they will be effectively turning their back on these people.

At the same time, if we respond by sending them nothing more than “best practice” manuals filled with generalities (“know your audience!”; “gain attention with emotionally compelling images—but beware numbing people with emotionally compelling images!”), we’ll be offering them little that can actually help them.

By use of stylized lab studies, the science of science communication has generated critical insights about valid psychological mechanisms. Such work remains necessary and valuable.

But in order for the value associated with it to be realized, social scientists must become experts on how to translate these lab models into real, useable, successful communication strategies fitted to the particulars of real-world problems. To do that, they will have to set up labs in the field, where informed conjectures based on the indispensable situation sense of local actors can form the basis for continued hypothesizing and testing.

Not only do those committed to enlightened policymaking need the science of science communication to succeed. The science of science communication needs to put itself at the disposal of those actors in order for it to continue to generate knowledge. 

Tuesday
Sep182012

Sunstein on "biased assimilation" & ideologically credible messengers

Many thanks to all the people who sent me emails asking if I saw Cass Sunstein's op-ed on "biased assimilation" today in the NYT: they made sure I didn't miss a good read!

Sunstein's basic argument is that inundating people with "balanced information" doesn't promote convergence on sound conclusions about policy because of "biased assimilation." For this, he cites (via the magic of hyperlinked text) the classic 1979 Lord, Ross & Lepper study on capital punishment.  

Sunstein's proposal for counteracting this dynamic is to recruit ideologically congenial advocates to challenge people's preexisting views: "The lesson for all those who provide information," he concludes, is "[w]hat matters most may be not what is said, but who, exactly, is saying it."

Op-ed word limits and the aversion of editors to even modest subtlety make simplification inevitable.  Given those constraints, what Sunstein manages in 800 words is a nice feat.

But being free of such constraints here, I'd say the growing "science of science communication" literature suggests a picture of public conflict over science that is simultaneously tighter and richer than the one Cass was able to present.

To begin, "biased assimilation" doesn't itself predict that identity-congruent messengers should be able to change minds. LR&L find only that people will construe information on controversial issues to reinforce what they already believe--"confirmation bias," essentially.

I believe the phenomenon at work in polarized science debates is something more general: identity-protective motivated reasoning. This refers to the tendency of people to conform their processing of information -- whether scientific evidence, policy arguments, the credibility of experts, or even what they see  with their own eyes -- to conclusions that reinforce the status of, and their standing in, important social groups.

"Biased assimilation" might sometimes be involved (or appear to be involved) when identity-protective motivated reasoning is at work. But because sticking to what one believes doesn't always promote one’s status in one’s group, people will often be motivated to construe information in ways that have no relation to what they already believe.

E.g., in a study that CCP did of nanotechnology risk perceptions, we did find that individuals exposed to "balanced information"  became culturally polarized relative to ones who hadn't received balanced information. But those in the "no-information" condition, most of whom knew little about nanotechnology, were not themselves culturally divided; they had priors that were random with respect to their cultural views. Thus, the subjects exposed to balanced information selectively assimilated it not to their existing beliefs but to their cultural predispositions--which were attuned to affective resonances that either threatened or affirmed their groups' way of life. 

Or consider a framing experiment we did involving "geoengineering." In it, we found that individuals culturally predisposed to be dismissive toward climate-change science were much more open-minded in their assessment of such science when they were first advised that scientists were proposing research into geoengineering, and not only stricter CO2 limits, as a response to climate change.

Biased assimilation -- the selective crediting or discrediting of information based on one's prior beliefs -- can't explain that result, but identity-protective motivated reasoning can. The congeniality of geoengineering, which resonates with pro-technology, pro-market, pro-commerce values, reduced the psychic cost of considering information to which individuals otherwise would have attached value-threatening implications--such as restrictions on commerce, technology, and markets.

Identity-protective motivated reasoning also explains the persuasiveness of ideologically congenial advocates that Sunstein alluded to at the end of his column.  The group values of the advocate are a cue about what position is predominant in a person's cultural group. If that cue is strong and credible enough, then people will go with the argument of the culturally congenial advocate even if the information he is presenting is contrary to their existing beliefs.

We examined this in a study of HPV-vaccine risk perceptions. In that experiment, we found that "balanced information" did polarize subjects along lines that reflected positions (and thus existing beliefs) predominant within their cultural groups. But when arguments were attributed to "culturally identifiable experts" – fictional public health experts to whom we knew subjects would impute particular cultural values -- individuals consistently adopted the position advocated by the expert whose values they (tacitly) sensed were most like theirs.   

This study shows not only that the influence of culturally congenial experts is distinct from, and stronger than, biased assimilation. It also helps to deepen our understanding of why.

Indeed, reliable understandings of “why”--and not merely analytical clarity--are what's at stake here. As I'm sure Cass would agree, one needs to do more than reach into the grab bag of effects and mechanisms if one wants to explain, predict, and formulate prescriptions. One has to formulate a theoretical framework that integrates the dynamics in question and supplies reliable insights into how they are likely to interact. Identity-protective cognition (of which cultural cognition is one conception or, really, operationalization) is a theory of that sort, whereas "biased assimilation" is (at most) one of the mechanisms that theory connects to others.

If I'm right (I might not be; show me the evidence that suggests an alternative view) to see identity-protective cognition as the more general and consequential dynamic in disputes about policy-relevant science, moreover, then it becomes important to identify what the operative group identities are and the means through which they affect cognition.  Sunstein suggests ideological affinity is important for the credibility of advocates. Well, sure, ideological affinity is okay if one is trying to measure identity-protective motivated reasoning. But for reasons I’ve set forth previously, I’d say cultural affinity is generally better -- if we are trying to explain, predict and formulate prescriptions that improve science communication. 

As for whether recruiting ideologically congenial advocates is the "lesson" for those trying to persuade "climate skeptics," that's a suggestion that I'm sure Cass would urge real-world communicators to consult Bob Inglis about before trying.  Or Rick Perry and Merck.

These two cases, of course, are entirely different from one another: Inglis took a brave stance based on how he read the science, whereas Perry took a payment to become a corporate sock-puppet. But both cases illustrate that deploying culturally congenial advocates to spread counter-attitudinal messages isn't a prescription that emerges from the literature in nearly as uncomplicated a manner as Sunstein might be seen to be suggesting.

The point generalizes. It's important to attend to the wider literature in the science of science communication because the lessons one might distill by picking out one or another study in social psychology risk colliding head-on with opposing lessons that could be drawn from others examining alternative mechanisms.

Actually, I'm 100% positive Sunstein would agree with this. Again, one can't possibly be expected to address something as complex as reconciling offsetting cognitive mechanisms (here: "trust the guy with my values," on one hand, vs. "excommunicate the heretic" & the "Orwell effect," on the other) in the cramped confines of an op-ed.

Okay, enough of that. Going beyond the op-ed, I'm curious what Sunstein now thinks about the relationship between "biased assimilation" --and identity-protective motivated reasoning generally -- and Kahneman's "system 1/system 2" & like frameworks of dual process reasoning. 

This was something on which a number of CCP researchers including Paul Slovic, Don Braman, John Gastil & myself, debated Cass in a lively exchange in the Harvard Law Review before he took on his post in the Obama Administration. Sunstein's position then was that cultural cognition was essentially just another member of the system 1 inventory of "cognitive biases."  

But research we've done since supports the hypothesis that culturally motivated reasoning isn't an artifact of “bounded rationality,” as Sunstein puts it. On the contrary, cultural cognition recruits systematic reasoning, and as a result generates even greater polarization among people disposed to use what Kahneman calls “system 2” processing.

Indeed, in our Nature Climate Change paper, we argued that this effect reflects the contribution that identity-protective cognition makes (or can make) to individual rationality. It's in the interest of individuals to conform their positions on climate change to ones that predominate within their group: whether an individual gets the science "right" or "wrong" on climate change doesn't affect the risk that climate change poses to him or to anyone else-- nothing he does based on his beliefs has any discernable impact on the climate; but being "wrong" in relation to the view that predominates in one's group can do an individual a lot of harm, psychically, emotionally, and materially. 

The heuristic mechanisms of cultural cognition (including biased assimilation, cultural-affinity credibility judgments) steer a person into conformity with his or her cultural group and thus help to make that person's life go better. And being adept at system 2 only gives such a person an even greater capacity to "home in" on & defend the view that predominates in that person's group. 

Of course, when we all do this at once, we are screwed. This is what we call the "tragedy of the risk perception commons.” Fixing the problem will require a focused effort to protect the science communication environment from the sort of toxic cultural meanings that create a conflict between perceiving what is known to science and being who we are as individuals with diverse cultural styles and commitments.

I’m glad Cass is now back from his tour of public service (and grateful to him for having taken it on), because I am eager to hear what he has to say about the issues and questions that risk-perception scholars have been debating since he’s been gone!

 

References 

Balcetis, E. & Dunning, D. See What You Want to See: Motivational Influences on Visual Perception. Journal of Personality and Social Psychology 91, 612-625 (2006).

Kahan D.M., Jenkins-Smith, J., Tarantola, T., Silva C., & Braman, D., Geoengineering and the Science Communication Environment: a Cross-cultural Study, CCP Working Paper No. 92 (Jan. 9, 2012).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Hoffman, D.A., Braman, D., Evans, D. & Rachlinski, J.J. They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D.M., Slovic, P., Braman, D. & Gastil, J. Fear of Democracy: A Cultural Critique of Sunstein on Risk. Harvard Law Review 119, 1071-1109 (2006).

Kahneman, D. Thinking, fast and slow, Edn. 1st. (Farrar, Straus and Giroux, New York; 2011).

Kunda, Z. The Case for Motivated Reasoning. Psychological Bulletin 108, 480-498 (1990).

Lessig, L. The Regulation of Social Meaning. U. Chi. L. Rev. 62, 943-1045 (1995).

Lord, C.G., Ross, L. & Lepper, M.R. Biased Assimilation and Attitude Polarization - Effects of Prior Theories on Subsequently Considered Evidence. Journal of Personality and Social Psychology 37, 2098-2109 (1979).

Sherman, D.K. & Cohen, G.L. The Psychology of Self-defense: Self-affirmation Theory, in Advances in Experimental Social Psychology, Vol. 38. (ed. M.P. Zanna) 183-242 (2006).

Sunstein, C.R. Misfearing: A reply. Harvard Law Review 119, 1110-1125 (2006).

Monday
Sep102012

Culturally polarized Australia: Cross-cultural cultural cognition, Part 3 (and a short diatribe about ugly regression outputs)

In a couple of previous posts (here & here), I have discussed the idea of "cross-cultural cultural cognition" (C4) in general and in connection with data collected in the U.K. in particular. In this one, I'll give a glimpse of some cultural cognition data from Australia.

The data come from a survey of a large, diverse general population sample. It was administered by a team of social scientists led by Steven Hatfield-Dodds, a researcher at the Australian National University. I consulted with the Hatfield-Dodds team on adaptation of the cultural cognition measures for use with Australian survey respondents.

It was a pretty easy job! Although we experimented with versions of various items from the "long form" cultural cognition battery, and with a diverse set of items distinct from those, the best performing set consisted of the two six-item sets that make up the "short form" versions of the CC scales. The items were reworded in a couple of minor ways to conform to Australian idioms.

Scale performance was pretty good. The items loaded appropriately on two distinct factors corresponding to "hierarchy-egalitarianism" and "individualism-communitarianism," which had decent scale-reliability scores. I discussed these elements of scale performance more in the first couple of posts in the C4 series.


The Hatfield-Dodds team included the CC scales in a wide-ranging survey of beliefs about and attitudes toward various aspects of climate change. Based on the results, I think it's fair to say that Australia is at least as culturally polarized as the U.S.

The complexion of the cultural division is the same there as here. People whose values are more egalitarian and communitarian tend to see the risk of climate change as high, while those whose values are more hierarchical and individualistic see it as low. This figure reflects the size of the difference as measured on a "climate change risk" scale that was formed by aggregating five separate survey items (Cronbach’s α = 0.90):

Looking at individual items helps to illustrate the meaning of this sort of division -- its magnitude, the sorts of issues it comprehends, etc.  

Asked whether they "believe in climate change," e.g., about 50% of the sample said "yes." Sounds like Australians are ambivalent, right? Well, in fact, most of them are pretty sure -- they just aren't, culturally speaking, of one mind. There's about an 80% chance that a "typical" egalitarian communitarian, e.g., will say that climate change is definitely happening; the likelihood that a hierarchical individualist will, in contrast, is closer to 20%.


There's about a 25% chance the hierarchical individualist will instead say, "NO!" in response to this same question. There's only a 1% chance that an egalitarian communitarian in Australia will give that response!

BTW, to formulate these estimates, I fit a multinomial logistic regression model to the responses for the entire sample, and then used the parameter estimates (the logit coefficients and the standard errors) to run Monte Carlo simulations for the indicated "culture types." You can think of the simulation as creating 1,000 "hierarchical individualists" and 1,000 "egalitarian communitarians" and asking them what they think. By plotting these simulated values, anyone -- literally anyone -- can see the estimated means and the precision of those estimates associated with the logit model. No one -- not even someone well versed in statistics -- can see such a result in a bare table of regression coefficients.

Yet this sort of table is exactly the kind of uninformative reporting that most social scientists (particularly economists) use, and use exclusively.  There's no friggin' excuse for this, either, given that public-spirited stats geniuses like Gary King have not only been lambasting this practice for years but also producing free high-quality software like Clarify, which is what I used to run the Monte Carlo simulations here (the graphic reporting technique I used--plotting the density distributions of the simulated values to illustrate the size and precision of contrasting estimates--is something I learned from King's work too).
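For readers who don't use Stata/Clarify, here is a rough Python sketch of the same King-style simulation logic -- fit a multinomial logit, draw parameter vectors from their estimated sampling distribution, and convert each draw into predicted response probabilities for a contrasting "culture type." The data, cutoffs, and profiles below are toy stand-ins, not the Hatfield-Dodds survey:

```python
# Sketch: Clarify-style Monte Carlo simulation for a multinomial logit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# toy data: response in {0: "no", 1: "not sure", 2: "yes"}; predictors are
# z-scored hierarchy & individualism scores (purely illustrative)
n = 800
hier = rng.standard_normal(n)
indiv = rng.standard_normal(n)
latent = -1.2*hier - 0.8*indiv + rng.logistic(size=n)
resp = np.digitize(latent, [-1.0, 1.0])        # carve into 3 categories

X = sm.add_constant(np.column_stack([hier, indiv]))
fit = sm.MNLogit(resp, X).fit(disp=0)

# 1,000 simulated parameter vectors (statsmodels flattens them column-wise)
flat = np.asarray(fit.params).ravel(order="F")
sims = rng.multivariate_normal(flat, fit.cov_params(), size=1000)

def category_probs(sim, x):
    """Predicted category probabilities for profile x under one draw."""
    b = sim.reshape(np.asarray(fit.params).shape, order="F")
    eta = np.concatenate([[0.0], x @ b])       # baseline-category logit = 0
    e = np.exp(eta - eta.max())
    return e / e.sum()

ec = np.array([1.0, -1.0, -1.0])  # "egalitarian communitarian": -1 SD on both
hi = np.array([1.0, 1.0, 1.0])    # "hierarchical individualist": +1 SD on both
p_yes_ec = np.array([category_probs(s, ec)[2] for s in sims])
p_yes_hi = np.array([category_probs(s, hi)[2] for s in sims])
print("Pr(yes|EC) ~", p_yes_ec.mean().round(2),
      " Pr(yes|HI) ~", p_yes_hi.mean().round(2))
```

Plotting kernel densities of p_yes_ec and p_yes_hi gives exactly the sort of figure described above: two humps whose locations show the contrast and whose widths show the precision of the estimates.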

So don't be awed the next time someone puts a mindless table like this in a paper or on a powerpoint slide; complain!

Oh... There are tons of cool things in the Hatfield-Dodds et al. survey, and I'm sure we'll write them all up in the near future. But for now here's one more result from the Australia C4 study:

Around 20% of the survey respondents indicated that climate change was caused either "entirely" or "mainly" by "nature" rather than by "human activity."  But the likelihood that a typical hierarchical individualist would attribute climate change to nature was around 48% (+/-, oh, 7% at 0.95 confidence, by the looks of the graphic). There's only about a 5% chance an egalitarian communitarian would treat humans as an unimportant contributor to climate change.

You might wonder how about 50% of the hierarchical individualists one might find in Australia would likely tell you that "nature" is causing climate change when less than 25% are likely to say "yes" if you ask them whether climate change is happening.

But you really shouldn't. You see, the answers people give to individual questions on a survey on climate change aren't really answers to those questions. They are just expressions of a global pro-con attitude toward the issue. Psychometrically, the answers are observable "indicators" of a "latent" variable. As I've explained before, in these situations it's useful to ask a bunch of different questions and aggregate them -- the resulting scale (which will be one or another way of measuring the covariance of the responses) will be a more reliable (i.e., less noisy) measure of the latent attitude than any one item.  Although if you are in a pinch -- and don't want to spend a lot of money or time asking questions -- just one item, "the industrial-strength risk perception measure," will work pretty well!
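Since the "aggregate a bunch of items" point comes up constantly, here is a tiny illustrative sketch (toy simulated responses, not the Australian data) of forming a composite scale from z-scored indicators and checking its reliability with Cronbach's α:

```python
# Sketch: build a composite scale from noisy indicators of a latent attitude.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(2)
latent = rng.standard_normal(300)                            # latent attitude
items = latent[:, None] + 0.8*rng.standard_normal((300, 5))  # 5 noisy indicators

print(round(cronbach_alpha(items), 2))                       # high alpha here
scale = ((items - items.mean(0)) / items.std(0)).mean(axis=1)  # the composite
```

The composite is less noisy than any single item -- which is the whole argument against getting excited about responses to particular survey questions.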

The one thing you shouldn't do, though, is get all excited about responses to specific items or differences among them. Pollsters will do that because they don't really have much of a clue about psychometrics.

Hmmm... maybe I'll do another post on "pollster" fallacies -- and how fixation on particular questions, variations in the responses between them, and fluctuations in them over time mislead people on public opinion on climate change.

Sunday
Sep022012

I love Bayes -- and you can too!

No truism is nearly so elegant as, or responsible for more deep insights than, Bayes's Theorem.

I've linked to a couple of teaching tools that I use in my evidence course. One is a Bayesian calculator, which Kw Bilz at UIUC first came up with & which I've tinkered with over time.

The second is a graphic rendering of a particular Bayesian problem. I adapted it from an article by Spiegelhalter et al. in Science.

In my view, the "prior odds x likelihood ratio = posterior odds" rendering of Bayes is definitely the most intuitive and tractable. It's really hard to figure out what people who use other renderings are trying to do besides frustrate their audience or make them feel dumb, at least if they are communicating with those who aren't used to manipulating abstract mathematical formulae.  As the graphic illustrates, the "odds" or "likelihood ratio" formalization, in addition to being simple, is the one that best fits with the heuristic of converting the elements of Bayes into natural frequencies, which is an empirically proven method for teaching anyone -- from elementary school children (or at least law students!) to national security intelligence analysts -- how to handle conditional probability.
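In that spirit, here is the whole "calculator" reduced to a few lines (a minimal sketch; the numbers are invented for illustration, not taken from the linked tools):

```python
# Bayes in odds form: posterior odds = prior odds x likelihood ratio.
def posterior_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

# e.g., a hypothesis held at 1:99 odds, plus evidence 10x more likely if the
# hypothesis is true than if it is false:
post = posterior_odds(1/99, 10)    # posterior odds of 10:99
prob = post / (1 + post)           # convert odds back to a probability
print(f"posterior odds {post:.3f} = probability {prob:.1%}")   # ~9.2%
```

In natural frequencies: of 100 cases, 1 is true and 99 are false; the evidence "flags" the 1 true case and about 9.9 of the false ones, so the probability that a flagged case is true is 1/(1 + 9.9), or about 9.2%.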

If you don't get Bayes, it's not your fault.  It's the fault of whoever was using it to communicate an idea to you.

References

Sedlmeier, P. & Gigerenzer, G. Teaching Bayesian reasoning in less than two hours. Journal of Experimental Psychology: General 130, 380-400 (2001).


Wednesday
Aug292012

Even more on motivated consequentialist reasoning

Wow—super great comments on the “Motivated consequentialist reasoning” post.  Definitely worth checking out!

Some highlights: 

  • MW & Jason Hahn question whether I’m right to read L&D as raising doubts about Haidt & Graham’s characterization of the dispositions, particularly the “liberal” one, that generate motivated reasoning about “harms” & like consequences.
  • Peter Ditto offers a very generous and instructive response, in which he indicates that he thinks L&D is “perfectly consistent” with H&G but agrees that it “generally challenges” the equation of consequentialism with systematic reasoning in Greene’s distinctive & provocative dual-process theory of moral judgment.
  • A diabolical genius calling himself “Nick” asks whether the “likelihood ratio” I assigned to L&D on the “asymmetry thesis” has been contaminated by my “priors.” I answer him in a separate post.

I am persuaded, based on MW’s, Jason’s, and Peter’s various points, that I was simply overeager in reading the L&D results as offering any particular reason to question H&G’s characterization of “liberals.” (BTW, the reason I keep using quotes for “liberals” is that I think people who self-identify as “liberals” on the 5- or 7-point “liberal-conservative” survey measure are only imperfect Liberals, philosophically speaking; the ones who self-identify as “conservatives,” moreover, are also imperfect Liberals—they aren’t even close enough to being anti-liberals to be characterized as “imperfect” versions of that; we are all Liberals, we are all small “r” republicans—here…)

The basis of my doubt is that I find it unpersuasive to suggest that intuitive perceptions of “harm” unconsciously motivate liberals or anyone else to formulate conscious, confabulatory “harm-avoidance” arguments. I don’t get this conceptually: if it’s intuitive perceptions of harm that drive the conscious moral reasoning of liberals about harm, where is the motivated reasoning? Where does confabulation come in? I also think the evidence is weak for the idea that perceptions of “harm” (or “unfairness,” for that matter) are what explain "liberals'" positions, at least on issues like climate change & gun control & the HPV vaccination. I think “liberals” are motivated to see “harm” by unconscious commitments to some cultural, and essentially anti-Liberal, perfectionist morality. That is, they are the same as “conservatives” in this regard, except that the cultural understanding of “purity” that motivates "liberals" is different from the one that motivates “conservatives.”

But I concede, on reflection, that L&D don’t furnish any meaningful support for this view.

Here’s my consolation, however, for being publicly mistaken. Ditto directs me and others to the work of Kurt Gray, who Peter advises has advanced a more systematic version of the claim that everyone’s morality is “harm” based but also infused with motivated perceptions of one or another view of “purity” or the like (a position that would make Mary Douglas smile, or at least stop scowling for 10 or 15 seconds).

Well, as it turns out, Gray himself wrote to me, too, off-line. He not only identified work that he & collaborators have done that engages H&G & also Greene in ways consistent with the position I am taking; he was also intent on furnishing me with references to responses from scholars who take issue with him. So I plan to read up. And now you can too:

There are some 16 responses to the latter -- from the likes of Alicke; Ditto, Liu & Wojcik; Graham & Iyer; and Koleva & Haidt -- in the Psychol. Inq. issue. Sadly, those, unlike the Gray papers, are pay-walled. :(

Wednesday
Aug292012

Doc., please level with me: is my likelihood ratio infected by my priors?!

In a previous post, I acknowledged that a very excellent study by Liu & Ditto had some findings in it that were supportive of the “asymmetry thesis”—the idea that motivated reasoning and like processes more heavily skew the factual judgments of “conservatives” than “liberals.” Still, I said that “there's just [so] much more valid & compelling evidence in support of the 'symmetry' thesis—that ideologically motivated reasoning is uniform ... across ideologies—” that I saw no reason to “substantially revise my view of the likelihood” that the asymmetry position is actually correct.

An evil genius named Nick asks:

So what (~) likelihood ratio would you ascribe to this study for the hypothesis that the asymmetry thesis does not exist? And how can we be sure that you aren't using your prior to influence that assessment? ….

You acknowledge Liu & Ditto’s findings do support the asymmetry thesis, yet you state, without much explanation, that you “don't view the Liu and Ditto finding of "asymmetry" as a reason to substantially revise my view of the likelihood that that position is correct.”

One way to think about it is that your LR for the Liu & Ditto study as it relates to the asymmetry hypothesis should be ~ equal to the LR from a person who is completely ignorant (in an E.T. Jaynes sense) about the Cultural Cognition findings that bear on the hypothesis. It is, of course, silly to think this way, and certainly no reader of this blog would be in this position, but such ignorance would provide an ‘unbiased’ estimate of the LR associated with the study. [Note that this is amenable to empirical testing.]

You may simply have been stating that your prior on the asymmetry hypothesis is so low that the LR for this study does not change your posterior very much. That is perfectly coherent, but I would still be interested in what’s happening to your LR (even if its effect on the posterior is trivial).

Well, of course, readers can’t be sure that my priors (1,000:1 that the “asymmetry thesis” is false) didn’t contaminate the likelihood ratio I assigned to L&D’s finding of asymmetry in their 2nd study (0.75; resulting in revised odds that "asymmetry thesis is false" = 750:1).

Worse still, I can’t.

Obviously, to avoid confirmation bias, I must make an assessment of the LR based on grounds unrelated to my priors. That’s clear enough—although it’s surprising how often people get this wrong when they characterize instances of motivated reasoning as “perfectly consistent with Bayesianism” since a person who attaches a low prior to some hypothesis can “rationally” discount evidence to the contrary. Folks: that way of thinking is confirmation bias--of the conscious variety.

The problem is that nothing in Bayes tells me how to determine the likelihood ratio to attach to the new evidence. I have to “feed” Bayes some independent assessment of how much more consistent the new evidence is with one hypothesis than another. ("How much more consistent,” formally speaking, is “how many times more likely." In assigning an LR of 0.75 to L&D, I’m saying that it is 1.33 x more consistent with “asymmetry” than “symmetry”; and of course, I’m just picking such a number arbitrarily—I’m using Bayes heuristically here and picking numbers that help to convey my attitude about the weight of the evidence in question).
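For the record, the arithmetic I'm describing is just this (the numbers are the heuristic ones from this post, not estimates of anything):

```python
# The post's own worked example of Bayesian updating in odds form.
prior_odds = 1000 / 1   # odds that the asymmetry thesis is FALSE
lr = 0.75               # LR assigned to L&D's finding: 1/0.75 = 1.33x more
                        # consistent with asymmetry than symmetry
posterior_odds = prior_odds * lr
print(posterior_odds)   # 750.0 -> revised odds of 750:1 that it's false
```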

So even if I think I am using independent criteria to assess the new information, how do I know that I’m not unconsciously selecting a likelihood ratio that reflects my priors (the sort of confirmation bias that psychology usually worries about)? The question would be even more pointed in this instance if I had assigned L&D a likelihood ratio of 1.0—equally consistent with asymmetry and symmetry—because then I wouldn’t have had to revise my prior estimate in the direction of crediting asymmetry a tad more. But maybe I’m still assigning the study (only that one small aspect of it, btw) an LR that is not as far below 1.0 as it should be, because it would just be too devastating a blow to my self-esteem to give up the view that the asymmetry thesis is false.

Nick proposes that I go out and find someone who is utterly innocent of the entire "asymmetry" issue and ask her to think about all this and get back to me with her own LR so I can compare. Sure, that’s a nice idea in theory. But where is the person willing to do this? And if she doesn’t have any knowledge of this entire issue, why should I think she knows enough to make a reliable estimate of the LR?

To try to protect myself from confirmation bias—and I really really should try if I care about forming beliefs that fit the best available evidence—I follow a different procedure but one that has the same spirit as evil Nick’s.

I spell out my reasoning in some public place & try to entice other thoughtful and reflective people to tell me what they think. If they tell me they think my LR has been contaminated in that way, or simply respond in a way that suggests as much, then I have reason to worry—not only that I’m wrong but that I may be biased.

Obviously this strategy depends (among other things) on my being able to recognize thoughtful and reflective people being thoughtful and reflective even when they disagree with me. I think I can.  Indeed, I make a point of trying to find thoughtful and reflective people with different priors all the time-- to be sure their judgment is not being influenced by confirmation bias when they assure me that my LR is “just right.”

Moreover, if I get people with a good enough mix of priors to weigh in, I can "simulate" the ideally "ignorant observer" that Nick conjures (that ignorant observer looks a lot like Maxwell's Demon, to me; the idea of doing Bayesian reasoning w/o priors would probably be a feat akin to violating the 2nd Law of Thermodynamics).
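One crude way to cash out Nick's "amenable to empirical testing" aside: collect the prior odds and the LR that each of a bunch of readers assigns, check whether the two are correlated, and pool the LRs across readers whose priors point every which way. A sketch, on entirely invented data, of course:

```python
# Hypothetical check for prior-contaminated LRs across a mix of readers.
import numpy as np

log_priors = np.log([1000, 100, 1, 0.1, 50, 3])     # odds "asymmetry is false"
log_lrs = np.log([0.75, 0.8, 0.7, 0.9, 0.6, 0.85])  # each reader's LR for L&D

# If priors are leaking into LRs, the two should be correlated.
r = np.corrcoef(log_priors, log_lrs)[0, 1]
print(f"prior-LR correlation: {r:.2f}")   # near 0 -> no sign of contamination

# The "simulated ignorant observer": pool LRs from readers whose priors
# point in all directions, e.g., by averaging on the log scale.
print(f"pooled LR: {np.exp(log_lrs.mean()):.2f}")
```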

Nick the evil genius—and others who weighed in on the post to say I was wrong (not about this point but about another: whether L&D’s findings were at odds with Haidt & Graham’s account of the dispositions that motivate “liberals” and “conservatives”; I have relented and repented on that)—are helping me out in this respect!

But Nick points out that I didn’t say anything interesting about why I assigned such a modest LR to L&D on this particular point.  That itself, I think, made him anxious enough to tell me that he was concerned that I might be suffering from confirmation bias. That makes me anxious.

So, thank you, evil Nick! I will say more. Not because I really feel impelled to tussle about how much weight to assign L&D on the asymmetry point; I suspect they would agree that it would be nice simply to have more evidence that speaks more directly to the point. But now that Nick is helping me out, I do want to say enough so that he (and any other friendly person out there) can tell me if they think that my prior has snuck through and inserted itself into my LR.

In the study in question, L&D report that subjects' “deontological” positions—that is, the positions they held on nonconsequentialist moral grounds—tended to correlate with their view of the consequences of various disputed policies (viz., “forceful interrogation,” “condom promotion” to limit STDs, “capital punishment,” and “stem cell research”).

They also found that this correlation—this tendency to conclude that what one values intrinsically just happens to coincide with the course of action that will produce the best state of affairs—increases as one becomes more “conservative” (although they also found that the correlation was still significant even for self-described “liberals”). In other words, on the policies in question, liberals were more likely to hold positions that they were willing to concede might not have desirable consequences.

Well, that’s evidence, I agree, that is more consistent with the asymmetry thesis—that conservatives are more prone to motivated reasoning than liberals are. But here's why I say it's not super strong evidence of that.

Imagine you and I are talking, Nick, and I say, "I think it is right to execute murderers, and in addition the death penalty deters." You say, "You know, I agree that the death penalty deters, but to me it is intrinsically wrong to execute people, so I’m against it."

I then say, "For crying out loud--let's talk about something else. I think torture can be useful in extracting information, & although it is not a good thing generally, it is morally permissible in extreme situations when there is reason to think it will save many lives. Agree?"  You reply, "Nope. I do indeed accept that torture might be effective in extracting information but it's always wrong, no matter what, even in a case in which it would save an entire city or even a civilization from annihilation."  

We go on like this through every single issue studied in the L&D study.

Now, if at that point, Nick, you say to me, "You know, you are a conservative & I’m a liberal, and based on our conversation, I'd have to say that conservatives are more prone than liberals to fit the facts to their ideology," I think I’m going to be a bit puzzled (and not just b/c of the small N).

"Didn’t you just agree with me on the facts of every policy we just discussed?" I ask. "I see we have different values; but given our agreement about the facts, what evidence is there even to suspect that my view of them  is based on anything different from what your view is based on -- presumably the most defensible assessment of the evidence?"

But suppose you say to me instead, “Say, don't you find it puzzling that you never experience any sort of moral conflict -- that what's intrinsically 'good' or 'permissible' for you, ideologically speaking, always produces good consequences? Do you think it's possible that you might be fitting your empirical judgments to your values?"  Then I think I might say, "well, that's possible, I suppose. Is there an experiment we can do to test this?"

I was thinking of experiments that do show that when I said, in my post, that the balance of the evidence is more in keeping w/ symmetry than asymmetry. Those experiments show that people who think the death penalty is intrinsically wrong tend to reject evidence that it deters -- just as people who think it's "right" tend to find evidence that it doesn't deter unpersuasive. There are experiments, too, like the ones we've done ("Cultural Cognition of Scientific Consensus"; "They Saw a Protest"), in which we manipulate the valence of one and the same piece of evidence & find that people of opposing ideologies both opportunistically adjust the weight they assign that evidence. There are also many experiments connecting motivated reasoning to identity-protective cognition of all sorts (e.g., "They Saw a Game") -- and if identity-protective cognition is the source of ideologically motivated reasoning as well, it would be odd to find asymmetry.

So I think the L&D study -- an excellent study -- is relevant evidence & more consistent with asymmetry than symmetry. But it's not super strong evidence in that respect—and not strong enough to warrant “changing one’s mind” if one believes that the weight of the evidence otherwise is strongly in support of symmetry rather than asymmetry in motivated reasoning.

So tell me, Dr. Nick—is my LR infected?