Sunday
Feb 5, 2012

Cultural consensus worth protecting: robots are cool!

Just a couple of years ago there was concern that artificial intelligence & robotics might become the next front in the "culture war of fact" in the US.

Well, good news: Everyone loves robots! Liberals & conservatives, men & women (the latter apparently not as much, though), rich & poor, dogs & cats!

We all know that the Japanese feel this way, but now there is some hard evidence -- a very rigorous poll conducted by the SodaHead online research site -- that there is a universal warm and fuzzy feeling toward robots in the US too.

This is, of course, in marked contrast to the cultural polarization we see in our society over climate change, and is thus a phenomenon worthy of intense study by scholars of risk perception.

But the contrast is not merely of academic interest: the reservoir of affection for robots is a kind of national resource -- an insurance policy in case the deep political divisions over climate change persist.

If they do, then of course we will likely all die, either from the failure to stave off climate-change induced environmental catastrophe or from some unconsidered and perverse policy response to try to stave off catastrophe.

And at that point, it will be up to the artificially intelligent robots to carry on.

You might think this is a made up issue. It's not. Even now, there are misguided people trying to sow the seeds of division on AI & robots, for what perverse, evil reason one can only try to imagine.

We have learned a lot about science communication from the climate change debacle. Whether we'll be able to use it to cure the science-communication pathology afflicting deliberations over climate change is an open question.  But we can and should at least apply all the knowledge that studying this impasse has generated to avoid the spread of this disease to future science-and-technology issues. 

And I for one can't think of an emerging technology more important to insulate from this form of destructive and mindless fate than artificial intelligence & robotics!

******

 

disclaimer: I love robots!! So much!!!
Maybe that is unconsciously skewing my assessment of the issues here (I doubt it, but I did want to mention it).

Friday
Feb 3, 2012

Two common (& recent) mistakes about dual process reasoning & cognitive bias

"Dual process" theories of reasoning -- which have been around for a long time in social psychology -- posit (for the sake of forming and testing hypotheses; positing for any other purpose is obnoxious) that there is an important distinction between two types of mental operations.

Very generally, one of these involves largely unconscious, intuitive reasoning and the other conscious, reflective reasoning.

Kahneman calls these "System 1" and "System 2," respectively, but as I said the distinction is of long standing, and earlier dual process theories used different labels (I myself like "heuristic" and "systematic," the terms used by Shelly Chaiken and her collaborators; the "elaboration likelihood model" of Petty & Cacioppo uses different labels but is very similar to Chaiken's "heuristic-systematic model").

Kahneman's work (including most recently his insightful and fun synthesis "Thinking, Fast and Slow") has done a lot to focus attention on dual process theory, both in scholarly research (particularly in economics, law, public policy & other fields not traditionally frequented by social psychologists) and in public discussion generally.

Still, there are recurring themes in works that use Kahneman’s framework that reflect misapprehensions that familiarity with the earlier work in dual process theorizing would have steered people away from.

I'm not saying that Kahneman — a true intellectual giant — makes these mistakes himself or that it is his fault others are making them. I'm just saying that it is the case that these mistakes get made, with depressing frequency, by those who have come to dual process theory solely through the Kahneman System 1-2 framework.

Here are two of those mistakes (there are more but these are the ones bugging me right now).

1. The association of motivated cognition with "system 1" reasoning.  

"Motivated cognition," which is enjoying a surge in interest recently (particularly in connection with disputes over climate change), refers to the conforming of various types of reasoning (and even perception) to some goal or interest extrinsic to that of reaching an accurate conclusion.  Motivated cognition is an unconscious process; people don't deliberately fit their interpretation of arguments or their search for information to their political allegiances, etc. -- this happens to them without their knowing, and often contrary to aims they consciously embrace and want to guide their thinking and acting.

The mistake is to think that because motivated cognition is unconscious, it affects only intuitive, affective, heuristic or "fast" "System 1" reasoning. That's just false. Conscious, deliberative, systematic, "slow" "System 2" reasoning can be affected as well. That is, commitment to some extrinsic end or goal -- like one's connection to a cultural or political or other affinity group -- can unconsciously bias the way in which people consciously interpret and reason about arguments, empirical evidence and the like.

This was one of the things that Chaiken and her collaborators established a long time ago. Motivated systematic reasoning continues to be featured in social psychology work (including studies associated with cultural cognition) today.

One way to understand this earlier and ongoing work is that where motivated reasoning is in play, people will predictably condition the degree of effortful mental processing on its contribution to some extrinsic goal. So if relatively effortless heuristic reasoning generates the result that is congenial to the extrinsic goal or interest, one will go no further. But if it doesn't -- if the answer one arrives at from a quick, impressionistic engagement with information frustrates that goal -- then one will step up one's mental effort, employing systematic (Kahneman's "System 2") reasoning.

But they employ it for the sake of getting the answer that satisfies the extrinsic goal or interest (like affirmation of one's identity-defining cultural group). As a result, the use of systematic or "System 2" reasoning will be biased, inaccurate.
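To make the conditional-effort idea concrete, here is a minimal toy sketch in Python (entirely hypothetical labels and probabilities of my own devising, not a model from any of the studies cited below) of how "System 2" processing can be recruited selectively and still deliver the identity-congenial answer:

```python
import random

def heuristic_judgment(evidence, predisposition):
    # Fast, impressionistic read: mostly tracks the group-congenial answer,
    # only weakly constrained by the evidence itself.
    return predisposition if random.random() < 0.8 else evidence

def systematic_judgment(evidence, predisposition):
    # Effortful scrutiny -- but the search for flaws and counterarguments
    # is itself steered by the identity-protective goal.
    finds_reason_to_reject = random.random() < 0.7
    return predisposition if finds_reason_to_reject else evidence

def motivated_judgment(evidence, predisposition):
    """Effort is conditioned on whether the cheap answer serves the goal."""
    quick = heuristic_judgment(evidence, predisposition)
    if quick == predisposition:
        return quick  # congenial answer reached cheaply: stop here
    # Uncongenial quick answer: step up to systematic processing -- which is
    # engaged for the sake of restoring the congenial conclusion.
    return systematic_judgment(evidence, predisposition)

trials = [motivated_judgment("risk is high", "risk is low") for _ in range(10_000)]
print("share of judgments that track the evidence:",
      trials.count("risk is high") / len(trials))  # roughly 0.06 in this toy setup
```

The point of the toy is only that stepping up from heuristic to systematic processing does nothing, by itself, to remove the bias.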

But whatever: Motivated cognition is not a form of or a consequence of "system 1" reasoning. If you had been thinking & saying that, stop. 

2.  Equation of unconscious reasoning with "irrational" or biased reasoning, and equation of conscious reasoning with rational, unbiased reasoning.

The last error is included in this one, but this one is more general.

Expositors of Kahneman tend to describe "System 1" as "error prone" and "System 2" as "reliable" etc.

This leads lots of people to think that heuristic or unconscious reasoning processes are irrational or at least "pre-rational" substitutes for conscious "rational" reasoning. On this view, System 1 might not always be biased or always result in error, but it is where biases occur -- biases being essentially benign or even useful heuristics that take a malignant turn. System 2 doesn't use heuristics -- it thinks things through deductively, algorithmically -- and so "corrects" any bias associated with heuristic, System 1 reasoning.

Wrong. Just wrong. 

Indeed, this view is not only wrong, but just plain incoherent.

There is nothing that makes it onto the screen of "conscious" thought that wasn't (moments earlier!) unconsciously yanked out of the stream of unconscious mental phenomena. 

Accordingly, if a person's conscious processing of information is unbiased or rational, that can only be because that person's unconscious processing was working in a rational and unbiased way -- in guiding him or her to attend to relevant information, e.g., and to use the sort of conscious process of reasoning (like logical deduction) that makes proper sense of it.

But the point is: This is old news! It simply would not have occurred to anyone who learned about dual process theory from the earlier work to think that unconscious, heuristic, perceptive or intuitive forms of cognition are where "bias" comes from, and that conscious, reflective, systematic reasoning is where "unbiased" thinking lives.

The original dual process theorizing conceives of the two forms of reasoning as integrated and mutually supportive, not as discrete and hierarchical. It tries to identify how the entire system works -- and why it sometimes doesn't, which is why you get bias, which then, rather than being "corrected" by systematic (System 2) reasoning, distorts it as well (see motivated systematic reasoning, per above).

Even today, the most interesting stuff (in my view) that is being done on the contribution that unconscious processes like "affect" or emotion make to reasoning uses the integrative, mutually supportive conceptualization associated with the earlier work rather than the discrete, hierarchical conceptualization associated (maybe misassociated; I'm not talking about Kahneman himself) with System 1/2.

Ellen Peters, e.g., has done work showing that people who are high in numeracy -- and who thus possess the capacity and disposition to use systematic (System 2) reasoning -- don't draw less on affective reasoning (System 1 ...) when they outperform people who are low in numeracy at spotting positive-return opportunities.

On the contrary, they use more affect, and more reliably.

In effect, their unconscious affective response (positive or negative) is what tells them that a "good deal" — or a raw one — might well be at hand, thus triggering the use of the conscious thought needed to figure out what course of action will in fact conduce to the person's well-being.

People who aren't good with numbers respond to these same situations in an affectively flat way, and as a result don't bother to engage them systematically.

This is evidence that the two processes are not discrete and hierarchical but rather are integrated and mutually supportive. Greater capacity for systematic (okay, okay, "System 2"!) reasoning over time calibrates heuristic or affective processes (System 1), which thereafter, unconsciously but reliably, turn on systematic reasoning.

So: if you had been thinking or talking as if  System 1 equaled "bias" and System 2 "unbiased, rational," please just stop now.

Indeed, to help you stop, I will use a strategy founded in the original dual process work.

As I indicated, believing that consciousness leaps into being without any contribution of unconsciousness is just incoherent. It is like believing in "spontaneous generation."  

Because the idea that System 2 reasoning can correct unconscious bias without the prior assistance of unconscious, system 1 reasoning is illogical, I propose to call this view "System 2 ab initio bias.”

The effort it will take, systematically, to figure out why this is an appropriate thing for someone to accuse you of if you make this error will calibrate your emotions: you'll come to be a bit miffed when you see examples; and you'll develop a distinctive (heuristic) aversion to becoming someone who makes this mistake and gets stigmatized with a humiliating label.

And voila! -- you'll be as smart (not really; but even half would be great!) as Shelly Chaiken, Ellen Peters, et al. in no time!

References:

Chaiken, S. & Maheswaran, D. Heuristic Processing Can Bias Systematic Processing - Effects of Source Credibility, Argument Ambiguity, and Task Importance on Attitude Judgment. Journal of Personality and Social Psychology 66, 460-473 (1994).

Chaiken, S. & Trope, Y. Dual-process theories in social psychology, (Guilford Press, New York, 1999).

Chen, S., Duckworth, K. & Chaiken, S. Motivated Heuristic and Systematic Processing. Psychol Inq 10, 44-49 (1999).

Balcetis, E. & Dunning, D. See What You Want to See: Motivational Influences on Visual Perception. Journal of Personality and Social Psychology 91, 612-625 (2006).

Giner-Sorolla, R. & Chaiken, S. Selective Use of Heuristic and Systematic Processing Under Defense Motivation. Pers Soc Psychol B 23, 84-97 (1997).

Hsee, C.K. Elastic Justification: How Unjustifiable Factors Influence Judgments. Organ Behav Hum Dec 66, 122-129 (1996).

Kahan, D.M. The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law. Harv. L. Rev. 125, 1 (2011).

Kahan, D.M., Wittlin, M., Peters, E., Slovic, P., Ouellette L.L., Braman, D., Mandel, G. The Tragedy of the Risk-Perception Commons: Culture Conflict, Rationality Conflict, and Climate Change. CCP Working Paper No. 89 (June 24, 2011).

Kahneman, D. Thinking, fast and slow, (Farrar, Straus and Giroux, New York, 2011).

Kahneman, D. Maps of Bounded Rationality: Psychology for Behavioral Economics. Am Econ Rev 93, 1449-1475 (2003).

Kunda, Z. The Case for Motivated Reasoning. Psychological Bulletin 108, 480-498 (1990).

Peters, E., et al. Numeracy and Decision Making. Psychol Sci 17, 407-413 (2006).

Peters, E., Slovic, P. & Gregory, R. The role of affect in the WTA/WTP disparity. Journal of Behavioral Decision Making 16, 309-330 (2003).

 

Tuesday
Jan 31, 2012

The Goldilocks "theory" of public opinion on climate change

We often are told that "dire news" on climate change provokes dissonance-driven resistance.

Yet many commentators who credit this account also warn us not to raise public hopes by even engaging in research on -- much less discussion of -- the feasibility of geoengineering. These analysts worry that any intimation that there's a technological "fix" for global warming will lull the public into a sense of false security, dissipating political resolve to clamp down on CO2 emissions.

So one might infer that what's needed is a "Goldilocks strategy" of science communication -- one that conveys neither too much alarm nor too little but instead evokes just the right mix of fear and hope to coax the democratic process into rational engagement with the facts.

Or one might infer that what's needed is a better theory--or simply a real theory--of public opinion on climate change.

Here's a possibility: individuals form perceptions of risk that reflect their cultural commitments.

Here's what that theory implies about "dire" and "hopeful" information on climate change: what impact it has will be conditional on what response -- fear or hope, reasoned consideration or dismissiveness-- best expresses the particular cultural commitments individuals happen to have.

And finally here's some evidence from an actual empirical study conducted (with both US & UK samples) to test this conjecture:

  • When individuals are furnished with a "dire" message -- that substantial reductions in CO2 emissions are essential to avert catastrophic effects for the environment and human well-being -- they don't react uniformly. Hierarchical individualists, who have strong pro-commerce and pro-technology values, do become more dismissive of scientific evidence relating to climate change. However, egalitarian communitarians, who view commerce and industry as sources of unjust social disparities, react to the same information by crediting that evidence even more forcefully.
     
  • Likewise, individuals don't react uniformly when furnished with "hopeful" information about the contribution that geoengineering might make to mitigating the consequences of climate change. Egalitarian communitarians — the ones who ordinarily are most worried — do become less inclined to credit scientific information indicating that climate change is a serious problem. But when given the same information about geoengineering, the normally skeptical hierarchical individualists respond by crediting such scientific information more.

Am I saying that this account is conclusively established & unassailably right, that everything else one might say in addition or instead is wrong, and that therefore this, that, or the other thing ineluctably follows about what to do and how to do it? No, at least not at the moment.

The only point, for now, is about Goldilocks. When you see her, watch out.

Decision science has supplied us with a rich inventory of mechanisms. Afforded complete freedom to pick and choose among them,  any analyst with even a modicum of imagination can explain pretty much any observed pattern in risk perception however he or she chooses and thus invest whatever communication strategy strikes his or her fancy with a patina of "empirical" support.

One of the ways to prevent being taken in by this type of faux explanation is to be very skeptical about Goldilocks. Her appearance -- the need to engage in ad hoc "fine tuning" to fit a theory to seemingly disparate observations -- is usually a sign that someone doesn't actually have a valid theory and is instead abusing decision science by mining it for tropes to construct just-so stories motivated (consciously or otherwise) by some extrinsic commitment.

The account I gave of how members of the public react to information about climate change risks didn't involve adjusting one dial up and another down to try to account for multiple off-setting effects.

That's because it showed there really aren't offsetting effects here. There's only one: the crediting of  information in proportion to its congeniality to cultural predispositions. 
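A minimal sketch (hypothetical labels and unit-sized shifts, purely illustrative of the argument here, not the study's actual model) of how that single mechanism reproduces both of the patterns described above, with no second dial to tune:

```python
def credence_shift(solution_frame, worldview):
    """+1 = credits the climate evidence more; -1 = credits it less."""
    # The one mechanism: does the solution implied by the message affirm
    # or threaten the group's defining commitments?
    affirms = {
        ("emission controls", "egalitarian communitarian"): True,
        ("emission controls", "hierarchical individualist"): False,
        ("geoengineering", "egalitarian communitarian"): False,
        ("geoengineering", "hierarchical individualist"): True,
    }
    return 1 if affirms[(solution_frame, worldview)] else -1

# The same rule generates both the "dire message backfires for some, works
# for others" pattern and the "hopeful message lulls some, engages others" one.
for frame in ("emission controls", "geoengineering"):
    for view in ("egalitarian communitarian", "hierarchical individualist"):
        print(f"{frame:18s} {view:28s} {credence_shift(frame, view):+d}")
```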

The account is open to empirical challenge, certainly.  But that's exactly the problem with Goldilocks theorizing: with it anything can be explained, and thus no conclusion deduced from it can be refuted.

 

Monday
Jan 30, 2012

More politics, pepper spray & cognition 

Over at Volokh Conspiracy there's an interactive poll that lets readers watch a video of the police tasing a D.C. Occupy protestor & then indicate whether the police were acting appropriately. The comments are a great demonstration of how people with different ideological predispositions will actually see different things in a situation like this, a recurring phenomenon in the reactions to use of force by police against Occupy protestors. I'm pretty sure the author of the post -- Orin Kerr, who'd be a refutation of the phenomenon of ideologically motivated reasoning if he weren't a mere "N of 1" -- designed the post to make readers see that with their own eyes regardless of what they "saw with their own eyes" in the video. Nice.

Friday
Jan 27, 2012

Hey, again, Chris Mooney...

Hi, Chris.

Your response was very thoughtful -- and educational! The connection to Haidt's moral psychology research added an important dimension -- as always. Thanks!

As you can see, in "Hey Chris Mooney ...," I didn't actually have in mind the project to advance the science of science communication.

I also didn't -- don't -- have in mind the "framing of science" as a communication strategy aimed at promoting support for enlightened policies, better democratic deliberations, etc., as valuable as those things might be.

I have in mind the idea that enjoyment of the wonder, as well as the wisdom, of scientific knowledge should be viewed as a good that a Liberal society enables all its citizens readily to enjoy without regard to their moral or cultural or ideological or religious orientations.

I think our Liberal society isn't doing this as well as it should. 

I'm pretty sure that it is a lot easier to build into one's life the thrill of seeing our species resolve the mysteries of nature (inevitably revealing even more astonishing mysteries) if one has a particular set of cultural commitments (ones I have, in fact) than if one has a very different set.  

The reason, in my view, is not that there is something antagonistic to science in the latter set of commitments.

Rather, it is that the content of the information that science communicators are conveying (with tremendous craft; some people are happy to be alive in the age of the microwave oven or on-demand movies; I am glad to be here when it is possible to get continuous streams of great science reporting from sources like ScienceNow, Not Exactly Rocket Science, Dot Earth, etc.) tends to be embedded in cultural meanings that fit one outlook much better than another.

That's why I mentioned the "hypothetical citizen" (who is not hypothetical) who wants science to show him or her all the miraculous devices in God's workshop. He or she gets just as much of a thrill in getting to know something about how much our species knows as I do, but doesn't get to experience it nearly as readily or as easily. 

And that bothers me. It bothers me a bit because it might well be contributing to the pathology that is attacking the discussion of climate change in our society. But more, it just bothers me because I think that that's just not the way things should be in a good society.
 

For sure, the science of science communication is a source of insight on how to deal with this problem.

But if the Liberal Republic of Science is suffering from this sort of imperfection (I truly think it is; do you feel otherwise?), then it is science journalists and related professionals (e.g., science documentary producers) who will have to remedy it -- by including attention to this goal in their shared sense of mission, and by using all the knowledge they can gather from all sources (including their own practical experimentation) to carry it out.

Thursday
Jan 26, 2012

Hey, Chris Mooney ... (or the Liberal Republic of Science project)

Hi, Chris.

You've been telling us a lot recently about the differences in how "liberals" and "conservatives" think (and admitting, very candidly and informatively, that whether they really do and what significance that might have are complicated and unresolved issues). You have a book coming out, The Republican Brain. I look forward to reading it. I really do.

But I have a question I want to ask you. Or really, I have a thought, a feeling, that I want to share, and get your reaction to.

Imagine someone (someone very different from you; very different from me)-- a conservative Republican, as it turns out--who says: "Science is so cool -- it shows us the amazing things God has constructed in his cosmic workshop!"

Forget what percentage of the people with his or her cultural outlooks (or ideology) feel the way that this particular individual does about science (likely it is not large; but likely the percentage of those with a very different outlook -- more secular, egalitarian, liberal -- who have this passionate curiosity to know how nature works is small too. Most of my friends don't--hey, to each his own, we Liberals say!).

My question is do you (& not just you, Chris Mooney; we--people who share our cultural outlooks, worldview, "ideology") know how to talk to this person? Talk to him or her about climate change, or about whether his or her daughter should get the HPV vaccine? Or even about, say, how chlorophyll makes use of quantum mechanical dynamics to convert sunlight into energy? I think what "God did in his/her workshop" there would blow this person's mind (blows mine).

Like I said, I look forward to reading The Republican Brain.

But there's another project out there -- let's call it the Liberal Republic of Science Project -- that is concerned to figure out how to make both the wisdom and the wonder of science as available, understandable, and simply enjoyable to citizens of all cultural outlooks (or ideological "brain types") as possible.

The project isn't doing so well. It desperately needs the assistance of people who are really talented in communicating science to the public.

I think it deserves that assistance.  

Wouldn't you agree?

Thursday
Jan 26, 2012

Efforts at promoting healthier diet undermined by mixed messaging?

Forks Over Knives is one of several recent films concerned with the so-called 'obesity epidemic' and urging dietary reform. (See also Killer At Large; Food, Inc.; Planeat). These films are attempting to convey an important message; however, I am concerned that their persuasive tactics – namely, condemning the national food industry and linking obesity to global warming – run the risk of culturally polarizing healthier eating, a seemingly secular, universally appealing value. The films start out with important, on-point information establishing the 'obesity epidemic' as a significant public health issue: one-third of adults and 17% of children are obese, and one-third of children are overweight, resulting in high blood pressure, high cholesterol, and early-onset diabetes. Obesity-associated high cholesterol, diabetes, cardiovascular disease and stroke (two of the leading causes of death) contribute significantly to the U.S.'s extraordinarily high per capita cost of health care, according to the CDC. The films then present evidence that diets high in cholesterol from animal products, saturated fat, and sugar likely cause obesity and its associated health risks, and suggest dietary reform.

But instead of staying on this narrow message – eat healthier to avoid these health risks – they take the argument further. Here’s where they risk undermining receptiveness to their main message by unnecessarily making two culturally polarizing arguments: (a) they take a strong anti-industry bent – urging we repudiate the exploitative national food industry (and switch to local farming, or raw vegan diets, etc.), and (b) they link obesity to global warming. The films argue: ‘Not only should you reform diet to promote your own health, but you should change your diet in order to thwart the exploitative national food industry and save the planet from global warming.’ These films are not alone in connecting obesity to global warming. (See also, e.g., CNN; ABC; U.K. medical journal The Lancet; and Nature, Global Warming: Is Weight Loss a Solution?); one recent article even uses the tagline “obesity is the new global warming.” 

By infusing messages about healthier diet with demands to repudiate the national food industry and threats of global warming, these films seem to unnecessarily tie healthy eating to culturally polarizing issues. The call for healthier dieting urges reduced consumption of beef and dairy products – a deeply rooted American industrial and cultural tradition. This threat to beef and dairy, when joined with arguments to revolutionize the national food industry and stop global warming, unnecessarily implicates and threatens the entire traditional American industrial way of life (meat & potatoes) associated with dominance and masculinity – trucks, farms, factories, steaks and burgers. It seems that this connection – reform your diet in order to stop exploitative national industry and avert global warming – might make the idea of dietary reform particularly threatening to hierarchical values. This might induce biased processing, or cause some audience members to discredit (out of cultural defensiveness) evidence on the risks of over-consumption of animal-product cholesterol, saturated fat, and sugar, thus generating culturally protective resistance to dietary reform that promotes the seemingly secular, universal values of health and longevity. One commentator writing about Forks Over Knives, otherwise receptive to the film's message about dietary reform, captures this problem: “[T]he documentary just may be the Inconvenient Truth of the digestive system… My problem with the documentary is where it crosses into puritanical proselytizing about the value of a vegan lifestyle. Here food becomes something unappetizingly pragmatic, and elements of what eating means to a society – from cultural to religious to familial – are downplayed.”

There has been great resistance from parents to improving school lunch programs, which serve meals loaded with fatty, high-cholesterol, and sugary ingredients that have been linked to obesity and associated health problems. Resistance persists even when the schools are shown they can produce healthy lunches for the same cost, without much structural change. Certainly, there is institutional and industry resistance to change, but I wonder whether part of parental resistance (i.e., parents insisting that french fries be served at least three times a week) is a defensive response to dietary reform perceived as a cultural threat. Messages aiming to encourage healthier eating should be careful to avoid the implication that healthier dieting requires rejecting an entire lifestyle as American as, well, McDonald's drive-thru windows and apple pie.

Wednesday
Jan 25, 2012

Is cultural cognition a bummer? Part 2

This is the second of two posts addressing “too pessimistic, so wrong”: the proposition that findings relating to cultural cognition should be resisted because they imply that it’s “futile” to reason with people.

In part one, I showed that “too pessimistic, so wrong”—in addition to being simultaneously fallacious and self-refuting (that’s actually pretty cool, if you think about it)—reflects a truncated familiarity with cultural cognition research. Studies of cultural cognition examine not only how it can interfere with open-minded consideration of scientific information but also what can be done to counteract this effect and generate open-minded evaluation of evidence that is critical of one’s existing beliefs.

Now I’ll identify another thing that “too pessimistic, so wrong” doesn't get: the contours of the contemporary normative and political debate over risk regulation and democracy.

2.  "Too pessimistic, so wrong" is innocent of the real debate about reason and risk regulation.

Those who make the “too pessimistic, must be wrong” argument are partisans of reason (nothing wrong with that). But ironically, by “refusing to accept” cultural cognition, these commentators are actually throwing away one of the few psychologically realistic programs for harmonizing self-government with scientifically enlightened regulation of risk.

The dominant view of risk regulation in social psychology, behavioral economics, and legal scholarship asserts that members of the public are too irrational to figure out what dangers society faces and how effectively to abate them. They don't know enough science; they have to use emotional heuristic substitutes for technical reasoning. They are dumb, dumb, dumb.

Well, if that is right, democracy is sunk. We can't make the median citizen into a climate scientist or a nuclear physicist. So either we govern ourselves and die from our stupidity; or, as many influential commentators in the academy (one day) and government (the next) argue, we hand over power to super smart politically insulated experts to protect us from myriad dangers.

Cultural cognition is an alternative to this position. It suggests a different diagnosis of the science communication crisis, and also a feasible cure that makes enlightened self-government a psychologically realistic prospect.

Cultural cognition implies that political conflicts over policy-relevant science occur when the questions of fact to which that evidence speaks become infused with antagonistic cultural meanings.

This is a pathological state—both in the sense that it is inimical to societal well-being and in the sense that it is unusual, not the norm, rare.  

The problem, according to the cultural cognition diagnosis, is not that people lack reason. It is that the reasoning capacity that normally helps them to converge on the best available information at society’s disposal is being disabled by a distinctive pathology in science communication.

The number of scientific insights that make our lives better and that don’t culturally polarize us is orders of magnitude greater than the ones that do. There’s not a “culture war” over going to doctors when we are sick and following their advice to take antibiotics when they figure out we have infections. Individualists aren’t throttling egalitarians over whether it makes sense to pasteurize milk or whether high-voltage power lines are causing children to die of leukemia.

People (the vast majority of them) form the right beliefs on these and countless issues, moreover, not because they “understand the science” involved but because they are enmeshed in networks of trust and authority that certify whom to believe about what.

For sure, people with different cultural identities don’t rely on the same certification networks. But in the vast run of cases, those distinct cultural certifiers do converge on the best available information. Cultural communities that didn’t possess mechanisms for enabling their members to recognize the best information—ones that consistently made them distrust those who do know something about how the world works and trust those who don’t—just wouldn’t last very long: their adherents would end up dead.

Rational democratic deliberation about policy-relevant science, then, doesn't require that people become experts on risk. It requires only that our society take the steps necessary to protect its science communication environment from a distinctive pathology that disables the (ordinarily) reliable ability of ordinary citizens to discern what it is that experts know.

“Only” that? But how?

Well, that’s something cultural cognition addresses too — in the studies that “too pessimistic, so wrong” ignores and that I described in part one.

Don’t get me wrong: the program to devise strategies for protecting the science communication environment has a long way to go.

But we won’t even make one step toward perfecting the science of science communication if we resolve to “resist” evidence because we find its implications to be a bummer.


Saturday
Jan 21, 2012

R^2 ("r squared") envy

Am at a conference & a (perfectly nice & really smart) guy in the audience warns everyone not to take social psychology data on risk perception too seriously: "some of the studies have R^2's of only 0.15...."

Oy.... Where to start? Well, how about with this: the R^2 for Viagra effectiveness versus placebo ... 0.14!

R^2 is the "percentage of the variance explained" by a statistical model. I'm sure this guy at the conference knew what he was talking about, but arguments about whether a study's R^2 is "big enough" are an annoying, and annoyingly common, distraction.

Remarkably, the mistakes -- the conceptual misunderstandings, really -- associated with R^2 fixation were articulated very clearly and authoritatively decades ago, by scholars who were then or who have since become giants in the field of empirical methods.

I'll summarize the nub of the mistake associated with R^2 fixation, but it is worth noting that its durability suggests more than a lack of information is at work; there's some sort of congeniality between R^2 fixation and a way of seeing the world or doing research or defending turf or dealing with anxiety/inferiority complexes or something... It would be interesting for someone to figure out what's going on.

But anyway, two points:

1.  R^2 is an effect size measure, not a grade on an exam with a top score of 100%. We see a world that is filled with seeming randomness. Any time you make it less random -- make part of it explainable to some appreciable extent by identifying some systematic process inside it -- good! R^2 is one way of characterizing how big a chunk of randomness you have vanquished (or have vanquished, provided your model is otherwise valid, something that the size of R^2 has nothing to do with). But the difference between it & 1.0 is neither here nor there -- or in any case, it has nothing to do with whether you in fact know something or how important what you know is.

2. The "how important what you know is" question is related to R^2, but the relationship is not revealed by subtracting R^2 from 1.0. Indeed, there is no abstract formula for figuring out "how big" R^2 has to be before the effect it measures is important. Has extracting that much order from randomness done anything to help you with the goal that motivated you to collect data in the first place? The answer to that question is always contextual. But in many contexts, "a little is a lot," as Abelson says. Hey: if you can remove 14% of the variance in sexual performance/enjoyment of men by giving them Viagra, that is a very practical effect! Got a headache? Take some ibuprofen (R^2 = 0.02).

What about in a social psychology study? Well, in our experimental examination of how cultural cognition shaped perceptions of the behavior of political protestors, the R^2 for the statistical analysis was 0.19. To see the practical importance of an effect size that big in this context, one can compare the percentages of subjects identified by one or another set of cultural values who saw "shoving," "blocking," etc., across the experimental conditions.

If, say, 75% of egalitarian individualists in the abortion-clinic condition but only 33% of them in the military-recruitment-center condition thought the protestors were physically intimidating pedestrians; and if only 25% of hierarchical communitarians in the abortion-clinic condition but 60% of them in the recruitment-center condition saw a protestor "screaming in the face" of a pedestrian -- is my 0.19 R^2 big enough to matter? I think so; how about you?
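For anyone who wants to see how an R^2 in that neighborhood squares with differences that large, here is a small Python sketch. (The 75% and 33% figures are just the hypothetical ones from the preceding paragraph, not the study's actual cell percentages.) It generates a binary "saw intimidation" outcome for two experimental conditions and reads off the R^2 for a single binary predictor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # per condition; a big n just to keep the arithmetic stable

# Hypothetical cell percentages from the text above: 75% vs. 33% of
# egalitarian individualists "see" intimidation in the two conditions.
clinic = rng.random(n) < 0.75    # abortion-clinic condition
recruit = rng.random(n) < 0.33   # military-recruitment-center condition

y = np.concatenate([clinic, recruit]).astype(float)  # saw intimidation? (0/1)
x = np.concatenate([np.ones(n), np.zeros(n)])        # condition dummy

r_squared = np.corrcoef(x, y)[0, 1] ** 2  # with one predictor, r^2 = R^2
print(f"percentages: {clinic.mean():.0%} vs. {recruit.mean():.0%}")
print(f"R^2 = {r_squared:.2f}")  # comes out around 0.18
```

A "small" R^2, in other words, is perfectly consistent with a forty-plus percentage-point swing in what people report seeing.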

There are cases, too, where a "lot" is pretty useless -- indeed, models that have notably high R^2's are often filled with predictors the effects of which are completely untheorized and that add nothing to our knowledge of how the world works or of how to make it work better.

Bottom line: It's not how big your R^2 is; it's what you (and others) can do with it that counts!

reference: Meyer, G.J., et al. Psychological testing and psychological assessment: A review of evidence and issues. Am Psychol 56, 128-165 (2001).

 

Friday
Jan 20, 2012

Is cultural cognition a bummer? Part 1

Now & again I encounter the claim (often in lecture Q&A, but sometimes in print) that cultural cognition is wrong because it is too pessimistic. Basically, the argument goes like this:

Cultural cognition holds that individuals fit their risk perceptions to their group identities. That implies it is impossible to persuade anybody to change their minds on climate change and other issues—that even trying to reason with people is futile. I refuse to accept such a bleak picture. Instead, I think the real problem is [fill in blank—usually things like “science illiteracy,” “failure of scientists to admit uncertainty,” “bad science journalism,” “special interests distorting the truth”]

What’s wrong here?

Well, to start, there’s the self-imploding logical fallacy. It is a non sequitur to argue that because one doesn’t like the consequences of some empirical finding it must be wrong. And if what someone doesn’t like—and therefore insists “can’t be right”— is empirical research demonstrating the impact of a species of motivated reasoning, that just helps to prove the truth of exactly what such a person is denying.

Less amusingly and more disappointingly, the “too pessimistic, must be wrong” fallacy suggests that the person responding this way is missing the bigger picture. In fact, he or she is missing two bigger pictures:

  • First, the “too pessimistic, so wrong” fallacy is looking only at half the empirical evidence: studies of cultural cognition show not only which communication strategies fail and why but also which ones avoid the identified mistake and thus work better.
     
  • Second, the “too pessimistic, so wrong” fallacy doesn’t recognize where cultural cognition fits into a larger debate about risk, rationality, and self-government. In fact, cultural cognition is an alternative—arguably the only psychologically realistic one—to an influential theory of risk perception that explicitly does assert the impossibility of reasoned democratic deliberation about the dangers we face and how to mitigate them.

I’m going to develop these points over the course of two posts.

  1. Cultural cognition theory doesn’t deny the possibility of reasoned engagement with evidence; it identifies how to remove a major impediment to it.

People have a stake in protecting the social status of their cultural groups and their own standing in them. As a result, they defensively resist—close their minds to consideration of—evidence of risk that is presented in a way that threatens their groups’ defining commitments.

But this process can be reversed. When information is presented in a way that affirms rather than threatens their group identities, people will engage open-mindedly with evidence that challenges their existing beliefs on issues associated with their cultural groups.

Not only have I and other cultural cognition researchers made this point (over & over; every time, in fact, we turn to normative implications of our work), we’ve presented empirical evidence to back it up.

Consider:

Identity-affirmative & narrative framing. The basic idea here is that if you want someone to consider the evidence that there's a problem, show the person that there are solutions that resonate with his or her cultural values.

E.g., individualists value markets, commerce, and private orderings. They are thus motivated to resist information about climate change because they perceive (unconsciously) that such information, if credited, will warrant restrictions on commerce and industry.

But individualists love technology. For example, they are among the tiny fraction of the US population that knows what nanotechnology is, and when they learn about it they instantly think its benefits are high & its risks low. (When egalitarian communitarians—who readily credit climate change science—learn about nanotechnology, in contrast, they instantly think its risks outweigh its benefits; they adopt the same posture toward it that they adopt toward nuclear power. An aside, but only someone looking at half the picture could conclude that any position on climate change correlates with being either “pro-“ or “anti-science” generally.)

So one way to make individualists react more open-mindedly to climate change science is to make it clear to them that more technology—and not just restrictions on it-- are among the potential responses to climate change risks. In one study, e.g., we found that individualists are more likely to credit information of the sort that appeared in the first IPCC report when they are told that greater use of nuclear power is one way to reduce reliance on green-house gas-emitting carbon fuel sources.

More recently, in a study we conducted on both US & UK samples, we found that making people aware of geoengineering as a possible solution to climate change reduced cultural polarization over the validity of scientific evidence on the consequences of climate change. The individuals whose values disposed them to dismiss a study showing that CO2 emissions dissipate much more slowly than previously thought became more willing to credit it when they had been given information about geoengineering & not just emission controls as a solution.

These are identity-affirmation framing experiments. But the idea of narrative is at work in this too. Michael Jones has done research on the use of "narrative framing" -- basically, embedding information in culturally congenial narratives -- as a way to ease culturally motivated defensive resistance to climate change science. Great stuff.

Well, one compelling individualist narrative features the use of human ingenuity to help offset environmental limits on growth, wealth production, markets & the like. Only dumb species crash when they hit the top of Malthus's curve; smart humans, history shows, shift the curve.

That's the cultural meaning of both nuclear power and geoengineering. The contribution they might make to mitigating climate change risks makes it possible to embed evidence that climate change is happening and is dangerous in a story that affirms rather than threatens individualists’ values. Hey—if you really want to get them to perk their ears up, how about some really cool nanotechnology geoengineering?

Identity vouching. If you want to get people to give open-minded consideration to evidence that threatens their values, it also helps to find a communicator who they recognize shares their outlook on life.

For evidence, consider a study we did on HPV-vaccine risk perceptions. In it we found that individuals with competing values have opposing cultural predispositions on this issue. When such people are shown scientific information on HPV-vaccine risks and benefits, moreover, they tend to become even more polarized as a result of their biased assessments of it.

But we also found that when the information is attributed to debating experts, the position people take depends heavily on the fit between their own values and the ones they perceive the experts to have.

This dynamic can aggravate polarization when people are bombarded with images that reinforce the view that the position they are predisposed to accept is espoused by experts who share their identities and denied by ones who hold opposing ones (consider climate change).

But it can also mitigate polarization: when individuals see evidence they are predisposed to reject being presented by someone whose values they perceive they share, they listen attentively to that evidence and are more likely to form views that are in accord with it.

Look: people aren’t stupid. They know they can’t resolve difficult empirical issues (on climate change, on HPV-vaccine risks, on nuclear power, on gun control, etc.) on their own, so they do the smart thing: they seek out the views of experts whom they trust to help them figure out what the evidence is. But the experts they are most likely to trust, not surprisingly, are the ones who share their values.

What makes me feel bleak about the prospects of reason isn’t anything we find in our studies; it is how often risk communicators fail to recruit culturally diverse messengers when they are trying to communicate sound science.

I refuse to accept that they can’t do better!

Part 2 here.

References:

Jones, M.D. & McBeth, M.K. A Narrative Policy Framework: Clear Enough to Be Wrong? Policy Studies Journal 38, 329-353 (2010).

Kahan, D. (2010). Fixing the Communications Failure. Nature, 463, 296-297.

Kahan, D. M., Braman, D., Cohen, G. L., Gastil, J., & Slovic, P. (2010). Who Fears the HPV Vaccine, Who Doesn't, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law & Human Behavior, 34, 501-16.

Kahan, D. M., Braman, D., Slovic, P., Gastil, J., & Cohen, G. (2009). Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology, 4, 87-91.

Kahan, D. M., Slovic, P., Braman, D., & Gastil, J. (2006). Fear of Democracy: A Cultural Critique of Sunstein on Risk. Harvard Law Review, 119, 1071-1109.

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) (Springer London, 2012).

Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C., & Braman, D., Geoengineering and the Science Communication Environment: A Cross-cultural Study, CCP Working Paper No. 92, Jan. 9, 2012.

Sherman, D.K. & Cohen, G.L. Accepting threatening information: Self-affirmation and the reduction of defensive biases. Current Directions in Psychological Science 11, 119-123 (2002).

Sherman, D.K. & Cohen, G.L. The psychology of self-defense: Self-affirmation theory. in Advances in Experimental Social Psychology, Vol. 38 (ed. Zanna, M.P.) 183-242 (2006).

 

Saturday
Jan 14, 2012

Handbook of Risk Theory

Really really great anthology:

Roeser, S., Hillerbrand, R., Sandin, P. & Peterson, M. Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk, (Springer London, Limited, 2012).

It's edited by Sabine Roeser, who herself has done great work to integrate the empirical study of emotion and risk with a sophisticated philosophical appreciation of their significance.

Too bad the set costs so darn much! Guess Springer figures only university libraries will want to buy it (wrong!), but even they aren't made of cash!

Wednesday
Jan 11, 2012

Answer to Andy Revkin about Murray Gell-Mann

Andy Revkin did a cool interview of Nobel Prize-winning physicist Murray Gell-Mann, who thinks people are dumb b/c they don't get climate change.

Andy's post asks (in title): Can Better Communication of Climate Science Cut Climate Risks?

My response to Andy's question:

Answer is no & yes.

No, if "better communication of science" means simply improving how the content of sound scientific information is specified & transmitted.

Yes, if "better communication" means creating a deliberative environment that is protected from the culturally partisan cues that have poisoned the discussion of climate change.

Consider:

1. the most science literate citizens in the U.S. are the most culturally divided on climate change; and

2. a dude who hasn't finished high school is 50% likely to answer "yes" if asked whether antibiotics kill viruses (an NSF science literacy question) but has no problem whatsoever figuring out that he should go to a Dr. when he has strep throat & take the pills that she prescribes for him.

People are really super amazingly good at figuring out who the experts are and following their advice. That skill doesn't depend on their having expert knowledge or having that knowledge "communicated" to them in terms that would allow them to get the science. But it can't work in a toxic communication environment.

 Corollaries:

 1. The climate change problem doesn't have anything to do with how scientists communicate. It has everything to do with how cultural elites talk about science.

2. It doesn't matter that Gell-Mann is innocent of the science of science communication. It is a mistake to think that that has anything to do with the problem. It would be nice if he understood the science of science communication in the same way that it would be nice for citizens to know the science behind antibiotics: it's intrinsically interesting but not essential to what they do -- as long as they follow the relevant experts' advice when they are sick, aren't doing quantum physics, etc.

 

 p.s. Can you please interview Freeman Dyson, too?

Wednesday
Jan 11, 2012

Cultural cognitive reality monitoring

My Yale colleague Marcia Johnson in the psych dept has written some really cool papers on "cultural reality monitoring" (abstracts & links below). The basic idea is that institutions perform for members of a group a cognitive certification/validation role with respect to perceptions, beliefs, memories and like mental phenomena, much akin to the certification/validation role that certain parts of the brain play for an individual. There's an element of analogy here, but also an element of identity: the cognitive processes that individuals use to "monitor reality" are in fact oriented by the functioning of the institutions.

There are a lot of parallels between Johnson's work and Mary Douglas's. But unlike Douglas (see How Institutions Think, in particular), Johnson is trying to cash out the idea of "what we see is who we are" with a set of individual-level psychological mechanisms, not a "functionalist" theory that sees collectives as agents.

By "psychologizing" cultural theory (here I'm scripting Johnson into a role that she doesn't explicitly present herself as filling; but I am pretty sure she wouldn't object!), Johnson does something very helpful for it: she supplies cultural theory with some creditable behavioral mechanisms, ones that hang together conceptually, have points of contact with a wide variety of (to some extent parallel, and to some extent competing) empirical programs in the social sciences, and are suggestive of and amenable to lots of meaningful empirical testing.

At the same time, by "culturizing" psychology, Johnson does something very useful for it: she furnishes it with a plausible (and again testable) account of the source of individual differences, one that explains how the single set of mechanisms known to psychology can generate systematic divergence between members of different social groups. (It's a lot more complicated, I'm afraid, than "slow" & "fast" ....)

Johnson's work thus helps to bridge Douglas's cultural theory of risk and Slovic's psychometric one, the two major theories of risk perception of the 20th & 21st centuries.

Johnson, M.K. Individual and Cultural Reality Monitoring. The ANNALS of the American Academy of Political and Social Science 560, 179-193 (1998)

What is the relationship between our perceptions, memories, knowledge, beliefs, and expectations, on one hand, and reality, on the other? Studies of individual cognition show that distortions may occur as a by-product of normal reality-monitoring processes. Characterizing the conditions that increase and decrease such distortions has implications for understanding, for example, the nature of autobiographical memory, the potential suggestibility of child and adult eyewitnesses, and recent controversies about the recovery of repressed memories. Confabulations and delusions associated with brain damage, along with data from neuroimaging studies, indicate that the frontal regions of the brain are critical in normal reality monitoring. The author argues that reality monitoring is fundamental not only to individual cognition but also to social/cultural cognition. Social/cultural reality monitoring depends on institutions, such as the press and the courts, that function as our cultural frontal lobes. Where does normal social/cultural error in reality monitoring end and social/cultural pathology begin?

 

Johnson, M.K. Reality monitoring and the media. Applied Cognitive Psychology 21, 981-993 (2007).

The study of reality monitoring is concerned with the factors and processes that influence the veridicality of memories and knowledge, and the reasonableness of beliefs. In thinking about the mass media and reality monitoring, there are intriguing and challenging issues at multiple levels of analysis. At the individual level, we can ask how the media influence individuals' memories, knowledge and beliefs, and what determines whether individuals are able to identify and mitigate or benefit from the media's effects. At the institutional level, we can ask about the factors that determine the veridicality of the information presented, for example, the institutional procedures and criteria used for assessing and controlling the quality of the products produced. At the inter-institutional level we can consider the role that the media play in monitoring the products and actions of other institutions (e.g. government) and, in turn, how other institutions monitor the media. Interaction across these levels is also important, for example, how does individuals' trust in, or cynicism about, the media's institutional reality monitoring mechanisms affect how individuals process the media and, in turn, how the media engages in intra- and inter-institutional reality monitoring. The media are interesting not only as an important source of individuals' cognitions and emotions, but for the key role the media play in a critical web of social/cultural reality monitoring mechanisms.

 

Tuesday
Jan 10, 2012

More on ideological symmetry of motivated reasoning (but is that really what's important?)

I have posted a couple times (here & here) on the "symmetry" question -- whether dynamics of motivated reasoning generate biased information processing uniformly (more or less) across cultural or ideological styles or are instead confined to one (conservatism or hierarchy-individualism), as proponents of the "asymmetry thesis" argue.

Chris Mooney has applied himself to the symmetry question with incredible intensity and has an important book coming out that marshals all the evidence he can find (on both sides) and concludes that the asymmetry thesis is right. But Mooney has now concluded that he sees the latest CCP study on "geoengineering and the science communication environment" as evidence against his position (not a reason to abandon it, of course; that's not how science works -- one simply adds what one determines to be valid study findings to the appropriate side of the scale, which continues to weigh the competing considerations in perpetuity).

Mooney's assessment -- and his public announcement of it -- speak well of his own open-mindedness and ability to check the influence of his own ideological commitments on his assessments of evidence. But still, I think he has far less reason than he makes out to be disappointed by our results.

In our study, we tested the hypothesis that exposing subjects (from US & UK) to information on geoengineering would reduce cultural polarization over the validity of a climate change study (one that was in fact based on real studies published in Nature and PNAS).  

We predicted that polarization would be reduced among such subjects relative to ones exposed to a frame that emphasized stricter carbon-emission controls. Restricting emissions accentuates the conflicting cultural resonances of climate change: it gratifies egalitarian communitarians' hostility to commerce & industry and threatens hierarchical individualists' commitment to the same. Geoengineering, in contrast, offers a solution that affirms hierarchical individualists' pro-technology sensibilities and thus eases the defensive pressure on them to resist considering evidence that climate change is happening & is a serious risk.

The experiment corroborated the hypothesis: in the geoengineering group, cultural polarization was significantly less than in the emission-control group.

The reason that Mooney sees this result as evidence against the "asymmetry" thesis is that assignment to the geoengineering condition in the experiment affected the views of both egalitarian communitarians and hierarchical individualists. Hierarchical individualists in the geoengineering condition viewed the study as more valid than their counterparts in the emission-control condition, while egalitarian communitarians viewed it as less valid than theirs. In other words, there was less polarization because both groups moved toward the mean -- not because hierarchical individualists alone moderated their views.

Okay. I guess that's right. But for reasons stated in one of my earlier posts, I don't think that the study really adds much weight to either side of the scale being used to evaluate the symmetry question. 

As I explained, to test the asymmetry thesis, studies need to be carefully designed to reflect the various competing theories that give us reason to expect either symmetry or asymmetry in motivated reasoning. Studies designed that way will yield evidence that is unambiguously consistent with one inference (symmetry) or the other (asymmetry).

Our study wasn't designed to do that; it was designed to test a theory that predicted that appropriately crafting the cultural meaning conveyed by sound science could mitigate cultural polarization over it. The study generated evidence in support of that theory. But because the design didn't reflect competing predictions about how the effect of the experimental treatment would be distributed across the range of our culture measures, the way that the effect happened to be distributed (more or less uniformly) doesn't rule out the possibility that there really is an important asymmetry in motivated reasoning.

I think the same is true, moreover, for the vast majority of studies on ideology and motivated reasoning (maybe all; but Mooney, who has done an exhaustive survey, no doubt knows better than I if this is so): their designs aren’t really geared to generating results that would unambiguously support only one inference in the asymmetry debate.

In the case of our (CCP) studies, at least, there's a reason for this: we don't really see "who is more biased" to be the point of studying these processes. 

Rather, the point is to understand why democratic deliberations over policy-relevant science sometimes (not always!) generate cultural division and what can be done to mitigate this state of affairs, which is clearly inimical, in itself, to the interest of a democratic society in making the best use it can of the best available evidence on how to promote its citizens' wellbeing.

That was the point of the geoengineering study. What it showed -- much more clearly than anything that bears on the ideological symmetry of motivated reasoning -- is that there are ways to improve the quality of the science communication environment so that citizens of diverse values are less likely to end up impelled in opposing directions when they consider common evidence.

For reasons I have stated, I am in fact skeptical about the asymmetry thesis. Of course, I'm open to whatever the evidence might show, and am eager in particular to consider carefully the case Mooney makes in his forthcoming book.

But at the end of the day, I myself am much more interested in the question of how to improve the quality of science communication in democracy.  When there is evidence that appears to speak to that question, then I think it is more important to figure out exactly what answer it is giving, and how much weight we should afford it, than to try to figure out what it might have to say about "who is more biased."

 

Monday
Jan092012

New CCP geoengineering study

New study/paper, hot off the press:

 

Geoengineering and the Science Communication Environment: A Cross-Cultural Experiment

Abstract
We conducted a two-nation study (United States, n = 1500; England, n = 1500) to test a novel theory of science communication. The cultural cognition thesis posits that individuals make extensive reliance on cultural meanings in forming perceptions of risk. The logic of the cultural cognition thesis suggests the potential value of a distinctive two-channel science communication strategy that combines information content (“Channel 1”) with cultural meanings (“Channel 2”) selected to promote open-minded assessment of information across diverse communities. In the study, scientific information content on climate change was held constant while the cultural meaning of that information was experimentally manipulated. Consistent with the study hypotheses, we found that making citizens aware of the potential contribution of geoengineering as a supplement to restriction of CO2 emissions helps to offset cultural polarization over the validity of climate-change science. We also tested the hypothesis, derived from competing models of science communication, that exposure to information on geoengineering would provoke discounting of climate-change risks generally. Contrary to this hypothesis, we found that subjects exposed to information about geoengineering were slightly more concerned about climate change risks than those assigned to a control condition.

Thursday
Jan052012

much scarier than nanotechnology

someone should warn people -- maybe with a contest for an appropriate X-free zone logo.

 

Wednesday
Jan042012

question on feedback between cultural affinity & credibility

John Timmer writes:


Greetings -
I've read a number of your papers regarding how people's cultural biases influence their perception of expertise.  I was wondering if you were aware of any research on the converse of this process – where people read material from a single expert and, in the absence of any further evidence, infer their cultural affinities. I'm intrigued by the prospect of a self-reinforcing cycle, where readers infer cultural affinity based on objective information (i.e., acceptance of the science of climate change), and then interpret further writing through that inferred affinity.
Any information or thoughts you could provide on this topic would be appreciated.
Thanks,
John

Am hoping others might have better answers than mine -- if so, please post them! -- but here is what I said:

Hi, John. Interesting. Don't know of any.

Some conjectures:
a. I would die of shock if there weren't a good number of studies out there, particularly in political science, looking at how position-taking creates a kind of credibility aura or spillover or persuasiveness capital etc -- & at how durable it is.
b. There is probably some stuff out there on how citizens simultaneously update their beliefs when they get expert opinions & update their views on experts' knowledge & credibility as they get information from those experts that contradicts their beliefs. Pretty tricky to figure out the right way to do that even from a "rational decisionmaking" point of view! 
I wish I could say, oh, "read this, this & this" -- but I haven't seen these things specifically or if I have I didn't make note of them. But there's so much stuff on confirmation bias, bayesian updating, & source credibility that it is just inconceivable that these issues haven't been looked at. If I see something (likely now I'll take note), I'll let you know.
c. There's lots of stuff on in-group affinities & credibility & persuasion. Our stuff is like that. But I *doubt* that the interaction of this w/ a & b -- & the contribution of this feedback effect to generating conflict over things like societal risks -- has been examined. That's exactly what you're interested in, of course. But I'd start w/ a & b & see what I found!
--Dan

 

 

Saturday
Dec312011

Industrial strength risk perception measure

In my last post, I presented some data that displayed how public perceptions of risk vary across putative hazards and how perceptions of each of those risks varies between cultural subgroups.  

 The risk perceptions were measured by asking respondents to indicate on “a scale of 0-10 with 0 being ‘no risk at all’ and 10 meaning ‘extreme risk,’ how much risk [you] would ... say XXX poses to human health, safety, or prosperity.”

I call this the “Industrial Strength Measure” (ISM) of risk. We use it quite frequently in our studies, and people quite frequently ask me (in talks, in particular) to explain its validity — a perfectly good question, given how general the item is.

The nub of the answer is that there is very good reason to expect subjects’ responses to this item to correlate very highly with just about any more specific question one might pose to members of the public about a particular risk.

The inset to the right, e.g., shows that responses to ISM as applied to “climate change” correlate between 0.75 & 0.87 with responses (of participants in the survey featured in the last post) to more specific items that relate to beliefs about whether global temperatures are increasing, whether human activity is responsible for any such temperature rise, and whether there will be “bad consequences for human beings” if “steps are not taken to counteract” global warming. (The ISM is "GWRISK" in the correlation matrix.)

As reflected in the inset, too, the items as a group can be aggregated into a very reliable scale (one with a “Cronbach’s alpha” of 0.95 — the highest possible score is 1.0, and anything over 0.70 is usually considered “good”).
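
For readers who like to see the arithmetic, here is a minimal sketch (in Python, using simulated item responses rather than our actual survey data — the numbers are invented for illustration) of how the inter-item correlations and Cronbach's alpha for a set of risk items get computed:

    # Sketch only: simulated stand-ins for ISM plus three specific climate-risk items.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000                                   # hypothetical respondents
    latent = rng.normal(size=n)                # unobserved risk disposition
    # four noisy indicators of the same disposition
    items = np.column_stack([latent + rng.normal(scale=0.5, size=n)
                             for _ in range(4)])

    # inter-item correlation matrix (analogue of the one in the inset)
    print(np.corrcoef(items, rowvar=False).round(2))

    def cronbach_alpha(x):
        # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
        k = x.shape[1]
        return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                              / x.sum(axis=1).var(ddof=1))

    print(round(cronbach_alpha(items), 2))     # comes out high (around 0.9)

With item errors this modest, the simulated items correlate with one another at roughly the levels shown in the inset and hang together as a reliable scale.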

That means, psychometrically, that the subjects' responses can be viewed as indicators of a single disposition — here, the disposition to credit or discredit climate change risks. One is warranted in treating the individual items as alternative indirect measures of that disposition, which itself is "latent" or unobserved.

None is a perfect measure of that disposition; they are all "noisy"--all subject to some imprecision that is essentially random.  

But when one combines such items into a composite scale, one gets a more discerning measure of the unobserved or latent variable (at least so long as the items' errors are largely independent). What the items measure in common adds up, and their random noise tends to cancel out!
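
If the "noise cancels out" claim sounds like magic, a tiny simulation makes it concrete. This is only a sketch with invented numbers, and it assumes -- as the argument does -- that each item's error is independent of the others':

    # Sketch: why a composite beats any single item, assuming independent item errors.
    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 1000, 4
    latent = rng.normal(size=n)                    # the unobserved disposition
    errors = rng.normal(scale=0.5, size=(n, k))    # independent noise, one column per item
    items = latent[:, None] + errors               # each item = signal + noise

    composite = items.mean(axis=1)                 # simple composite scale

    print(round(np.corrcoef(items[:, 0], latent)[0, 1], 2))  # single item vs. latent
    print(round(np.corrcoef(composite, latent)[0, 1], 2))    # composite vs. latent: higher

Averaging the four items shrinks the error variance to a quarter of a single item's, so the composite tracks the latent disposition more closely than any one item does.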

What goes for climate change, moreover, tends to go for all manner of risk. At the end of the post is a short annotated bibliography of articles showing that ISM correlates with more specific indicators that can be combined into valid scales for measuring particular risk perceptions.

There are two upshots of this, one theoretical and the other practical.

The theoretical upshot is that one should be wary of treating various items that have the same basic relation or valence toward a risk as being meaningfully different from each other. Risk items like these are all picking up on a general disposition--an affective “yay” or “boo” almost. If you try to draw inferences based on subtle differences in the responses people are giving to differently worded items that reflect the same pro- or con- attitude, you are likely just parsing noise.

The second, practical upshot is that one can pretty much rely on any member of a composite scale as one's measure of a risk perception. All the members of such a scale are measuring the “same thing.” 

No one of them will measure it as well as a composite scale. So if you can, ask a bunch of related questions and aggregate the responses.

But if you can’t do that — because, say, you don’t have the space in your survey or study to do it — then you can go ahead and use, e.g., the ISM, which tends to be a very well behaved member of any reliable scale of this sort.

ISM isn't as discerning as a reliable composite scale, one that combines multiple items. It will be noisier than you’d like. But it is valid -- a true reflection of the latent risk disposition -- and unbiased (it will vary in the same direction as the full scale would).

A related point is that about the only thing one can meaningfully do with either a composite scale or a single measure like ISM is assess variance.

The actual responses to such items don't have much meaning in themselves; it's goofy to get caught up on why the mean on ISM is 5.7 rather than 7.1, or whether people "strongly agree" or only "slightly agree" that the earth is heating up, etc.

But one can examine patterns in the responses that different groups of people give, and in that way test hypotheses or otherwise learn something about how the latent attitude toward the risk or risks in question is being shaped by social influences.

That is, regardless of the mean on ISM, if egalitarian communitarians are 1 standard deviation above that mean & hierarchical individualists 1 standard deviation below it, then you can be confident that people like that really differ with respect to the latent disposition toward climate change risks that ISM is measuring.
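
For anyone who wants to see the mechanics, here's a rough sketch of that kind of comparison (the scores and group labels below are hypothetical, not the CCP data): standardize the ISM responses, then look at where each cultural group sits, in standard-deviation units, relative to the overall mean.

    # Sketch: group differences in standard-deviation units (hypothetical data).
    import numpy as np

    scores = np.array([8, 9, 7, 9, 8, 2, 3, 1, 2, 3], dtype=float)  # 0-10 ISM responses
    group = np.array(["EC"] * 5 + ["HI"] * 5)                       # cultural group labels

    z = (scores - scores.mean()) / scores.std(ddof=1)               # z-score the ISM

    for g in ("EC", "HI"):
        print(g, round(z[group == g].mean(), 2))  # group mean in SD units from grand mean

With numbers like these the two groups land roughly one standard deviation on either side of the grand mean -- which is the pattern that, in real data, would mark a risk as culturally polarized.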

That’s what I did with the data in my last post: I used ISM to look at variance across risks for the general public, and variance between cultural groups with respect to those same risks. 

See how much fun this can be?!

References:

  • Dohmen, T., et al. Individual risk attitudes: Measurement, determinants, and behavioral consequences. Journal of the European Economic Association 9, 522-550 (2011). Finds that a "general risk question" (the industrial-strength 0-10 item) reliably predicts more specific risk appraisals, & behavior, in a variety of domains & is a valid & economical way to test for individual differences.
  • Ganzach, Y., Ellis, S., Pazy, A. & Ricci-Siag, T. On the perception and operationalization of risk perception. Judgment and Decision Making 3, 317-324 (2008). Finds that the "single item measure of risk perception" used in the risk-perception literature (the industrial-strength "how risky" Likert item) better captures perceived risk of financial prospects & links the finding to Slovic et al.'s "affect heuristic" in risk perception studies.
  • Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004). Reports various study findings that support the conclusion that members of the public tend to conform more specific beliefs about putative risk sources to a global affective appraisal.
  • Weber, E.U., Blais, A.-R. & Betz, N.E. A Domain-specific Risk-attitude Scale: Measuring Risk Perceptions and Risk Behaviors. Journal of Behavioral Decision Making 15, 263-290 (2002). Reports findings that validate an industrial-strength-style measure ("how risky you perceive each situation" on a 5-pt "Not at all" to "Extremely risky" Likert item) for health/safety risks & finds that it predicts both perceived benefit & risk-taking behavior with respect to particular putative risks; also links the finding to Slovic et al.'s "affect heuristic."

Friday
Dec302011

U.S. risk-perception/polarization snapshot

The graphic below & to the right (click for bigger view) reports perceptions of risk as measured in a U.S. general population survey last summer.  The panel on the left reports sample-wide means; the one on the right, means by subpopulation identified by its cultural worldview. 

By comparing, one can see how culturally polarized the U.S. population is (or isn’t) on various risks ranked (in descending order) in terms of their population-wide level of importance.

Some things to note:

  • Climate change (GWRISK) and private hand gun possession (GUNRISK) seem relatively low in overall importance but are highly polarized. This helps to illustrate that the political controversy surrounding a risk issue is driven much more by cultural polarization than by the population-wide level of concern.
  • Emerging technologies: Synthetic biology (SYNBIO) and nanotechnology (NANO) are relatively low in importance and, even more critically, free of cultural polarization. This means they are pretty inert, conflict-wise. For now.
  • Vaccines, schmaccines. Childhood vaccination risk (VACCINES) is lowest in perceived importance and has essentially zero cultural variance. This issue gets a lot of media hype in relation to its seeming importance.
  • Holy s*** on distribution of illegal drugs (DRUGS)! Scarier than terrorism (!) and not even that polarized. (This nation won’t elect Ron Paul President.)
  • Look at speech advocating racial violence (HATESPEECH). Huh!
  • Marijuana distribution (MARYJRISK) and teen pregnancies (TEENPREG) feature hierarch-communitarian vs. egalitarian-individualist conflict. Not surprising.

Coming soon: cross-cultural cultural cognition! A comparison of US & UK.

Tuesday
Dec272011

Sood & Darley's "plasticity of harm"

Last semester I taught a seminar at Harvard Law School on “law and cognition.” Readings consisted of some 50 papers, most of which featured empirical studies of legal decisionmaking. I will now & again describe some of them.

One of the most interesting was “The Plasticity of Harm in the Service of Punishment Goals: Legal Implications of Outcome-Driven Reasoning, ” 100 Calif. L. Rev. (forthcoming 2012), by Avani Sood—whom I convinced to attend the seminar session in which we discussed it—and John Darley (a legendary social psychologist who now does a lot of empirical legal studies).

The paper contains a set of experiments in which subjects are shown to impute “harm” more readily to behavior when it offends their moral values than when it doesn’t.  This dynamic, which reflects a form of motivated reasoning, subverts legal doctrines rooted in the liberal “harm principle”—which prohibits punishment of behavior that people find offensive but that doesn’t cause harm.

 I liked this paper a lot the first time I read it—as an early draft presented at the 2010 Conference on Empirical Legal Studies—but was all the more impressed this time by a new study S&D had added. In that study, S&D examined whether subjects’ perceptions of harm were sensitive to the message of a  political protestor who was alleged to have “harmed” bystanders by demonstrating in the nude.

S&D first conducted a “between subjects” version of the design in which one half the subjects were told that the protestor was expressing an “anti-abortion” message and the other half that the protestor was expressing a “pro-abortion” one. S&D found that subjects more readily perceived harm, and favored a more severe sanction, when the protestor’s message defied the subjects’ own positions on abortion.

That was in itself a nice result (it extended other studies in the paper by showing that diverse moral or ideological attitudes could generate systematic disagreements in perceptions of harm), but the best part was a follow-up, within-subject version of the same design, in which all subjects assessed both pro- and anti-abortion protestors. Subjects now rated the behavior of both protestors—the one whose message matched their own position and the one whose message didn’t—as equally harmful and deserving of equally severe punishments.

The result was valuable for S&D because it addressed a potential objection to the paper: that subjects in their various studies didn’t understand that offense to their (or others’) moral sensibilities doesn’t count as a “harm” for purposes of the law. If that had been so, then the results in the within-subject design presumably would have reflected the same correspondence between protestor message and subject ideology as the results in the between-subjects design. The difference suggested that the subjects who had evaluated only one protestor at a time had been unconsciously influenced by their own ideology to see harm conditional on their opposition to the protestor’s message.

This result in fact made me feel better about some of the cultural cognition studies that I and my collaborators have done. In a number of papers, we have been exploring the phenomenon of “cognitive illiberalism,” which for us refers exactly to the vulnerability of citizens to a form of motivated reasoning that subverts their commitment to liberal principles of neutrality in the law.

One of the possible objections to our studies was that we were assuming such a commitment—when in fact our subjects could have been consciously indulging partisan sensibilities in assessing “facts” like whether a fleeing driver had exposed pedestrians to a “substantial risk of death” or a political demonstrator had “shoved” or “blocked” onlookers. I think we had reason to discount this possibility before. But based on S&D’s result, we now have a lot more!

I also really like the S&D result because of what it suggests about the prospects & even the mechanics of “debiasing” in this setting. The disparity between their between-subjects and within-subject designs demonstrated not only that their subjects’ conscious commitment to liberal principles was being betrayed by the sensitivity of their perceptions to their ideologies. It suggested, too, that making subjects conscious of the risk of this sort of defeat could equip them to overcome it.

One might be tempted to think that all one has to do is tell citizens to “consider the opposite” if one wants to counteract culturally or ideologically motivated reasoning.  Sadly, I don’t think things are that simple, at least outside the lab. But that’s a story for another time.