
Recent blog entries
Friday
Mar 16, 2012

Cool book: Harré, Psychology for a Better World

Came across this cool book on using psychology to promote environment-friendly behavior. 

Some of the things that make it cool:

1. It presents a behaviorally realistic synthesis of social norms, emotions, & reciprocity, on the one hand, and mechanisms of risk perception/cognition, on the other.

2. It strikes a nice balance between exposition/analysis and programmatic advice.

3. It is well written & draws on lots of interesting sources.

4. The author is distributing a .pdf version for free -- a gesture that provokes a motive to reciprocate by producing and sharing knowledge in turn (a big theme of the book is the potential of pro-social behavior to reproduce itself by furnishing an inspiring model).

Thursday
Mar 15, 2012

Scientists of science communication Profile #3: Ellen Peters

This is the 2d installment in this series (actually, I'm negotiating w/ several companies that saw the last post & want to produce "Scientists of science communication trading cards"!)

3. Ellen Peters.

Peters, a social psychologist at the Ohio State University, is a leading scholar of risk perception. A(nother) student of Paul Slovic, Peters's specialty (I'd say) is detecting how diverse cognitive mechanisms relate to one another.  E.g., she has done important studies establishing that "affect"--itself (Slovic and others show) a central element of myriad risk-perception heuristics--is a mediator of cultural worldviews, which determine the valence (positive or negative) of affective responses, thereby generating individual differences in risk perception.

Recently, Peters has been engaged in pathbreaking work on numeracy, which refers to the capacity (disposition, really) to make sense of quantitative information and engage in quantitative reasoning. The important -- indeed, startling -- insight of her work there is that numeracy and affect are complementary mental processes. That is, affect, rather than being a heuristic substitute for numeracy, is in fact a perceptive faculty calibrated by, and integral to the employment of, quantitative reasoning. High-numeracy individuals, her experiments show, do not rely on affect less than low-numeracy ones but rather experience it in a more reliably discerning fashion when evaluating the expected value of opportunities for gain and loss. Numeracy, it would appear, effectively "trains" affect, which thereafter operates as an efficient scout, telling a person when he or she should engage in more effortful quantitative processing; people low in numeracy are distinguished not by greater reliance on affect, but by inchoate, confused affect.

This is a very different picture, I'd say, from the (now) dominant "system 1/system 2" conception of dual process reasoning. That framework envisions a discrete and hierarchical relationship between unconscious, affective forms of reasoning (System 1) and conscious, algorithmic ones (System 2). Peters's work, in contrast, suggests that affect and numeracy are integrated and reciprocal--that each operates on the other and that together they make complementary contributions to sound decisionmaking.

Interestingly, though, people with high numeracy can also experience distinctive kinds of bias. E.g., they will rate transactions that offer a high probability of substantial gain versus a low probability of a small loss as more attractive than transactions that offer a high probability of substantial gain versus a small probability of an outcome involving no change (positive or negative) in welfare. The reason is that the contrast between a high probability of gain and a small probability of loss is more affectively arousing than the contrast between a high probability of gain and nothing. But you actually have to be pretty good with numbers to receive this false affective signal! In other words, there are some kinds of attractive but specious inferences that presuppose fairly high quantitative reasoning capacity.
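To make the structure of that finding concrete, here is a minimal worked example. The payoffs and probabilities are hypothetical figures of my own, not numbers from Peters's studies; the point is only that the gamble paired with the small loss has a strictly lower expected value, yet supplies the affective contrast described above.

```python
# Hypothetical gambles illustrating the structure described above
# (figures invented for illustration; not taken from Peters's experiments).

def expected_value(outcomes):
    """Expected value of a gamble given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Gamble A: high probability of a substantial gain vs. a small chance of a tiny loss.
gamble_a = [(0.8, 9.00), (0.2, -0.05)]

# Gamble B: the same chance of the same gain vs. a small chance of no change at all.
gamble_b = [(0.8, 9.00), (0.2, 0.00)]

print(expected_value(gamble_a))  # ~7.19 -- strictly worse ...
print(expected_value(gamble_b))  # ~7.20 -- ... than this one

# Yet subjects tend to rate Gamble A as more attractive: the $9 gain feels
# vivid by contrast with the 5-cent loss -- a contrast one has to be numerate
# enough to register in the first place.
```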

Some key readings:

1. Peters, E. The Functions of Affect in the Construction of Preferences. in The Construction of Preference (eds. Lichtenstein, S. & Slovic, P.) 454-463 (Cambridge University Press, Cambridge; New York, 2006).

2. Peters, E., Dieckmann, N., Västfjäll, D., Mertz, C.K. & Slovic, P. Bringing meaning to numbers: The impact of evaluative categories on decisions. Journal of Experimental Psychology: Applied 15, 213-227 (2009).
 
3. Peters, E. & Levin, I.P. Dissecting the risky-choice framing effect: Numeracy as an individual-difference factor in weighting risky and riskless options. Judgment and Decision Making 3, 435-448 (2008).

4. Peters, E., Slovic, P. & Gregory, R. The role of affect in the WTA/WTP disparity. Journal of Behavioral Decision Making 16, 309-330 (2003).

5. Peters, E., et al. Intuitive numbers guide decisions. Judgment and Decision Making 3, 619-635 (2008).

6. Peters, E., et al. Numeracy and Decision Making. Psychol Sci 17, 407-413 (2006).

7. Peters, E.M., Burraston, B. & Mertz, C.K. An Emotion-Based Model of Risk Perception and Stigma Susceptibility: Cognitive Appraisals of Emotion, Affective Reactivity, Worldviews, and Risk Perceptions in the Generation of Technological Stigma. Risk Analysis 24, 1349-1367 (2004).

8. Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004).

9. Slovic, P. & Peters, E. The importance of worldviews in risk perception. Risk Decision and Policy 3, 165-170 (1998).

10. Peters, E. & Slovic, P. Affective asynchrony and the measurement of the affective attitude component. Cognition & Emotion 21, 300-329 (2007).

Friday
Mar 9, 2012

Cognitive illiberalism: anatomy of a bias

That's the title of a talk I gave today at Arizona State Law School & yesterday at the University of Arizona Law School.

The talk, which I gave to faculty-workshop audiences who had read They Saw a Protest, first offers an analytically precise account of how cultural cognition can defeat Bayesian updating. It then identifies how this form of biased cognition generates "cognitive illiberalism" -- a legal and political decisionmaking bias that poses the same threat to constitutional freedoms as consciously illiberal forms of state action.
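Since the paper isn't written up yet, here is a toy rendering of the core mechanism -- my own sketch, not the talk's formal model. The premise: if the likelihood ratio a person assigns to evidence is conformed to cultural identity rather than fixed by the evidence itself, then two people applying Bayes's rule correctly to identical information will polarize rather than converge.

```python
# Toy model (mine, not the talk's) of identity-protective cognition defeating
# Bayesian updating: both agents apply Bayes's rule correctly, but the
# likelihood ratio each assigns to the evidence depends on whether the
# evidence is congenial to the agent's cultural predisposition.

def update(prior_odds, likelihood_ratio):
    """One round of Bayesian updating, in odds form."""
    return prior_odds * likelihood_ratio

def motivated_lr(congenial):
    # Assumed asymmetry: congenial evidence is credited (LR > 1);
    # identity-threatening evidence is discounted (LR < 1).
    return 2.0 if congenial else 0.5

odds_egal = odds_hier = 1.0      # both groups start at even (1:1) odds

for _ in range(5):               # five rounds of the *same* pro-risk evidence
    odds_egal = update(odds_egal, motivated_lr(congenial=True))
    odds_hier = update(odds_hier, motivated_lr(congenial=False))

print(odds_egal)   # 32.0    -> nearly certain the risk is real
print(odds_hier)   # 0.03125 -> nearly certain it isn't
```

A pair of truth-seeking Bayesians shown the same evidence would converge; once the likelihood ratio is conformed to identity, the very same arithmetic produces polarization.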

Probably will write this up as a short paper. For now -- slides here.

Thursday
Mar 8, 2012

Misinformation and climate change conflict

reposted from Talkingclimate.org

I’m going to resist the academic’s instinct to start with a long, abstract discussion of “cultural cognition” and the theory behind it. Instead, I’m going to launch straight into a practical argument based on this line of research. My hope is that the argument will give you a glimpse of the essentials—and an appetite for delving further.

The argument has to do with the contribution that misinformation makes to the dispute over climate change. I want to suggest that the normal account of this is wrong.

The normal account envisions, in effect, that the dispute is fueled by an external force—economic interest groups, say—inundating a credulous public with inaccurate claims about risk.

I would turn this account more or less on its head: the climate change dispute, I want to argue, is fueled by a motivated public whose (unconscious) desire to form certain perceptions of risk makes it possible (and profitable) to misinform them.

As evidence, consider an experiment that my colleagues at the Cultural Cognition Project and I did.

In it, we asked the participants (a representative sample of 1500 U.S. adults) to examine the credentials of three scientists and tell us whether they were “knowledgeable and credible experts” about one or another risk—including climate change, disposal of nuclear wastes, and laws allowing citizens to carry concealed weapons in public. Each of the scientists (they were fictional; we told subjects that after the study) had a Ph.D. in a seemingly relevant field, was on the faculty of an elite university, and was identified as a member of the National Academy of Sciences.

Whether study subjects deemed the featured scientists to be “experts,” it turned out, was strongly predicted by two things: the position we attributed to the scientists (in short book excerpts); and the cultural group membership of the subject making the determination.

Where the featured scientist was depicted as taking what we called the “high risk” position on climate change (it’s happening, is caused by humans, will have bad consequences, etc.), he was readily credited as an “expert” by subjects with egalitarian and communitarian cultural values, a group that generally sees environmental risks as high, but not by subjects with hierarchical and individualistic values, a group that generally sees environmental risks as low. However, the positions of these groups shifted—hierarchical individualists more readily saw the same scientist as an “expert,” while egalitarian communitarians did not—when he was depicted as taking a “low risk” position (climate change is uncertain, models are unreliable, more research necessary).

The same thing, moreover, happened with respect to the scientists who had written books about nuclear power and about gun control: subjects were much more likely to deem the scientist an “expert” when he advanced the risk position that predominated in the subjects’ respective cultural groups than when he took the contrary position.

This result reflects a phenomenon known as “motivated cognition.” People are said to be displaying this bias when they unconsciously fit their understandings of information (whether scientific data, arguments, or even sense impressions) to some goal or end extrinsic to forming an accurate answer.

The interest or goal here was the stake study subjects had in maintaining a sense of connection and solidarity with their cultural groups. Hence, the label cultural cognition, which refers to the tendency of individuals to form perceptions of risk that promote the status of their groups and their own standing within them.

Cultural cognition generates my unconventional “motivated public” model of misinformation. The subjects in our study weren’t pushed around by any external misinformation provider. Furnished the same information, they sorted themselves into the patterns that characterize public divisions we see on climate change.

This kind of self-generated biased sampling—the tendency to count a scientist as an “expert” when he takes the position that fits one’s group values but not otherwise—would over time be capable all by itself of generating a state of radical cultural polarization over what “expert scientific consensus” is on issues like climate change, nuclear power, and gun control.
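A toy simulation makes the dynamic concrete. The crediting probabilities below are assumptions of mine, not estimates from our study; the point is only the mechanism: a perfectly balanced stream of scientists yields opposite perceptions of “expert consensus” once crediting depends on cultural fit.

```python
import random

# Toy simulation (parameters assumed, not estimated) of self-generated biased
# sampling: individuals encounter a 50/50 stream of scientists but credit one
# as an "expert" mostly when the scientist's position fits their group's.

random.seed(1)
P_CREDIT_FIT = 0.9     # chance of crediting a scientist whose position fits
P_CREDIT_MISFIT = 0.2  # chance of crediting one whose position doesn't

def perceived_consensus(group_position, n_encounters=1000):
    """Fraction of *credited* experts who take the 'high risk' position."""
    credited = []
    for _ in range(n_encounters):
        sci_position = random.choice(["high_risk", "low_risk"])  # balanced stream
        fits = (sci_position == group_position)
        if random.random() < (P_CREDIT_FIT if fits else P_CREDIT_MISFIT):
            credited.append(sci_position)
    return sum(s == "high_risk" for s in credited) / len(credited)

print(perceived_consensus("high_risk"))  # ~0.82: "most experts say it's risky"
print(perceived_consensus("low_risk"))   # ~0.18: "most experts say it isn't"
# Identical information; opposite impressions of where expert consensus lies.
```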

In this environment, does the deliberate furnishing of misinformation add anything? Certainly.

But the desire of the public to form culturally congenial beliefs supplies one of the main incentives to furnishing them with misleading information. To protect their cultural identities, individuals more readily seek out information that supports than that challenges the beliefs that predominate in their group. The motivated public’s desire for misinformation thus makes it profitable to become a professional misinformer—whether in the media or in the world of public advocacy.

Other actors will have their own economic interest in furnishing misinformation. How effective their efforts will be, however, will still depend largely on how culturally motivated people are to accept their message. If this weren’t so, the impact of the prodigious efforts of commercial entities to convince people that climate change is a hoax, that nuclear power is safe, and that concealed-carry laws reduce crime would wear away the cultural divisions on these issues.

The reason that individuals with different values are motivated to form opposing positions on these issues is the symbolic association of those issues with competing groups. But that association can be created just as readily by accurate information as by misinformation if authority figures identified with only one group end up playing a disproportionate role in communicating it.

One can’t expect to win an “information war of attrition” in an environment like this. Accurate information will simply bounce off the side that is motivated to resist it.

So am I saying, then, that things are hopeless? No, far from it.

But the only way to devise remedies for these pathologies is to start with an accurate understanding of why they occur. 

The study of cultural cognition shows that the conventional view of misinformation (external source, credulous public) is inaccurate because it fails to appreciate how much more likely misinformation is to occur and to matter when scientific knowledge becomes entangled in antagonistic cultural meanings.

How to free science from such entanglements is something that the study of cultural cognition can help us to figure out too. 

I hope you are now interested in knowing how -- and in just knowing more!

Sources:

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) 725-760 (Springer London, Limited, 2012).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law & Human Behavior 34, 501-516 (2010).

Kahan, D.M. & Braman, D. Cultural Cognition of Public Policy. Yale J. L. & Pub. Pol'y 24, 147-170 (2006).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Saturday
Mar 3, 2012

Does economic self-interest explain climate change skepticism?

Nope.

First, some common sense:

Let's assume self-interest explains the formation of beliefs about climate change by ordinary members of the public (I'm very happy to do that). In that case, we should expect the economic impact of climate change & proposed climate change policies on the public's perception of climate change risks to be 0.00, and the impact of cultural identity to be [some arbitrarily large number].

What the ordinary member of the public believes about climate change won't have any impact on the threat it poses to the environment or on the policies society adopts to repel that threat. The same is true about how he or she votes in democratic elections or behaves as a consumer. As an individual, he or she just isn't consequential enough to matter. 

Accordingly, there is no reason to expect much if any correlation between, say, economic class, etc., and climate change risk perception.

In contrast, what an ordinary individual believes and says about climate change can have a huge impact on her interactions with her peers. If a professor on the faculty of a liberal university in Cambridge, Massachusetts starts saying "climate change is ridiculous," he or she can count on being ostracized and vilified by others in the academic community. If the barber in some town in South Carolina's 4th congressional district insists to his friends & neighbors that they really should believe the NAS on climate change, he will probably find himself twiddling his thumbs rather than cutting hair.

It's in people's self-interest to form beliefs that connect rather than estrange them from those whose good opinion they depend on (economically, emotionally, and otherwise).  As a result, we should expect individuals' cultural outlooks to have a very substantial impact on their climate change risk perceptions.

(For elaboration of this argument, see CCP working paper No. 89, Tragedy of the Risk Perceptions Commons.)

Second, some data:

I have constructed some regression models to examine the impact of household income (hh_income) and cultural worldviews (hfac for hierarchy and ifac for individualism) on climate change risk perceptions (z_GWRISK; for explanation of that measure, see here).  The data come from a nationally representative survey of 1500 US adults conducted by the Cultural Cognition Project with a grant from the National Science Foundation. To see the regression outputs, click on the thumbnail to the right.

The analyses show, first, that income has a very small negative impact on climate change risk perceptions (B = -0.07, p < 0.01) when considered on its own (model 1).

Second, the analyses show that cultural worldviews have a very large impact -- a typical egalitarian communitarian and a typical hierarchical individualist are separated by about 1.6 standard deviations on the risk perception measure -- controlling for income (model 2). When cultural worldviews are controlled for, income turns out to have an effect that is practically nil (B = -0.02, p = 0.56).

But wait: the third thing the analyses show is that income does have a modest effect -- one that is conditional on survey respondents' cultural worldviews. As they become wealthier, egalitarian communitarians become slightly more concerned about climate change, while hierarchical individualists become less concerned (model 3).
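For readers who want to see the structure of the three models, here is a sketch in Python. The variable names follow the post (hh_income, hfac, ifac, z_GWRISK), but the data are simulated stand-ins, so the coefficients it prints are illustrative only -- nothing here reproduces the CCP estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the three model specifications described above, on simulated data.
rng = np.random.default_rng(0)
n = 1500
df = pd.DataFrame({
    "hh_income": rng.standard_normal(n),  # household income (standardized)
    "hfac": rng.standard_normal(n),       # hierarchy worldview score
    "ifac": rng.standard_normal(n),       # individualism worldview score
})
# Simulated outcome: worldview main effects plus an income-x-worldview
# interaction, mimicking the qualitative pattern reported in the post.
df["z_GWRISK"] = (-0.8 * df["hfac"] - 0.8 * df["ifac"]
                  - 0.1 * df["hh_income"] * df["hfac"]
                  + rng.standard_normal(n))

m1 = smf.ols("z_GWRISK ~ hh_income", df).fit()                  # income alone
m2 = smf.ols("z_GWRISK ~ hh_income + hfac + ifac", df).fit()    # + worldviews
m3 = smf.ols("z_GWRISK ~ hh_income * (hfac + ifac)", df).fit()  # interactions

for m in (m1, m2, m3):
    print(m.params.round(2))
```

Model 3's income-by-worldview interaction terms are what capture the conditional effect described above: income pushes egalitarian communitarians and hierarchical individualists in opposite directions.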

Bottom line: economic self-interest doesn't matter; cultural identity self-interest does.

Friday
Mar 2, 2012

Coolest debiasing study (I've) ever (read)

So this is another installment (only the second; the first is here) in my series on cool studies that we read in my fall Law & Cognition seminar at HLS.

This one, Sommers, S.R. On Racial Diversity and Group Decision Making: Identifying Multiple Effects of Racial Composition on Jury Deliberations. Journal of Personality and Social Psychology 90, 597-612 (2006), looked at the impact of the racial composition of a (mock) jury panel on white jurors. Sommers found that white jurors on mixed African-American/white panels were more likely than ones on all-white panels to form pro-defendant fact perceptions and to support acquittal in a case involving an African-American defendant charged with sexual assault of a white victim.

That's plenty interesting -- but the really amazing part is that these effects were not a product of any exchange of views between the white and African-American jurors in deliberations. Rather they were a product of mental operations wholly internal to the white subjects.

There were two sorts of evidence for this conclusion. First, Sommers found that the pre-deliberation verdict preferences of the white subjects on the mixed juries were already more pro-defense than the preferences of those on the all-white juries. Second, during deliberations the white subjects on the mixed juries were more likely to mention pro-defendant evidence spontaneously (that is, on their own, without prompting by the African-American jurors) and less likely to inject mistaken depictions of the evidence into discussion.

In sum, just knowing that they would be deliberating with African-American jurors influenced -- indeed, demonstrably improved the quality of -- the cognition of the white jurors.

How many cool things are going on here? Lots, but here are some that really register with me:

1. The OCTUSW ("Of course--that's unsurprising--so what") response is DOA -- & no ITMBSWILESP, either!

OCTUSW is a predictable, lame response to a lot of cool social science studies. What makes it lame is that it is common to investigate phenomena for which there are plausible competing hypotheses; indeed, the clash of competing plausible hypotheses is often what motivates people to investigate. This is one of the key points in Duncan Watts's great book, Everything Is Obvious (Once You Know the Answer).

But here the result was a real surprise (to me and my students, at least) -- so we can just skip the 10 mins it usually takes to shut the (inevitably pompous & self-important) OCTUSW guy up.

At the same time, the result isn't insane-there-must-be-something-wrong-it's-like-ESP (ITMBSWILESP), either. ITMBSWILESP results can take up 30-40-50 mins & leave everyone completely uncertain whether (if they decide the study is valid, reliable) they've been duped by the researcher or (if they dismiss it out of hand) they've been taken in by their own vulnerability to confirmation bias.

2. Super compelling evidence that unconscious bias is defeating moral commitments of those experiencing it.

The results in this study suggest that the white subjects on the all-white juries were displaying a lower quality of cognitive engagement with the evidence than the whites on the mixed-race juries. Why?

The most straightforward explanation (and the animating conjecture behind the study) was that the racial composition of the jury interacted with unconscious racial bias or "implicit social cognition." Perhaps they were conforming their view of the evidence to priors founded on the correlation between race and criminality or were failing to experience a kind of investment in the interest of the defendant that would have focused their attention more effectively. 

Knowing, in contrast, that they were on a jury with African Americans, and would be discussing the case with them after considering the evidence, jolted the whites on the mixed juries into paying greater attention, likely because of anxiety that mistakes would convey to the African-American subjects that they didn't care very much about the possibility that an African-American was being falsely accused of an interracial sexual assault. Because they paid more attention, they in fact formed a more accurate view of the facts.

But this "debiasing" effect would not have occurred unless the unconscious racial bias it dispelled was contrary to the white subjects' conscious, higher-order commitment to deciding the case impartially.

Obviously, if the white subjects in the study were committed, conscious racists, then those who served on the mixed-race juries would have gotten just as much satisfaction from forming anti-defendant verdict preferences and inaccurate, anti-defendant fact perceptions as ones on the all-white juries.

Likewise, it is not very plausible to think the whites on the mixed-race juries would have been jolted into paying more attention unless they had a genuine commitment to racial impartiality. Otherwise, why would the prospect that they'd be perceived otherwise have been something that triggered an attention-focusing level of anxiety?

The conclusion I draw, then, is that the effect of unconscious bias on the jurors in the all-white juries is something that they themselves would likely have been disappointed by.  They and others in their position would thus concur in, and not resent, the use of procedures that reduce the likelihood that this cognitive dynamic will affect them as they perform that decisionmaking task.

That’s a conclusion, too, that really heartens me.

My own research on “debiasing” cultural cognition rests on the premise that identity-protective cognition (a cousin of implicit social cognition) disappoints normative commitments that ordinary citizens have. If that’s not true--if, in fact, individuals would rather be guided reliably to conclusions that fit the position of “their team” than be right when they are evaluating disputed evidence on issues like climate change and the effectiveness of the HPV vaccine--then what I’m up to is either pointless or (worse) a self-deluded contribution to public manipulation.

So when I see a study like this, I feel a sense of relief as well as hope!

3. The debiasing effect can't be attributed to any sort of "demand effect."

This is a related point. A "demand effect" describes a result that is attributable to the motives of the subjects to please the researcher rather than to the cognitive mechanism that the researcher is trying to test.

One common strategy that sometimes is held forth as counteracting motivated cognition -- explicitly telling subjects to "consider the opposite" -- is very vulnerable to this interpretation. (Indeed, studies that look at the effect of explicit "don't be biased" instructions report highly variable results.)

But here there's really no plausible worry about a "demand effect." The whites on the mixed-race juries couldn't have been "trying harder" to make the researchers happy: they had no idea that their perceptions were being compared to those of subjects on all-white juries, much less that those jurors were failing to engage with the evidence as carefully as anyone might have wanted them to.

4. The effect in this study furnishes a highly suggestive model that can spawn hypotheses and study designs in related areas.

Precisely because it seems unlikely to me that simply admonishing individuals to be "impartial" or "objective" can do much real good, the project to identify devices that trigger effective unconscious counterweights to identity-protective cognition strikes me as of tremendous importance.

We have done a variety of studies of this sort. Mainly they have focused on devices -- e.g., message framings and source credibility -- that neutralize the kinds of culturally threatening meanings that provoke defensive resistance to sound information.

The debiasing effect here involves a different dynamic. Again, as I understand it, the simple awareness that there were African-Americans on their jury activated white jurors' own commitment to equality, thereby leading them to recruit cognitive resources that in fact promoted that commitment.

Generalizing, then, this is to me an example of how effective environmental cues (as it were) can activate unconscious processes that tie cognition more reliably to ends that individuals, at least in the decisionmaking context at hand, value more than partisan group allegiances. 

Seeing the study this way, I now often find myself reflecting on what sorts of cues might have analogous effect in cultural cognition settings.

That's something cool studies predictably do. They not only improve understanding of the phenomena they themselves investigated. They also supply curious people with vivid, generative models that help them to imagine how they might learn, and teach others something, too. 

Thursday
Mar 1, 2012

Is the "culture war" over for guns?

One of the students in my HLS criminal law class drew my (and his classmates') attention to this poll showing that a pretty solid majority (73%) of Americans now oppose banning handguns. What caused this? Did the Supreme Court's 2nd Amendment opinions (Heller and McDonald) change norms? Or induce massive cognitive dissonance avoidance? Or maybe the NRA is behind the new consensus? Or maybe the public finally learned of the scientific consensus that there's no reliable evidence that concealed-carry laws have any impact on crime one way or the other? Is there a model here to follow for ending the culture war on climate change? Or maybe the climate change battle just made people forget this one?

Saturday
Feb 25, 2012

More evidence that good explanations of climate change conflict are not depressing

I explained recently (here & here) why it is a mistake to conclude that cultural cognition implies that trying to resolve the climate change conflict is "futile" (not to mention a fallacious reason for rejecting the evidence that cultural cognition explains the conflict).

Today I came across a great paper that extends the theme "good social science explanations of climate change conflict are not depressing":

Law, Environment, and the 'Non-Dismal' Social Sciences

U of Colorado Law Legal Studies Research Paper No. 12-01 

Boyd, William, Univ. Colorado Law School
Kysar, Douglas A., Yale Law School
Rachlinski, Jeffrey J., Cornell Law School 


Abstract:   Over the past 30 years, the influence of economics over environmental law and policy has expanded considerably. Whereas politicians and commentators once seriously questioned whether tradable emissions permits confer a morally illicit “right to pollute,” today even environmental advocacy organizations speak freely and predominantly in terms of market instruments and economic efficiency when they address climate change and other pressing environmental concerns. This review seeks to counterbalance the expansion of economic reasoning and methodology within environmental law and policy by highlighting insights to be gleaned from various “non-dismal” social sciences. In particular, three areas of inquiry are highlighted as illustrative of interdisciplinary work that might help to complement law and economics and, in some cases, compensate for it: the study of how human individuals perceive, judge, and decide; the observation and interpretation of how knowledge schemes are created, used, and regulated; and the analysis of how states and other actors coordinate through international and global regulatory regimes. The hope is to provide some examples of how environmental law and policy can be improved by deeper and more diverse engagement with social science and to highlight avenues for future research.


Wednesday
Feb 22, 2012

Climate change & the media: what's the story? (Answer: expressive rationality)

Max Boykoff has written a cool book (material from which played a major role in a panel session at the 2012 Ocean Sciences conference) examining media coverage of climate change in the U.S. 

Who Speaks for the Climate? documents in a more rigorous and informative way than anything I've ever read the conservation of "balance" in the media coverage of the climate change debate no matter how lopsided the scientific evidence becomes.

Boykoff's own take -- and that of pretty much everyone I've heard comment on this phenomenon -- is negative: there is something wrong w/ norms of science journalism or the media generally if scientifically weak arguments are given just as much space & otherwise treated just as seriously as strong ones.

I have a slightly different view: "balanced" coverage is evidence of the expressive rationality of public opinion on climate change.

News media don't have complete freedom to cover whatever they want, however they want to. Newspapers and other news-reporting entities are commercial enterprises. To survive, they must cover the stories that people want to read about.

What people want to read are stories containing information relevant to their personal lives. Accordingly, one can expect newspapers to cover the aspect of the "climate change story" that is most consequential for the well-being of their individual readers.

The aspect of the climate change story that's most consequential for ordinary members of the public is that there's a bitter, persistent, culturally polarized debate over it. Knowing that has a much bigger impact on ordinary individuals than knowing what the science is.

Nothing an individual thinks about climate change will affect the level of risk that climate change poses for him or her. That individual's behavior as consumer, voter, public discussant, etc., is just too small to have any impact -- either on how carbon emissions affect the environment or on what governments do in response.

However, the position an individual takes on climate change can have a huge impact on that person's social standing within his or her community.  A university professor in New Haven, CT or Cambridge, Mass. will be derisively laughed at and then shunned if he or she starts marching around campus with a sign saying "climate change is a hoax!" Same goes for someone in a mirror-image hierarchical-individualistic community (say, a tobacco farmer living somewhere in South Carolina's 4th congressional district) who insists to his friends & neighbors, "no, really, I've looked closely at the science -- the ice caps are melting because of what human beings are doing to the environment."

In other words, it's costless for ordinary individuals to take a position that is at odds with climate science, but costly to take one that has a culturally hostile meaning within groups whose support (material, emotional & otherwise) they depend on.

Predictably, then, individuals tend to pay a lot of attention to whatever cues are out there that can help them identify what cultural meanings (if any) a disputed risk or related fact issue conveys, and to expend a lot of cognitive effort (much of it nonconscious) to form beliefs that avoid estranging them from their communities.

Predictably, too, the media, being responsive to market forces, will devote a lot more time and effort to reporting information that is relevant to identifying the cultural meaning of climate change than to information relevant to determining the weight or the details of scientific evidence on this issue.

So my take on Boykoff's evidence is different from his.

But it is still negative.

It might be individually rational for people to fit their perceptions of climate change and other societal risks to the positions that predominate in their communities, but it is nevertheless collectively irrational for them all to form their beliefs this way simultaneously: the more impelled culturally diverse individuals are to form group-congruent beliefs rather than truth-congruent ones, the less likely democratic institutions are to form policies that succeed in securing their common welfare.

The answer, however, isn't to try to change the norms of the media. They will inevitably cover the story that matters to us.

What we need to do, then, is change the story on climate change. We need to create new meanings for climate change that liberate science from the antagonistic ones that now make taking the "wrong" position (any position) tantamount to cultural treason.

Tuesday
Feb 21, 2012

Ocean Science Meeting science communication panel

Here's where I am (or will be in a few hrs).

Plan to say (1) there is a science of science communication; (2) it has assembled a good deal of data on why the public is divided on climate change; (3) what that data show is that the explanation is neither lack of scientific knowledge nor the inability to engage scientific information in a rational or systematic fashion ("system 2" etc); (4) what does explain conflict is motivated reasoning (cultural & otherwise); and (5) dispelling the conflict requires communication strategies that are responsive to this dynamic.

More later!

Monday
Feb 20, 2012

Could geoengineering cool the climate change debate?

Geoengineering (according to the National Academy of Sciences) “refers to deliberate, large-scale manipulations of Earth’s environment designed to offset some of the harmful consequences of [greenhouse-gas induced] climate change.” But what impact might the advent of this emerging technology have on the science-communication environment in which the public makes sense of the evidence for climate change and its significance?

Geoengineering is still very much at the drawing board stage, but the sketches of what it might look like—from solar-reflective nanotechnology flying saucers to floating mist-emitting “cloud whiteners”—are pretty amazing.

The U.S. National Academy of Sciences and the Royal Society in the U.K. are among the preeminent scientific authorities that have called for stepped up research efforts to develop geoengineering—and to assess the risks that it might itself pose to the physical environment.

Also very much in need of research (and getting it from an expert UK team that includes Nick Pidgeon) are the science-communication challenges that geoengineering is likely to confront.

Indeed, anxiety over the impact that geoengineering could have on public opinion is now putting research into the underlying science at risk. 

All the issues surrounding geoengineering, including the ethical ones, obviously demand open public deliberation.

But critics oppose even permitting research to begin lest it lull the public into a state of false security that will enervate any support for carbon emission limits—a dynamic labeled (mislabeled really, given the well-established and familiar technical meaning of the term in economics) the “moral hazard” effect.  

Political resistance fueled by this argument resulted in postponement of a very rudimentary scientific experiment (one involving the operation of a high-pressure water hose attached to a helium balloon) that was supposed to be conducted by scientists at Cambridge University last fall.

CCP recently conducted a study to see what impact geoengineering might have on the science-communication environment. We found no support for the “moral hazard” hypothesis.  Indeed, the study, which was conducted with both US and UK subjects, found that geoengineering might well improve the quality of public deliberations by reducing cultural polarization over climate change science.

The study involved an experiment in which subjects assessed a scientific study on climate change. That study (a composite of two, which appeared in Nature and the Proceedings of the National Academy of Sciences) reported researchers’ conclusion that previous projections of carbon dissipation had been too optimistic and that significant environmental harm could be anticipated no matter how much carbon emissions were reduced in the future.

The subjects, all of whom read the dissipation study, were divided into three groups, each of which was assigned to read a different mock newspaper article. Subjects in the “anti-pollution” condition read an article that reported the recommendation of scientists for even stricter CO2 limits. Subjects in the “geoengineering condition” read an article that reported the recommendation of scientists for research on geoengineering, on which the article also supplied background information.

Finally, a “control condition” group read an article about a municipality’s decision to require construction companies to post bonds for the erection of traffic signals in housing developments.

Logically speaking, what one proposes to do about climate change (implement stricter carbon emission limits, investigate geoengineering, or even put up more traffic signals) has no bearing on the validity of a scientific study that purports to find that climate change is a more serious problem than previously had been understood.

But psychologically one might expect which newspaper article subjects read to make a difference. The “moral hazard” argument, for example, posits that information about geoengineering will induce members of the public to discount the seriousness of the threat that climate change poses.

That’s not what we found, however. Indeed, contrary to the “moral hazard” hypothesis, subjects in the geoengineering condition were slightly more concerned than ones in the anti-pollution and control conditions.

We also found that the experimental assignment affected how culturally polarized the study subjects (in both countries) were. The subjects in the anti-pollution condition were the most polarized over the validity of the study (whether computer models are reliable, whether the researchers were biased, etc.); subjects in the geoengineering condition were the least.
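To be concrete about what “polarized” means operationally, the sketch below computes a simple polarization index -- the gap between the two cultural groups’ mean ratings of the study’s validity -- within each condition. The data are fabricated, with the reported ordering built in; the real estimates are in CCP Working Paper No. 92, cited below.

```python
import numpy as np
import pandas as pd

# Fabricated illustration of the polarization measure: group gaps in perceived
# study validity, by experimental condition (assumed gaps, not real estimates).
rng = np.random.default_rng(7)
assumed_gap = {"anti_pollution": 1.2, "control": 0.9, "geoengineering": 0.4}

rows = []
for cond, gap in assumed_gap.items():
    for group, shift in [("egal_comm", +gap / 2), ("hier_indiv", -gap / 2)]:
        for _ in range(100):  # 100 simulated subjects per cell
            rows.append({"condition": cond, "group": group,
                         "validity": shift + rng.standard_normal()})
df = pd.DataFrame(rows)

# Polarization = egalitarian-communitarian mean minus hierarch-individualist
# mean, within condition: largest in anti-pollution, smallest in geoengineering.
means = df.groupby(["condition", "group"])["validity"].mean().unstack()
print((means["egal_comm"] - means["hier_indiv"]).round(2))
```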

We had hypothesized this pattern based on cultural cognition research.

That research shows that individuals tend to form perceptions of risk that fit their values. Thus, egalitarian communitarians, who are morally suspicious of commerce and industry, find it congenial to believe those activities are dangerous and thus worthy of regulation. Hierarchical individualists, in contrast, tend to be dismissive of environmental risk claims, including climate change, because they value commerce and industry and perceive (unconsciously) that such claims will result in their being restricted.

These meanings were reinforced by the newspaper article in the anti-pollution condition, resulting in the two groups becoming even more divided in that condition on the validity of the carbon-dissipation study.

But the information on geoengineering, we posited, would dissipate the usual cultural meanings associated with climate change science. Because it shows that there are policy responses aside from restricting commerce and industry, information on geoengineering reduces the threat that evidence of climate change poses to hierarchical individualist sensibilities and thus the psychic incentive to dismiss that evidence out of hand.

This conjecture was the basis for predicting the depolarization effect actually observed in the geoengineering condition.

What’s the upshot?

Well, certainly not that geoengineering should be embraced as a policy solution to climate change. Whether that’s a good idea depends on the sort of research that the Royal Society and National Academy of Sciences have proposed.

Moreover, although this study furnishes evidence that engaging in that sort of research—and inviting public discussion of its implications—will actually improve the science communication environment, rather than harm it as the “moral hazard” position asserts, that proposition, too, certainly merits further research.

But the one conclusion I think can be made without qualification is that claims about the impact of scientific research on public risk perceptions, just like ones about the impact of human activity on the environment, admit of scientific investigation. 

When predictions of adverse public reactions are not only advanced without any supporting evidence but also asserted as decisive reason to block scientific inquiry, there should be little doubt that those making them lack a genuine commitment to the principles of science.

References:

Allen, M.R., et al. Warming caused by cumulative carbon emissions towards the trillionth tonne. Nature 458, 1163-1166 (2009).

Corner, A. & Pidgeon, N. Geoengineering the Climate: The Social and Ethical Implications. Environment: Science and Policy for Sustainable Development 52, 24-37 (2010).

Hamilton, C. Ethical Anxieties About Geoengineering: Moral hazard, slippery slope and playing God.  (unpublished, Sept. 27, 2011).

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) (Springer London, 2012), pp. 725-60.

Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C. & Braman, D., Geoengineering and the Science Communication Environment: a Cross-cultural Study, CCP Working Paper No. 92 (Jan. 9, 2012).

National Research Council. Advancing the Science of Climate Change, (The National Academies Press, 2010).

National Research Council. America's Climate Choices, (The National Academies Press, 2011).

Parkhill, K. & Pidgeon, N. Public Engagement on Geoengineering Research: Preliminary Report on the SPICE Deliberative Workshops, Understanding Risk Working Paper 11-01 (Understanding Risk Research Group, Cardiff University, June 2011).

Parson, E. Reflections on Air Capture: the political economy of active intervention in the global environment. Climatic Change 74, 5-15 (2006).

Royal Society. Geoengineering the climate: science, governance and uncertainty, (Royal Society, London, 2009).

Solomon, S., Plattner, G.-K., Knutti, R. & Friedlingstein, P. Irreversible climate change due to carbon dioxide emissions. Proceedings of the National Academy of Sciences 106, 1704-1709 (2009).

Time to act. Nature 458, 1077-1078 (2009).


Saturday
Feb 18, 2012

Report from Garrison Institute Climate Change conference: the good & not so good...

As noted previously, I attended the Garrison Institute meeting on Climate, Mind and Behavior.

On the positive side, the highlight, in my view, was a very interesting presentation by George Marshall.

George Marshall gets science communication! Marshall, a man of apparently unbounded curiosity, creativity, and public spirit, is organizing a set of related initiatives aimed at improving climate-change science communication.

One of these is http://talkingclimate.org/, essentially a mega-warehousing facility for collecting, organizing, & promoting transmission of empirical studies on communication.

Another is a research project aimed at the production of effective targeted messaging. Marshall outlined a research protocol that is, in my view, just what's needed, because it focuses on fine-grained matching of cultural meanings to the diverse information-processing dispositions that exist in the public. It uses empirical measurement at every stage -- from development of materials, to lab testing, to follow-up work in the field in collaboration with professional communicators.

This is exactly the systematic approach that tends to be missing from climate change science communication, which is dominated by an impressionistic throw-everything-against-the-wall-but-don't-bother-measuring-what-sticks strategy... Marshall offered a devastating (and devastatingly funny) analysis of that.

I look forward to the distribution of the video of his talk (the organizers were filming all the presentations).

On downside:

1. Goldilocks was also there. Lots of just-so story telling -- "engage emotions ... but don't scare or numb" -- based on an ad hoc mix-and-match of general psychological mechanisms w/o evidence on how they play out in this context (indeed, in disregard of the evidence that actually exists). The antithesis, really, of the careful, deliberate, fine-grained, and genuinely empirical approach that Marshall's protocol embodied. Sigh...

2. I was also genuinely shocked & saddened by what struck (assaulted) me as the anti-science ethos shared by a large number of participants.  

Multiple speakers disparaged science for being "materialistic" and for trying to "put a number on everything." One, to approving nods of audience, reported that university science instruction had lost the power to inspire "wonder" in students because it was disconnected from "spiritual" (religious, essentially) sensibilities.  

For anyone who is inclined to buy that, I strongly recommend watching The Relation of Mathematics to Physics, Lecture 2 of Richard Feynman's 1964 Messenger Lectures on the Character of Physical Law!

Actually, I think it is a huge problem in our culture that we don't make it as easy for people who have a religious outlook and love science (there are many of them!) to participate in the thrill and wonder of knowing what we know about nature as it is for those who have a more secular outlook.

But that problem is one rooted in an imperfect realization of the Liberal ideal of making all the resources of a good society (including access to its immense and inspiring knowledge of nature!) available to all citizens irrespective of their cultural worldviews or moral/political outlooks.

Those who ridicule science for being insufficiently "spiritual" or for being excessively "materialistic" etc. are engaged in a form of illiberal discourse.  They are entitled to pursue their own vision of the best way to live but should show respect -- when engaged in civic deliberations -- for those who see virtue and excellence in other aspects of the human experience.

That these anti-liberals happen to be concerned about climate change does not excuse their cultural intolerance.

Thursday
Feb 16, 2012

Slides from Garrison Institute talk

Gave talk today on "Climate Change and the Science Communication Problem" at Garrison Institute's Climate, Mind and Behavior Initiative.  Basic gist -- "it's cultural cognition, not deficiencies in rationality, so communicate meaning and not just content" -- is clear from the slides, which are here.

Wednesday
Feb 15, 2012

Scientists of science communication: Profiles #1 & #2

There is no invisible hand that guides valid scientific knowledge into the beliefs of ordinary citizens whose lives it could improve.

If simple logic doesn't make that clear, then historical experience certainly does -- from the public's rejection of "expert consensus" on deep geologic isolation of nuclear wastes to the massive backlash today against the CDC's proposal for universal vaccination of girls against HPV (just to name a couple that come to mind).

The emerging science of science communication uses scientific methods (drawn from a variety of disciplines) to identify the processes that enable nonexperts to recognize valid scientific knowledge, the dynamics that predictably disrupt those processes, and the steps that can be taken to preempt those dynamics or to reverse them when they are not successfully averted.

I will post now & again (very brief) profiles of scholars who are doing important work in this high interdisciplinary field.

One explanatory note, though: after the first entry, the profiles will not be based on any assessment on my part of the contribution the individual has made to the science of science communication. Pretty much going to list in random-ass order ones that I happen to think of at the time!

1. Paul Slovic. Slovic invented the field of public risk perceptions with his pioneering work on the "psychometric paradigm" in the late 1980s (e.g., Slovic, P. Perception of risk. Science 236, 280-285 (1987)) and is the scholar whose work in the last decade crystallized the "affect heuristic," which identifies the decisive role of emotional perception as the faculty of cognition most consequential to the formation of lay perceptions of risk (e.g., Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004)). Through his teaching and collaborations, moreover, he has also contributed immeasurably to the ability of countless other scholars to contribute to the advancement of knowledge in the risk perception and communication field (just as math has its Erdös number, so the field of public risk perception has its Slovic number!). Many of his key works (not all; it would take a library to assemble them) can be found in two collections: Slovic, P. The Perception of Risk (Earthscan Publications, London; Sterling, VA, 2000); and Slovic, P. The Feeling of Risk: New Perspectives on Risk Perception (Earthscan, London; Washington, DC, 2010).

2. James N. Druckman. Druckman, the Payson S. Wild Professor of Political Science and Faculty Fellow at the Institute for Policy Research at Northwestern University, is, to my mind, a great model of what a genuine science of science communication looks like. An editor of Public Opinion Quarterly, he is a first-rate -- world-class, even -- political scientist who has done immensely important work on framing (e.g., Druckman, J.N. Political Preference Formation: Competition, Deliberation, and the (Ir)relevance of Framing Effects. American Political Science Review 98, 671-686 (2004)). At the same time, he has turned his attention systematically to the way in which political economy and political psychology interact with (and can distinctively distort) the societal dissemination of scientific information (e.g., Druckman, J.N. & Bolsen, T. Framing, Motivated Reasoning, and Opinions About Emergent Technologies. Journal of Communication 61, 659-688 (2011)). What's more, he doesn't just grab recognized mechanisms (ones he has worked on or is simply familiar with from the general political psychology literature) and use them as a story-telling simulacrum of explanation; he conjectures and tests with actual science communication phenomena. We need more Druckmans: people who are not only great social scientists but who get that there is a distinctive set of processes affecting the dissemination of policy-relevant science and who are genuinely involved in empirically studying them.

Tuesday
Feb 14, 2012

The ideological symmetry of motivated reasoning, round 15

Okay, so Chris Mooney decides to get me in a place where he can swat me down like an annoying flea buzzing in his ear on this "asymmetry question." As I was in a big hole in terms of arguments & evidence, I had to resort to chicanery: by personally displaying more motivated reasoning than anyone would have thought humanly possible during a 30-minute period, I managed to demonstrate to the satisfaction of all objective observers that this barrier to open-minded consideration of evidence is not confined to conservatives.


Friday
Feb 10, 2012

Whoa, slow down: public conflict over climate change is more complicated than "thinking fast, slow"

With the (deserved) popularity of Kahneman's accessible and fun synthesis "Thinking, Fast and Slow" has come a (predictable) proliferation of popular commentaries attributing public dissensus over climate change to Kahneman's particular conceptualization of dual process reasoning.

Scientists, the argument goes, determine risk using the tools and habits of mind associated with "slow," System 2 thinking, which puts a premium on conscious reflection.

Lacking the time and technical acumen to make sense of complicated technical information, ordinary citizens (it's said) use visceral, affect-driven associations -- system 1. Well, climate change provokes images -- melting ice, swimming polar bears -- that just aren't as compelling, as scary as, say, terrorism (fiery skyscrapers with the ends of planes sticking out of them, etc.). Accordingly, they underestimate the risks of climate change relative to a host of more gripping threats to health and safety that scientific assessment reveals to be smaller in magnitude.

This is not a new argument. Scholars of risk perception have been advancing it for years (and reiterating/amplifying it as time passes).

The problem is that it is wrong. Empirically, demonstrably false.

Consider: 

  • Variance in the disposition to use "fast" (heuristic, affect-driven, system 1) as opposed to "slow" (conscious, reflective, deliberate system 2) modes of reasoning explains essentially none of the variance in public perception of climate change risks. In fact, when one correlates climate change risk perceptions with these dispositions, one finds that the tendency to rely on system 2 (slow) rather than 1 (fast) is associated with less concern, but the impact is so small as to be practically irrelevant. 

  • What does explain variance in climate change risk perception -- evidence shows, and has for years -- are cultural or ideological dispositions. There is a huge gulf between citizens subscribing to a hierarchical and individualistic worldview, who attach high symbolic and material value to commerce and industry and who discount all manner of environmental and technological risk, and citizens subscribing to an egalitarian and communitarian worldview, who associate commerce and industry with unjust social disparities.

  • Because climate change divides members of the public on cultural grounds, it must be the case that ordinary individuals who use system 1 ("fast") modes of reasoning form opposing intuitive or affective reactions to climate change -- "scary" for egalitarian communitarians, "enh" for hierarchical individualists. Again, evidence bears this out! (Ellen Peters, a psychologist who studies the contribution that affect, numeracy, and cultural worldviews make to risk perception, has done the best study on how cultural worldviews orient system 1/affective perceptions of risk, in my view.)

  • Individuals who are disposed to use system 2 ("slow") are not more likely to hold beliefs in line with the scientific consensus on climate change. Instead, they are even more culturally polarized than individuals who are more disposed to use "fast," system 1 reasoning. This is a reflection of the (long-established but recently forgotten) impact of motivated reasoning on system 2 forms of reasoning (i.e., conscious, deliberate, reflective forms). 

So why do so many commentators keep attributing the climate change controversy to system 1/2 or "fast/slow"?

The answer is itself system 1/2 or "fast/slow": that framework recommends itself because it is intuitively and emotionally appealing (especially to people frustrated over the failure of scientific consensus to make greater inroads in generating public consensus) and ultimately a lot easier to get than the empirically supported findings.

This is in fact part of the explanation for the "story telling" abuse of decision science mechanisms that I discussed in an earlier post.

There's only one remedy for that: genuinely scientific thinking.

Just as we are destined not to solve the problems associated with climate change without availing ourselves of the best available science on how the climate works, so we are destined to continue floundering in addressing the pathologies that generate public dissensus over climate change and a host of other issues unless we attend in a systematic, reflective, deliberate way to the science of science communication.

Monday
Feb062012

Do people with higher levels of "science aptitude" see more risk -- or less -- in climate change?

The answer — as it was for “do more educated people see more risk or less”—is neither. Until one takes their cultural values into account.

The data were collected in a survey (the same one discussed in the earlier post) of 1500 US adults drawn from a nationally representative panel. My colleagues and I measured the subjects’ climate change risk perceptions with the “Industrial Strength Measure.”

We also had them complete two tests: one developed by the National Science Foundation to measure science literacy; and another used by psychologists to measure “numeracy,” which is the capacity to engage in technical reasoning (what Kahneman calls “System 2”). Responses to these two tests form a psychometrically valid and reliable scale that measures a single disposition, one that I’m calling “science aptitude” here.
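
For concreteness, here is a sketch of how one might score such a scale -- an assumed procedure for illustration, not necessarily the paper's exact method: standardize each subscore, average them, and check internal consistency (Cronbach's alpha) across all items.

```python
# A sketch of forming a single "science aptitude" scale from two item sets.
# The scoring scheme here is an assumption for illustration, not necessarily
# the working paper's exact procedure.
import numpy as np

def zscore(x):
    return (x - x.mean()) / x.std()

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of 0/1 item scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def science_aptitude(sci_items, num_items):
    """sci_items, num_items: 0/1 answer matrices for the two tests."""
    scale = (zscore(sci_items.sum(axis=1)) + zscore(num_items.sum(axis=1))) / 2
    alpha = cronbach_alpha(np.hstack([sci_items, num_items]))
    return scale, alpha
```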

As we report in a working paper, science aptitude (and each component of it) is negatively correlated with climate change risk perceptions—i.e., as science literacy and numeracy go up, concern with climate change goes down. But by an utterly trivial amount (r = -0.09) that no one could view as practically significant—much less as a meaningful explanation for public conflict over climate change risks.

A reporter asked me to try to make this more digestible by computing the number of science-aptitude questions (out of 22 total) that were answered correctly (on average) by individuals who were less concerned with climate change risks and by those who were more concerned. The answer is: 12.6 vs. 12.3, respectively. Still a trivial difference.
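
The back-of-the-envelope arithmetic shows just how trivial:

```python
# Why r = -0.09 is trivial: it implies science aptitude accounts for well
# under 1% of the variance in risk perception -- and, in the reporter's
# terms, a fraction-of-a-question gap on the 22-item battery.
r = -0.09
print(f"variance explained: r^2 = {r**2:.4f} ({r**2:.1%})")  # 0.0081 (0.8%)
print(f"gap: {12.6 - 12.3:.1f} questions out of 22")         # 0.3 questions
```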

But as we make clear in the working paper, the inert effect of science literacy and numeracy when the sample is considered as a whole obscures the impact that science aptitude actually does have on climate change risk perceptions when subjects are assessed as members of opposing cultural groups.

Egalitarian communitarians—the individuals who are most concerned about climate change in general—become more concerned as they become more science literate and numerate. In contrast, hierarchical individualists—the individuals who are least concerned in general—become even less concerned.

The result is that cultural polarization, which is already substantial among people low in science aptitude, grows even more pronounced among individuals who are high in science aptitude.
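
Expressed as a regression, the finding is an interaction effect: the "main effect" of science aptitude is near zero while the worldview-by-aptitude term is substantial. Here is a sketch with simulated data (the coefficients are hypothetical, chosen only to reproduce the qualitative pattern, not the working paper's estimates):

```python
# Simulated illustration of the reported pattern: regress risk perception on
# worldview, science aptitude, and their product. All coefficients are
# hypothetical, chosen only to reproduce the qualitative result.
import numpy as np

rng = np.random.default_rng(1)
n = 1500
worldview = rng.choice([-1.0, 1.0], n)   # -1 hier/indiv, +1 egal/commun
aptitude = rng.normal(0, 1, n)           # standardized science aptitude
risk = (5 + 2.0 * worldview + 0.0 * aptitude
        + 1.0 * worldview * aptitude + rng.normal(0, 1, n))

X = np.column_stack([np.ones(n), worldview, aptitude, worldview * aptitude])
coefs, *_ = np.linalg.lstsq(X, risk, rcond=None)
for name, b in zip(["intercept", "worldview", "aptitude", "interaction"], coefs):
    print(f"{name:>12s}: {b:+.2f}")
```

The aptitude coefficient hovers near zero while the interaction does not: the sample-wide "effect" of knowing more science is nil even as polarization grows with it.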

Or to put it another way, knowing more science and thinking more scientifically doesn’t induce citizens to see things the way climate change scientists do. Instead, it just makes them more reliable indicators of what people with their values think about climate change generally.

This doesn’t mean that science literacy or numeracy causes conflict over climate change. The antagonistic cultural meanings in climate change communication do.

But because antagonistic cultural meanings are the source of the climate-change-debate pathology, just administering greater and greater doses of scientifically valid information can't be expected to cure it.

We don’t need more information. We need better meanings.

Sunday
Feb052012

Cultural consensus worth protecting: robots are cool!

Just a couple of yrs ago there was concern that artificial intelligence & robotics might become the next front for the "culture war of fact" in the US.

Well, good news: Everyone loves robots! Liberals & conservatives, men & women (the latter apparently not as much, though), rich & poor, dogs & cats!

We all know that the Japanese feel this way, but now there is some hard evidence -- a very rigorous poll conducted by Sodahead on-line research -- that there is a universal warm and fuzzy feeling toward robots in the US too.

This is, of course, in marked contrast to the cultural polarization we see in our society over climate change, and is thus a phenomenon worthy of intense study by scholars of risk perception.

But the contrast is not merely of academic interest: the reservoir of affection for robots is a kind of national resource -- an insurance policy in case the deep political divisions over climate change persist.

If they do, then of course we will likely all die, either from the failure to stave off climate-change induced environmental catastrophe or from some unconsidered and perverse policy response to try to stave off catastrophe.

And at that point, it will be up to the artificially intelligent robots to carry on.

You might think this is a made up issue. It's not. Even now, there are misguided people trying to sow the seeds of division on AI & robots, for what perverse, evil reason one can only try to imagine.

We have learned a lot about science communication from the climate change debacle. Whether we'll be able to use it to cure the science-communication pathology afflicting deliberations over climate change is an open question.  But we can and should at least apply all the knowledge that studying this impasse has generated to avoid the spread of this disease to future science-and-technology issues. 

And I for one can't think of an emerging technology more important to insulate from this form of destructive and mindless fate than artificial intelligence & robotics!

******

 

disclaimer: I love robots!! So much!!!
Maybe that is unconsciously skewing my assessment of the issues here (I doubt it, but I did want to mention).

Friday
Feb032012

Two common (& recent) mistakes about dual process reasoning & cognitive bias

"Dual process" theories of reasoning -- which have been around for a long time in social psychology -- posit (for the sake of forming and testing hypotheses; positing for any other purpose is obnoxious) that there is an important distinction between two types of mental operations.

Very generally, one of these involves largely unconscious, intuitive reasoning and the other conscious, reflective reasoning.

Kahneman calls these "System 1" and "System 2," respectively, but as I said the distinction is of long standing, and earlier dual process theories used different labels (I myself like "heuristic" and "systematic," the terms used by Shelley Chaiken and her collaborators; the "elaboration likelihood model" of Petty & Cacioppo uses different labels but is very similar to Chaiken's "heuristic-systematic model").

Kahneman's work (including most recently his insightful and fun synthesis "Thinking, Fast and Slow") has done a lot to focus attention on dual process theory, both in scholarly research (particularly in economics, law, public policy & other fields not traditionally frequented by social psychologists) and in public discussion generally.

Still, there are recurring themes in works that use Kahneman’s framework that reflect misapprehensions that familiarity with the earlier work in dual process theorizing would have steered people away from.

I'm not saying that Kahneman — a true intellectual giant — makes these mistakes himself or that it is his fault others are making them. I'm just saying that it is the case that these mistakes get made, with depressing frequency, by those who have come to dual process theory solely through the Kahneman System 1-2 framework.

Here are two of those mistakes (there are more but these are the ones bugging me right now).

1. The association of motivated cognition with "system 1" reasoning.  

"Motivated cognition," which is enjoying a surge in interest recently (particularly in connection with disputes over climate change), refers to the conforming of various types of reasoning (and even perception) to some goal or interest extrinsic to that of reaching an accurate conclusion.  Motivated cognition is an unconscious process; people don't deliberately fit their interpretation of arguments or their search for information to their political allegiances, etc. -- this happens to them without their knowing, and often contrary to aims they consciously embrace and want to guide their thinking and acting.

The mistake is to think that because motivated cognition is unconscious, it affects only intuitive, affective, heuristic or "fast" "System 1" reasoning. That's just false. Conscious, deliberative, systematic, "slow" "System 2" reasoning can be affected as well. That is, commitment to some extrinsic end or goal -- like one's connection to a cultural or political or other affinity group -- can unconsciously bias the way in which people consciously interpret and reason about arguments, empirical evidence and the like.

This was one of the things that Chaiken and her collaborators established a long time ago. Motivated systematic reasoning continues to be featured in social psychology work (including studies associated with cultural cognition) today.

One way to understand this earlier and ongoing work is that where motivated reasoning is in play, people will predictably condition the degree of effortful mental processing on its contribution to some extrinsic goal. So if relatively effortless heuristic reasoning generates the result that is congenial to the extrinsic goal or interest, one will go no further. But if it doesn't -- if the answer one arrives at from a quick, impressionistic engagement with information frustrates that goal -- then one will step up one's mental effort, employing systematic (Kahneman's "System 2") reasoning.

But employing it for the sake of getting the answer that satisfies the extrinsic goal or interest (like affirmation of one's identity-defining cultural group). As a result, the use of systematic or "System 2" reasoning will be biased, inaccurate.
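
In pseudocode-ish form, the conditional-effort account just described looks something like this. This is a conceptual sketch of the mechanism only; the function names are my own invention, not anyone's formal model:

```python
# A conceptual sketch of motivated systematic reasoning: heuristic processing
# runs first; effortful (system 2) processing is recruited only when the quick
# answer frustrates the extrinsic, identity-protective goal -- and even then
# it is deployed in service of that goal. All names are illustrative.

def form_belief(evidence, congenial_answer, heuristic, systematic):
    """heuristic: evidence -> quick answer
    systematic: (evidence, target) -> effortful answer biased toward target"""
    quick = heuristic(evidence)
    if quick == congenial_answer:
        return quick                # congenial result: stop, no extra effort
    # Uncongenial result: step up mental effort, but aim it at finding
    # reasons to reach the identity-affirming conclusion anyway.
    return systematic(evidence, congenial_answer)
```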

But whatever: Motivated cognition is not a form of or a consequence of "system 1" reasoning. If you had been thinking & saying that, stop. 

2. Equation of unconscious reasoning with "irrational" or biased reasoning, and of conscious reasoning with rational, unbiased reasoning.

The last error is included in this one, but this one is more general.

Expositors of Kahneman tend to describe "System 1" as "error prone" and "System 2" as "reliable" etc.

This leads lots of people to think that heuristic or unconscious reasoning processes are irrational or at least "pre-rational" substitutes for conscious "rational" reasoning. System 1 might not always be biased or always result in error, but it is where biases -- which, on this view, are essentially otherwise benign or even useful heuristics that take a malignant turn -- occur. System 2 doesn't use heuristics -- it thinks things through deductively, algorithmically -- and so "corrects" any bias associated with heuristic, System 1 reasoning.

Wrong. Just wrong. 

Indeed, this view is not only wrong, but just plain incoherent.

There is nothing that makes it onto the screen of "conscious" thought that wasn't (moments earlier!) unconsciously yanked out of the stream of unconscious mental phenomena. 

Accordingly, if a person's conscious processing of information is unbiased or rational, that can only be because that person's unconscious processing was working in a rational and unbiased way -- in guiding him or her to attend to relevant information, e.g., and to use the sort of conscious process of reasoning (like logical deduction) that makes proper sense of it.

But the point is: This is old news! It simply would not have occurred to anyone who learned about dual process theory from the earlier work to think that unconscious, heuristic, perceptive or intuitive forms of cognition are where "bias" comes from, and that conscious, reflective, systematic reasoning is where "unbiased" thinking lives.

The original dual process theorizing conceives of the two forms of reasoning as integrated and mutually supportive, not as discrete and hierarchical. It tries to identify how the entire system works -- and why it sometimes doesn't, which is why you get bias, which then, rather than being "corrected" by systematic (System 2) reasoning, distorts it as well (see motivated systematic reasoning, per above).

Even today, the most interesting stuff (in my view) that is being done on the contribution that unconscious processes like "affect" or emotion make to reasoning uses the integrative, mutually supportive conceptualization associated with the earlier work rather than the discrete, hierarchical conceptualization associated (maybe misassociated; I'm not talking about Kahneman himself) with System 1/2.

Ellen Peters, e.g., has done work showing that people who are high in numeracy -- and who thus possess the capacity and disposition to use systematic (System 2) reasoning -- don't draw less on affective reasoning (System 1...) when they outperform people low in numeracy at spotting positive-return opportunities. 

On the contrary, they use more affect, and more reliably.

In effect, their unconscious affective response (positive or negative) is what tells them that a "good deal" — or a raw one — might well be at hand, thus triggering the use of the conscious thought needed to figure out what course of action will in fact conduce to the person's well-being.

People who aren't good with numbers respond to these same situations in an affectively flat way, and as a result don't bother to engage them systematically.

This is evidence that the two processes are not discrete and hierarchical but rather are integrated and mutually supportive. Greater capacity for systematic (okay, okay, "system 2"!) reasoning over time calibrates heuristic or affective processes (system 1), which thereafter, unconsciously but reliably, turn on systematic reasoning.
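
A toy model of that calibration story (my own illustration, not Peters's design) might treat affect as a gut read on an option's expected value whose fidelity grows with numeracy, with a strong affective signal being what triggers the effortful second look:

```python
# Toy model of "numeracy trains affect": affect is a read on an option's
# expected value that is flatter and noisier at low numeracy; systematic
# (system 2) processing is engaged only when the affective signal is strong.
# All parameters are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(2)

def notices_good_deal(true_ev, numeracy, threshold=0.5):
    sensitivity = numeracy / (numeracy + 1.0)  # low numeracy -> flat affect
    affect = sensitivity * true_ev + rng.normal(0, 0.5)
    return affect > threshold                  # strong affect -> engage system 2

trials = 10_000
for numeracy in (0.5, 4.0):                    # low vs high numeracy
    hits = sum(notices_good_deal(1.0, numeracy) for _ in range(trials))
    print(f"numeracy={numeracy}: engages systematic check on "
          f"{hits / trials:.0%} of good-deal trials")
```

On this picture the high-numeracy agent isn't using less affect; it is using a better-tuned affective signal, which is what reliably summons the effortful processing.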

So: if you had been thinking or talking as if  System 1 equaled "bias" and System 2 "unbiased, rational," please just stop now.

Indeed, to help you stop, I will use a strategy founded in the original dual process work.

As I indicated, believing that consciousness leaps into being without any contribution of unconsciousness is just incoherent. It is like believing in "spontaneous generation."  

Because the idea that System 2 reasoning can correct unconscious bias without the prior assistance of unconscious, system 1 reasoning is illogical, I propose to call this view "System 2 ab initio bias.”

The effort it will take, systematically, to figure out why this is an appropriate thing for someone to accuse you of if you make this error will calibrate your emotions: you'll come to be a bit miffed when you see examples; and you'll develop a distinctive (heuristic) aversion to becoming someone who makes this mistake and gets stigmatized with a humiliating label.

And voila! -- you'll be as smart (not really; but even half would be great!) as Shelley Chaiken, Ellen Peters, et al. in no time!

References:

Chaiken, S. & Maheswaran, D. Heuristic Processing Can Bias Systematic Processing - Effects of Source Credibility, Argument Ambiguity, and Task Importance on Attitude Judgment. Journal of Personality and Social Psychology 66, 460-473 (1994).

Chaiken, S. & Trope, Y. Dual-process theories in social psychology, (Guilford Press, New York, 1999).

Chen, S., Duckworth, K. & Chaiken, S. Motivated Heuristic and Systematic Processing. Psychol Inq 10, 44-49 (1999).

Balcetis, E. & Dunning, D. See What You Want to See: Motivational Influences on Visual Perception. Journal of Personality and Social Psychology 91, 612-625 (2006).

Giner-Sorolla, R. & Chaiken, S. Selective Use of Heuristic and Systematic Processing Under Defense Motivation. Pers Soc Psychol B 23, 84-97 (1997).

Hsee, C.K. Elastic Justification: How Unjustifiable Factors Influence Judgments. Organ Behav Hum Dec 66, 122-129 (1996).

Kahan, D.M. The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law. Harv. L. Rev. 125, 1 (2011).

Kahan, D.M., Wittlin, M., Peters, E., Slovic, P., Ouellette L.L., Braman, D., Mandel, G. The Tragedy of the Risk-Perception Commons: Culture Conflict, Rationality Conflict, and Climate Change. CCP Working Paper No. 89 (June 24, 2011).

Kahneman, D. Thinking, fast and slow, (Farrar, Straus and Giroux, New York, 2011).

Kahneman, D. Maps of Bounded Rationality: Psychology for Behavioral Economics. Am Econ Rev 93, 1449-1475 (2003).

Kunda, Z. The Case for Motivated Reasoning. Psychological Bulletin 108, 480-498 (1990).

Peters, E., et al. Numeracy and Decision Making. Psychol Sci 17, 407-413 (2006).

Peters, E., Slovic, P. & Gregory, R. The role of affect in the WTA/WTP disparity. Journal of Behavioral Decision Making 16, 309-330 (2003).

 

Tuesday
Jan312012

The Goldilocks "theory" of public opinion on climate change

We often are told that "dire news" on climate change provokes dissonance-driven resistance.

Yet many commentators who credit this account also warn us not to raise public hopes by even engaging in research on -- much less discussion of -- the feasibility of geoengineering. These analysts worry that any intimation that there's a technological "fix" for global warming will lull the public into a sense of false security, dissipating political resolve to clamp down on CO2 emissions.

So one might infer that what's needed is a "Goldilocks strategy" of science communication -- one that conveys neither too much alarm nor too little but instead evokes just the right mix of fear and hope to coax the democratic process into rational engagement with the facts.

Or one might infer that what's needed is a better theory--or simply a real theory--of public opinion on climate change.

Here's a possibility: individuals form perceptions of risk that reflect their cultural commitments.

Here's what that theory implies about "dire" and "hopeful" information on climate change: what impact it has will be conditional on what response -- fear or hope, reasoned consideration or dismissiveness -- best expresses the particular cultural commitments individuals happen to have.

And finally here's some evidence from an actual experiment (conducted with both US & UK samples) designed to test this conjecture:  

  • When individuals are furnished with a "dire" message -- that substantial reductions in CO2 emissions are essential to avert catastrophic effects for the environment and human well-being -- they don't react uniformly.  Hierarchical individualists, who have strong pro-commerce and pro-technology values, do become more dismissive of scientific evidence relating to climate change. However, egalitarian communitarians, who view commerce and industry as sources of unjust social disparities, react to the same information by crediting that evidence even more forcefully.
     
  • Likewise, individuals don't react uniformly when furnished with "hopeful" information about the contribution that geoengineering might make to mitigating the consequences of climate change. Egalitarian communitarians — the ones who ordinarily are most worried — do become less inclined to credit scientific evidence that climate change is a serious problem. But when given the same information about geoengineering, the normally skeptical hierarchical individualists respond by crediting such scientific evidence more.

Am I saying that this account is conclusively established & unassailably right, that everything else one might say in addition or instead is wrong, and that therefore this, that, or the other thing ineluctably follows about what to do and how to do it? No, at least not at the moment.

The only point, for now, is about Goldilocks. When you see her, watch out.

Decision science has supplied us with a rich inventory of mechanisms. Afforded complete freedom to pick and choose among them,  any analyst with even a modicum of imagination can explain pretty much any observed pattern in risk perception however he or she chooses and thus invest whatever communication strategy strikes his or her fancy with a patina of "empirical" support.

One of the ways to prevent being taken in by this type of faux explanation is to be very skeptical about Goldilocks. Her appearance -- the need to engage in ad hoc "fine tuning" to fit a theory to seemingly disparate observations -- is usually a sign that someone doesn't actually have a valid theory and is instead abusing decision science by mining it for tropes to construct just-so stories motivated (consciously or otherwise) by some extrinsic commitment.

The account I gave of how members of the public react to information about climate change risks didn't involve adjusting one dial up and another down to try to account for multiple off-setting effects.

That's because it showed there really aren't offsetting effects here. There's only one: the crediting of  information in proportion to its congeniality to cultural predispositions. 

The account is open to empirical challenge, certainly.  But that's exactly the problem with Goldilocks theorizing: with it anything can be explained, and thus no conclusion deduced from it can be refuted.