
Wednesday
Feb 22, 2012

Climate change & the media: what's the story? (Answer: expressive rationality)

Max Boykoff has written a cool book (material from which played a major role in a panel session at the 2012 Ocean Sciences conference) examining media coverage of climate change in the U.S. 

Who Speaks for the Climate? documents, in a more rigorous and informative way than anything I've ever read, the conservation of "balance" in the media coverage of the climate change debate no matter how lopsided the scientific evidence becomes.

Boykoff's own take -- and that of pretty much everyone I've heard comment on this phenomenon -- is negative: there is something wrong w/ norms of science journalism or the media generally if scientifically weak arguments are given just as much space & otherwise treated just as seriously as strong ones.

I have a slightly different view: "balanced" coverage is evidence of the expressive rationality of public opinion on climate change.

News media don't have complete freedom to cover whatever they want, however they want to. Newspapers and other news-reporting entities are commercial enterprises. To survive, they must cover the stories that people want to read about.

What people want to read are stories containing information relevant to their personal lives. Accordingly, one can expect newspapers to cover the aspect of the "climate change story" that is most consequential for the well-being of their individual readers.

The aspect of the climate change story that's most consequential for ordinary members of the public is that there's a bitter, persistent, culturally polarized debate over it. Knowing that has a much bigger impact on ordinary individuals than knowing what the science is.

Nothing an individual thinks about climate change will affect the level of risk that climate change poses for him or her. The impact of that individual's behavior as consumer, voter, public discussant, etc., is just too small to matter -- either for how carbon emissions affect the environment or for what governments do in response.

However, the position an individual takes on climate change can have a huge impact on that person's social standing within his or her community. A university professor in New Haven, CT, or Cambridge, MA, will be derisively laughed at and then shunned if he or she starts marching around campus with a sign saying "climate change is a hoax!" The same goes for someone in a mirror-image hierarchical-individualistic community (say, a tobacco farmer living somewhere in South Carolina's 4th congressional district) who insists to his friends & neighbors, "no, really, I've looked closely at the science -- the ice caps are melting because of what human beings are doing to the environment."

In other words, it's costless for ordinary individuals to take a position that is at odds with climate science, but costly to take one that has a culturally hostile meaning within groups whose support (material, emotional & otherwise) they depend on.

Predictably, then, individuals tend to pay a lot of attention to whatever cues are out there that can help them identify what cultural meanings (if any) a disputed risk or related fact issue conveys, and to expend a lot of cognitive effort (much of it nonconscious) to form beliefs that avoid estranging them from their communities.

Predictably, too, the media, being responsive to market forces, will devote a lot more time and effort to reporting information that is relevant to identifying the cultural meaning of climate change than to information relevant to determining the weight or the details of scientific evidence on this issue.

So my take on Boykoff's evidence is different from his.

But it is still negative.

It might be individually rational for people to fit their perceptions of climate change and other societal risks to the positions that predominate in their communities but it is nevertheless collectively irrational for them all to form their beliefs this way simultaneously: the more impelled culturally diverse individuals are to form group-congruent beliefs rather than truth-congruent ones, the less likely democratic institutions are to form policies that succeed in securing their common welfare.

The answer, however, isn't to try to change the norms of the media. They will inevitably cover the story that matters to us.

What we need to do, then, is change the story on climate change. We need to create new meanings for climate change that liberate science from the antagonistic ones  that now make taking the "wrong" position (any position) tantamount to cultural treason.

Tuesday
Feb 21, 2012

Ocean Science Meeting science communication panel

Here's where I am (or will be in a few hrs).

Plan to say (1) there is a science of science communication; (2) it has assembled a good deal of data on why the public is divided on climate change; (3) what that data show is that the explanation is neither lack of scientific knowledge nor the inability to engage scientific information in a rational or systematic fashion ("system 2" etc); (4) what does explain conflict is motivated reasoning (cultural & otherwise); and (5) dispelling the conflict requires communication strategies that are responsive to this dynamic.

More later!

 

Monday
Feb 20, 2012

Could geoengineering cool the climate change debate?

Geoengineering (according to the National Academy of Sciences) “refers to deliberate, large-scale manipulations of Earth’s environment designed to offset some of the harmful consequences of [greenhouse-gas induced] climate change.” But what impact might the advent of this emerging technology have on the science-communication environment in which the public makes sense of the evidence for climate change and its significance?

Geoengineering is still very much at the drawing board stage, but the sketches of what it might look like—from solar-reflective nanotechnology flying saucers to floating mist-emitting “cloud whiteners”—are pretty amazing.

The U.S. National Academy of Sciences and the Royal Society in the U.K. are among the preeminent scientific authorities that have called for stepped up research efforts to develop geoengineering—and to assess the risks that it might itself pose to the physical environment.

Also very much in need of research (and getting it from an expert UK team that includes Nick Pidgeon) are the science-communication challenges that geoengineering is likely to confront.

Indeed, anxiety over the impact that geoengineering could have on public opinion is now putting research into the underlying science at risk. 

All the issues surrounding geoengineering, including the ethical ones, obviously demand open public deliberation.

But critics oppose even permitting research to begin lest it lull the public into a state of false security that will enervate any support for carbon emission limits—a dynamic labeled (mislabeled really, given the well-established and familiar technical meaning of the term in economics) the “moral hazard” effect.  

Political resistance fueled by this argument resulted in postponement of a very rudimentary scientific experiment (one involving the operation of a high-pressure water hose attached to a helium balloon) that was supposed  to be conducted by scientists at Cambridge University last fall.

CCP recently conducted a study to see what impact geoengineering might have on the science-communication environment. We found no support for the “moral hazard” hypothesis.  Indeed, the study, which was conducted with both US and UK subjects, found that geoengineering might well improve the quality of public deliberations by reducing cultural polarization over climate change science.

The study involved an experiment in which subjects assessed a scientific study on climate change. The study (a composite of two, which appeared in Nature and the Proceedings of the National Academy of Sciences) reported researchers’ conclusion that previous projections of carbon dissipation had been too optimistic and that significant environmental harm could be anticipated no matter how much carbon emissions were reduced in the future.

The subjects, all of whom read the dissipation study, were divided into three groups, each of which was assigned to read a different mock newspaper article. Subjects in the “anti-pollution” condition read an article that reported the recommendation of scientists for even stricter CO2 limits. Subjects in the “geoengineering condition” read an article that reported the recommendation of scientists for research on geoengineering, on which the article also supplied background information.

Finally, a “control condition” group read an article about a municipality’s decision to require construction companies to post bonds for the erection of traffic signals in housing developments.

Logically speaking, what one proposes to do about climate change (implement stricter carbon emission limits, investigate geoengineering, or even put up more traffic signals) has no bearing on the validity of a scientific study that purports to find that climate change is a more serious problem than previously had been understood.

But psychologically one might expect which newspaper article subjects read to make a difference. The “moral hazard” argument, for example, posits that information about geoengineering will induce members of the public to discount the seriousness of the threat that climate change poses.

That’s not what we found, however. Indeed, contrary to the “moral hazard” hypothesis, subjects in the geoengineering condition were slightly more concerned than ones in the anti-pollution and control conditions.

We also found that the experimental assignment affected how culturally polarized the study subjects (in both countries) were. The subjects in the anti-pollution condition were the most polarized over the validity of the study (whether computer models are reliable, whether the researchers were biased, etc.); subjects in the geoengineering condition were the least.
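To make "more" and "less" polarized concrete, here is a minimal, purely illustrative sketch of the kind of quantity involved: the gap between worldview groups' average ratings of the study's validity, computed separately within each experimental condition. The group means, sample sizes, and rating scale below are invented assumptions for illustration, not the CCP study's data or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed group means on a 1-7 "study validity" rating; invented numbers
# chosen only to mimic the qualitative pattern described in the post.
assumed_means = {
    "anti-pollution": {"egal_comm": 5.5, "hier_ind": 2.5},
    "control":        {"egal_comm": 5.0, "hier_ind": 3.0},
    "geoengineering": {"egal_comm": 4.8, "hier_ind": 3.8},
}

def polarization(condition, n=200, sd=1.0):
    """Gap in mean perceived validity between the two worldview groups."""
    ec = rng.normal(assumed_means[condition]["egal_comm"], sd, n)
    hi = rng.normal(assumed_means[condition]["hier_ind"], sd, n)
    return ec.mean() - hi.mean()

for condition in ("anti-pollution", "control", "geoengineering"):
    print(f"{condition:15s} polarization = {polarization(condition):.2f}")
```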

 We had hypothesized this pattern based on cultural cognition research.

That research shows that individuals tend to form perceptions of risk that fit their values. Thus, egalitarian communitarians, who are morally suspicious of commerce and industry, find it congenial to believe those activities are dangerous and thus worthy of regulation. Hierarchical individualists, in contrast, tend to be dismissive of environmental risk claims, including climate change, because they value commerce and industry and perceive (unconsciously) that such claims will result in their being restricted.

These meanings were reinforced by the newspaper article in the anti-pollution condition, resulting in the two groups becoming even more divided in that condition on the validity of the carbon-dissipation study.

But the information on geoengineering, we posited, would dissipate the usual cultural meanings associated with climate change science. Because it shows that there are policy responses aside from restricting commerce and industry, information on geoengineering reduces the threat that evidence of climate change poses to hierarchical individualist sensibilities and thus the psychic incentive to dismiss that evidence out of hand.

This conjecture was the basis for predicting the depolarization effect actually observed in the geoengineering condition.

What’s the upshot?

Well, certainly not that geoengineering should be embraced as a policy solution to climate change. Whether that’s a good idea depends on the sort of research that the Royal Society and National Academy of Sciences have proposed.

Moreover, although this study furnishes evidence that engaging in that sort of research—and inviting public discussion of its implications—will actually improve the science communication environment, rather than harm it as the “moral hazard” position asserts, that proposition, too, certainly merits further research.

But the one conclusion I think can be made without qualification is that claims about the impact of scientific research on public risk perceptions, just like ones about the impact of human activity on the environment, admit of scientific investigation. 

When predictions of adverse public reactions are not only advanced without any supporting evidence but also asserted as decisive reason to block scientific inquiry, there should be little doubt that those making them lack a genuine commitment to the principles of science.

References:

Allen, M.R., et al. Warming caused by cumulative carbon emissions towards the trillionth tonne. Nature 458, 1163-1166 (2009).

Corner, A. & Pidgeon, N. Geoengineering the Climate: The Social and Ethical Implications. Environment: Science and Policy for Sustainable Development 52, 24-37 (2010).

Hamilton, C. Ethical Anxieties About Geoengineering: Moral hazard, slippery slope and playing God.  (unpublished, Sept. 27, 2011).

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) (Springer London, 2012), pp. 725-60.

Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C. & Braman, D. Geoengineering and the Science Communication Environment: a Cross-cultural Study. CCP Working Paper No. 92 (Jan. 9, 2012).

National Research Council. Advancing the Science of Climate Change, (The National Academies Press, 2010).

National Research Council. America's Climate Choices, (The National Academies Press, 2011).

Parkhill, K. & Pidgeon, N. Public Engagement on Geoengineering Research: Preliminary Report on the SPICE Deliberative Workshops, Understanding Risk Working Paper 11-01 (Understanding Risk Research Group, Cardiff University, June 2011).

Parson, E. Reflections on Air Capture: the political economy of active intervention in the global environment. Climatic Change 74, 5-15 (2006).

Royal Society. Geoengineering the climate: science, governance and uncertainty, (Royal Society, London, 2009).

Solomon, S., Plattner, G.-K., Knutti, R. & Friedlingstein, P. Irreversible climate change due to carbon dioxide emissions. Proceedings of the National Academy of Sciences 106, 1704-1709 (2009).

Time to act. Nature 458, 1077-1078 (2009).


Saturday
Feb 18, 2012

Report from Garrison Institute Climate Change conference: the good & not so good...

As noted previously, I attended the Garrison Institute meeting on Climate, Mind and Behavior.

On the positive side, the highlight, in my view, was a very interesting presentation by George Marshall.

George Marshall: He gets science communication!

Marshall, a man of apparently unbounded curiosity, creativity, and public spirit, is organizing a set of related initiatives aimed at improving climate-change science communication.

One of these is http://talkingclimate.org/, essentially a mega-warehousing facility for collecting, organizing, & promoting transmission of empirical studies on communication.

Another is a research project aimed at production of effective targeted messaging. Marshall outlined a research protocol that is, in my view, just what's needed because it focuses on fine-grained matching of cultural meanings to the diverse information-processing dispositions that exist in the public. It uses empirical measurement at every stage -- from development of materials, to lab testing, to follow-up work in the field in collaboration with professional communicators.

This is exactly the systematic approach that tends to be missing from climate change science communication, which is dominated by an impressionistic throw-everything-against-the-wall-but-don't-bother-measuring-what-sticks strategy...  Marshall offered a devastating (and devastatingly funny) analysis of that.

I look forward to the distribution of the video of his talk (the organizers were filming all the presentations).

On downside:

1.  Goldilocks was also there. Lots of just-so story telling -- "engage emotions ... but don't scare or numb" -- based on ad hoc mix and match of general psychological mechanisms w/o evidence on how they play out in this context (indeed, in disregard of the evidence that actually exists). The antithesis, really, of the careful, deliberate, fine-grained, and genuinely empirical approach that Marshall's protocol embodied. Sigh...

2. I was also genuinely shocked & saddened by what struck (assaulted) me as the anti-science ethos shared by a large number of participants.  

Multiple speakers disparaged science for being "materialistic" and for trying to "put a number on everything." One, to approving nods from the audience, reported that university science instruction had lost the power to inspire "wonder" in students because it was disconnected from "spiritual" (religious, essentially) sensibilities.

For anyone who is inclined to buy that, I strongly recommend watching The Relation of Mathematics to Physics, Lecture 2 of Richard Feynman's 1964 Messenger Lectures on the Character of Physical Law!

Actually, I think it is a huge problem in our culture that we don't make it as easy for people who have a religious outlook and love science (there are many of them!) to participate in the thrill and wonder of knowing what we know about nature as it is for those who have a more secular outlook & love it.

But that problem is one rooted in an imperfect realization of the Liberal ideal of making all the resources of a good society (including access to its immense and inspiring knowledge of nature!) available to all citizens irrespective of their cultural worldviews or moral/political outlooks.

Those who ridicule science for being insufficiently "spiritual" or for being excessively "materialistic" etc. are engaged in a form of illiberal discourse.  They are entitled to pursue their own vision of the best way to live but should show respect -- when engaged in civic deliberations -- for those who see virtue and excellence in other aspects of the human experience.

That these anti-liberals happen to be concerned about climate change does not excuse their cultural intolerance.

Thursday
Feb 16, 2012

Slides from Garrison Institute talk

Gave talk today on "Climate Change and the Science Communication Problem" at Garrison Institute's Climate, Mind and Behavior Initiative.  Basic gist -- "it's cultural cognition, not deficiencies in rationality, so communicate meaning and not just content" -- is clear from the slides, which are here.

 

 

Wednesday
Feb 15, 2012

Scientists of science communication: Profiles #1 & #2

There is no invisible hand that guides valid scientific knowledge into the beliefs of ordinary citizens whose lives it could improve.

If simple logic doesn't make that clear, then historical experience certainly does -- from the public's rejection of "expert consensus" on deep geologic isolation of nuclear wastes to the massive backlash today against the CDC's proposal for universal vaccination of girls against HPV (just to name a couple that come to mind).

The emerging science of science communication uses scientific methods (drawn from a variety of disciplines) to identify the processes that enable nonexperts to recognize valid scientific knowledge, the dynamics that predictably disrupt those processes, and the steps that can be taken to preempt those dynamics or to reverse them when they are not successfully averted.

I will post now & again (very brief) profiles of scholars who are doing important work in this highly interdisciplinary field.

One explanatory note, though: after the first entry, the profiles will not be based on any assessment on my part of the contribution the individual has made to the science of science communication. Pretty much going to list in random-ass order ones that I happen to think of at the time!

1. Paul Slovic. Slovic invented the field of public risk perceptions with his pioneering work on the "psychometric paradigm" in the late 1980s (e.g., Slovic, P. Perception of risk. Science 236, 280-285 (1987)) and is the scholar whose work in the last decade crystallized the "affect heuristic," which identifies the decisive role of emotional perception as the faculty of cognition most consequential to the formation of lay perceptions of risk (e.g., Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004)). Through his teaching and collaborations, moreover, he has also contributed immeasurably to the ability of countless other scholars to contribute to the advancement of knowledge in the risk perception and communication field (just as math has its Erdös number, so the field of public risk perception has its Slovic number!). Many of his key works (not all; it would take a library to assemble them) can be found in two collections: Slovic, P. The Perception of Risk (Earthscan Publications, London; Sterling, VA, 2000), and Slovic, P. The Feeling of Risk: New Perspectives on Risk Perception (Earthscan, London; Washington, DC, 2010).

2. James N. Druckman. Druckman, the Payson S. Wild Professor of Political Science and Faculty Fellow at the Institute for Policy Research at Northwestern University, and an editor of Public Opinion Quarterly, is, to my mind, a great model of what genuine science of science communication looks like. He is a first-rate -- world-class, even -- political scientist, who has done immensely important work on framing (e.g., Druckman, J.N. Political Preference Formation: Competition, Deliberation, and the (Ir)relevance of Framing Effects. American Political Science Review 98, 671-686 (2004)). At the same time, he has turned his attention systematically to the way in which political economy and political psychology interact with (and can distinctively distort) societal dissemination of scientific information (e.g., Druckman, J.N. & Bolsen, T. Framing, Motivated Reasoning, and Opinions About Emergent Technologies. Journal of Communication 61, 659-688 (2011)). What's more, he doesn't just grab recognized mechanisms (ones he has worked on or is simply familiar with from the general political psychology literature) and use them as a story-telling simulacrum of explanation; he conjectures and tests with actual science communication phenomena. We need more Druckmans: people who are not only great social scientists but who get that there is a distinctive set of processes affecting the dissemination of policy-relevant science and who are genuinely involved in empirically studying them.

Tuesday
Feb 14, 2012

The ideological symmetry of motivated reasoning, round 15

Okay, so Chris Mooney decides to get me in a place where he can swat me down like an annoying flea buzzing in his ear on this "asymmetry question." As I was in a big hole in terms of arguments & evidence, I had to resort to chicanery: by personally displaying more motivated reasoning than anyone would have thought humanly possible during a 30-minute period, I managed to demonstrate to the satisfaction of all objective observers that this barrier to open-minded consideration of evidence is not confined to conservatives.


Friday
Feb 10, 2012

Whoa, slow down: public conflict over climate change is more complicated than "thinking fast, slow"

With the (deserved) popularity of Kahneman's accessible and fun synthesis "Thinking, Fast and Slow" has come a (predictable) proliferation of popular commentaries attributing public dissensus over climate change to Kahneman's particular conceptualization of dual process reasoning.

Scientists, the argument goes, determine risk using the tools and habits of mind associated with "slow," System 2 thinking, which puts a premium on conscious reflection.

Lacking the time and technical acumen to make sense of complicated technical information, ordinary citizens (it's said) use visceral, affect-driven associations -- system 1. Well, climate change provokes images -- melting ice, swimming polar bears -- that just aren't as compelling, as scary as, say, terrorism (fiery skyscrapers with the ends of planes sticking out of them, etc.). Accordingly, they underestimate the risks of climate change relative to a host of more gripping threats to health and safety that scientific assessment reveals to be smaller in magnitude.

This is not a new argument. Scholars of risk perception have been advancing it for years (and reiterating/amplifying it as time passes).

The problem is that it is wrong.  Empirically demonstrably false.

Consider: 

  • Variance in the disposition to use "fast" (heuristic, affect-driven, system 1) as opposed to "slow" (conscious, reflective, deliberate system 2) modes of reasoning explains essentially none of the variance in public perception of climate change risks. In fact, when one correlates climate change risk perceptions with these dispositions, one finds that the tendency to rely on system 2 (slow) rather than 1 (fast) is associated with less concern, but the impact is so small as to be practically irrelevant. 

  • What does explain variance in climate change risk perception -- evidence shows, and has for years -- are cultural or ideological dispositions. There is a huge gulf between citizens subscribing to a hierarchical and individualistic worldview, who attach high symbolic and material value to commerce and industry and who discount all manner of environmental and technological risk, and citizens subscribing to an egalitarian and communitarian worldview, who associate commerce and industry with unjust social disparities.

 

  • Because climate change divides members of the public on cultural grounds, it must be the case that ordinary individuals who use system 1 ("fast") modes of reasoning form opposing intuitive or affective reactions to climate change -- "scary" for egalitarians and communitarians, "enh" for hierarchical individualists. Again, evidence bears this out! (Ellen Peters, a psychologist who studies the contribution that affect, numeracy, and cultural worldviews make to risk perception has done the best study on how cultural worldviews orient system 1/affective perceptions of risk, in my view.)

  • Individuals who are disposed to use system 2 ("slow") are not more likely to hold beliefs in line with the scientific consensus on climate change. Instead, they are even more culturally polarized than individuals who are more disposed to use "fast," system 1 reasoning. This is a reflection of the (long-established but recently forgotten) impact of motivated reasoning on system 2 forms of reasoning (i.e., conscious, deliberate, reflective forms). 

 

So why do so many commentators keep attributing the climate change controversy to system 1/2 or "fast/slow"?

The answer lies in system 1/2 or "fast/slow" itself: that framework recommends itself -- it is intuitively and emotionally appealing (especially to people frustrated over the failure of scientific consensus to make greater inroads in generating public consensus) and ultimately a lot easier to grasp than the empirically supported findings.

This is in fact part of the explanation for the "story telling" abuse of decision science mechanisms that I discussed in an earlier post.

There's only one remedy for that: genuinely scientific thinking.

Just as we are destined not to solve the problems associated with climate change without availing ourselves of the best available science on how the climate works, so we are destined to continue floundering in addressing the pathologies that generate public dissensus over climate change and a host of other issues unless we attend in a systematic, reflective, deliberate way to the science of science communication.

Monday
Feb 6, 2012

Do people with higher levels of "science aptitude" see more risk -- or less -- in climate change?

The answer — as it was for “do more educated people see more risk or less”—is neither. Until one takes their cultural values into account.

The data were collected in a survey (the same one discussed in the earlier post) of 1500 US adults drawn from a nationally representative panel. My colleagues and I measured the subjects’ climate change risk perceptions with the “Industrial Strength Measure.”

We also had them complete two tests: one developed by the National Science Foundation to measure science literacy; and another used by psychologists to measure “numeracy,” which is the capacity to engage in technical reasoning (what Kahneman calls “System 2”). Responses to these two tests form a psychometrically valid and reliable scale that measures a single disposition, one that I’m calling “science aptitude” here.
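To illustrate the sort of scale construction this describes, the sketch below fabricates right/wrong responses to 22 items for 1,500 hypothetical subjects, sums them into a single "science aptitude" score, and computes Cronbach's alpha as a conventional reliability check. Everything in it is assumed for illustration; it is not the NSF science literacy battery, the numeracy items, or the paper's actual psychometric analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

n_subjects, n_items = 1500, 22
# Hypothetical responses: subjects vary in ability, items in difficulty.
ability = rng.normal(0, 1, n_subjects)
difficulty = rng.normal(0, 1, n_items)
p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
items = rng.binomial(1, p_correct)          # 1500 x 22 matrix of 0/1 answers

aptitude = items.sum(axis=1)                # summed composite score

def cronbach_alpha(x):
    """Internal-consistency reliability of a summed scale."""
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print("mean items correct:", round(float(aptitude.mean()), 1))
print("Cronbach's alpha:  ", round(float(cronbach_alpha(items)), 2))
```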

As we report in a working paper, science aptitude (and each component of it) is negatively correlated with climate change risk perceptions—i.e., as science literacy and numeracy go up, concern with climate change goes down. But by an utterly trivial amount (r = 0.09) that no one could view as practically significant—much less as a meaningful explanation for public conflict over climate change risks.

A reporter asked me to try to make this more digestible by computing the number of science-aptitude questions (out of 22 total) that were answered correctly (on average) by individuals who were less concerned with climate change risks and by those who were more concerned. The answer is: 12.6 vs. 12.3, respectively. Still a trivial difference.
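For concreteness, here is the back-of-the-envelope arithmetic behind calling those numbers trivial, using only the figures quoted in this post (an illustration, not a reanalysis of the data):

```python
# r = 0.09 implies the disposition explains well under 1% of the variance.
r = 0.09
print(f"share of variance explained: {r ** 2:.1%}")

# 12.6 vs. 12.3 correct out of 22 items is a gap of 0.3 questions.
less_concerned, more_concerned, n_items = 12.6, 12.3, 22
gap = less_concerned - more_concerned
print(f"gap in items correct: {gap:.1f} of {n_items} ({gap / n_items:.1%} of the test)")
```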

But as we make clear in the working paper, the inert effect of science literacy and numeracy when the sample is considered as a whole obscures the impact that science aptitude actually does have on climate change risk perceptions when subjects are assessed as members of opposing cultural groups.

Egalitarian communitarians—the individuals who are most concerned about climate change in general—become more concerned as they become more science literate and numerate. In contrast, hierarchical individualists—the individuals who are least concerned in general—become even less concerned.

The result is that cultural polarization, which is already substantial among people low in science aptitude, grows even more pronounced among individuals who are high in science aptitude.
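In regression terms, that pattern is an interaction between science aptitude and cultural worldview: little or no main effect of aptitude, but a sizable product term. The sketch below simulates that structure with invented coefficients and hypothetical data, simply to show what "polarization grows with aptitude" looks like in a linear model; it is not the working paper's model or its estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 1500
aptitude = rng.normal(0, 1, n)                # standardized science aptitude
worldview = rng.choice([-0.5, 0.5], size=n)   # -0.5 hier-ind, +0.5 egal-comm

# Assumed data-generating process: no overall aptitude effect, a worldview
# gap, and a worldview-by-aptitude interaction (invented coefficients).
risk = 1.2 * worldview + 0.6 * worldview * aptitude + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), aptitude, worldview, aptitude * worldview])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)

for name, b in zip(["intercept", "aptitude", "worldview", "aptitude x worldview"], beta):
    print(f"{name:22s} {b:+.2f}")
```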

Or to put it another way, knowing more science and thinking more scientifically doesn’t induce citizens to see things the way climate change scientists do. Instead, it just makes them more reliable indicators of what people with their values think about climate change generally.

This doesn’t mean that science literacy or numeracy causes conflict over climate change. The antagonistic cultural meanings in climate change communication do.

But because antagonistic cultural meanings are the source of the climate-change-debate pathology, just administering greater and greater doses of scientifically valid information can't be expected to cure it.

We don’t need more information. We need better meanings.

Sunday
Feb 5, 2012

Cultural consensus worth protecting: robots are cool!

Just a couple of yrs ago there was concern that artificial intelligence & robotics might become the next front for the "culture war of fact" in US.

Well, good news: Everyone loves robots! Liberals & conservatives, men & women (the latter apparently not as much, though), rich & poor, dogs & cats!

We all know that the Japanese feel this way, but now we have some hard evidence -- a very rigorous poll conducted by Sodahead on-line research -- that there is a universal warm and fuzzy feeling toward robots in the US too.

This is, of course, in marked contrast to the cultural polarization we see in our society over climate change, and is thus a phenomenon worthy of intense study by scholars of risk perception.

But the contrast is not merely of academic interest: the reservoir of affection for robots is a kind of national resource -- an insurance policy in case the deep political divisions over climate change persist.

If they do, then of course we will likely all die, either from the failure to stave off climate-change induced environmental catastrophe or from some unconsidered and perverse policy response to try to stave off catastrophe.

And at that point, it will be up to the artificially intelligent robots to carry on.

You might think this is a made up issue. It's not. Even now, there are misguided people trying to sow the seeds of division on AI & robots, for what perverse, evil reason one can only try to imagine.

We have learned a lot about science communication from the climate change debacle. Whether we'll be able to use it to cure the science-communication pathology afflicting deliberations over climate change is an open question.  But we can and should at least apply all the knowledge that studying this impasse has generated to avoid the spread of this disease to future science-and-technology issues. 

And I for one can't think of an emerging technology more important to insulate from this form of destructive and mindless fate than artificial intelligence & robotics!

******

 

disclaimer: I love robots!! So much!!!
Maybe that is unconsciously skewing my assessment of the issues here (I doubt it, but I did want to mention).

Friday
Feb 3, 2012

Two common (& recent) mistakes about dual process reasoning & cognitive bias

"Dual process" theories of reasoning -- which have been around for a long time in social psychology -- posit (for the sake of forming and testing hypotheses; positing for any other purpose is obnoxious) that there is an important distinction between two types of mental operations.

Very generally, one of these involves largely unconscious, intuitive reasoning and the other conscious, reflective reasoning.

Kahneman calls these "System 1" and "System 2," respectively, but as I said the distinction is of long standing, and earlier dual process theories used different labels (I myself like "heuristic" and "systematic," the terms used by Shelly Chaiken and her collaborators; the "elaboration likelihood model" of Petty & Cacioppo uses different labels but is very similar to Chaiken's "heuristic-systematic model").

Kahneman's work (including most recently his insightful and fun synthesis "Thinking, Fast and Slow") has done a lot to focus attention on dual process theory, both in scholarly research (particularly in economics, law, public policy & other fields not traditionally frequented by social psychologists) and in public discussion generally.

Still, there are recurring themes in works that use Kahneman’s framework that reflect misapprehensions that familiarity with the earlier work in dual process theorizing would have steered people away from.

I'm not saying that Kahneman — a true intellectual giant — makes these mistakes himself or that it is his fault others are making them. I'm just saying that it is the case that these mistakes get made, with depressing frequency, by those who have come to dual process theory solely through the Kahneman System 1-2 framework.

Here are two of those mistakes (there are more but these are the ones bugging me right now).

1. The association of motivated cognition with "system 1" reasoning.  

"Motivated cognition," which is enjoying a surge in interest recently (particularly in connection with disputes over climate change), refers to the conforming of various types of reasoning (and even perception) to some goal or interest extrinsic to that of reaching an accurate conclusion.  Motivated cognition is an unconscious process; people don't deliberately fit their interpretation of arguments or their search for information to their political allegiances, etc. -- this happens to them without their knowing, and often contrary to aims they consciously embrace and want to guide their thinking and acting.

The mistake is to think that because motivated cognition is unconscious, it affects only intuitive, affective, heuristic or "fast" "System 1" reasoning. That's just false. Conscious, deliberative, systematic, "slow" "System 2" reasoning can be affected as well. That is, commitment to some extrinsic end or goal -- like one's connection to a cultural or political or other affinity group -- can unconsciously bias the way in which people consciously interpret and reason about arguments, empirical evidence and the like.

This was one of the things that Chaiken and her collaborators established a long time ago. Motivated systematic reasoning continues to be featured in social psychology work (including studies associated with cultural cognition) today.

One way to understand this earlier and ongoing work is that where motivated reasoning is in play, people will predictably condition the degree of effortful mental processing on its contribution to some extrinsic goal. So if relatively effortless heuristic reasoning generates the result that is congenial to the extrinsic goal or interest, one will go no further. But if it doesn't -- if the answer one arrives at from a quick, impressionistic engagement with information frustrates that goal -- then one will step up one's mental effort, employing systematic (Kahneman's "System 2") reasoning.

But one will employ it for the sake of getting the answer that satisfies the extrinsic goal or interest (like affirmation of one's identity-defining cultural group). As a result, the use of systematic or "System 2" reasoning will be biased, inaccurate.
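One way to see the structure of this account is as a conditional-effort rule. The toy sketch below is my own illustration of that logic, not any published model: the heuristic answer is kept when it already matches the identity-congenial conclusion; otherwise systematic processing is recruited, but pointed at the congenial conclusion rather than at accuracy.

```python
# Toy illustration of motivated systematic processing (hypothetical, not a
# published model): effort is conditioned on whether the quick answer
# already serves the identity-protective goal.
def form_belief(evidence, congenial, heuristic, systematic):
    quick = heuristic(evidence)
    if quick == congenial:
        return quick                               # effortless answer already fits the goal
    return systematic(evidence, target=congenial)  # step up effort, aimed at the congenial answer

# Hypothetical stand-ins for the two modes of processing.
heuristic = lambda e: e["gut_reaction"]
systematic = lambda e, target: target if e["counterarguments_available"] else e["gut_reaction"]

belief = form_belief(
    {"gut_reaction": "the evidence is weak", "counterarguments_available": True},
    congenial="the evidence is weak",
    heuristic=heuristic,
    systematic=systematic,
)
print(belief)
```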

But whatever: Motivated cognition is not a form of or a consequence of "system 1" reasoning. If you had been thinking & saying that, stop. 

2.  Equation of unconscious reasoning with "irrational" or biased reasoning, and equation of conscious with rational, unbiased.

The last error is included in this one, but this one is more general.

Expositors of Kahneman tend to describe "System 1" as "error prone" and "System 2" as "reliable" etc.

This leads lots of people to think that heuristic or unconscious reasoning processes are irrational or at least "pre-rational" substitutes for conscious "rational" reasoning. System 1 might not always be biased or always result in error, but it is where biases -- which, on this view, are essentially otherwise benign or even useful heuristics that take a malignant turn -- occur. System 2 doesn't use heuristics -- it thinks things through deductively, algorithmically -- and so "corrects" any bias associated with heuristic, System 1 reasoning.

Wrong. Just wrong. 

Indeed, this view is not only wrong, but just plain incoherent.

There is nothing that makes it onto the screen of "conscious" thought that wasn't (moments earlier!) unconsciously yanked out of the stream of unconscious mental phenomena. 

Accordingly, if a person's conscious processing of information is unbiased or rational, that can only be because that person's unconscious processing was working in a rational and unbiased way -- in guiding him or her to attend to relevant information, e.g., and to use the sort of conscious process of reasoning (like logical deduction) that makes proper sense of it.

But the point is: This is old news! It simply would not have occurred to anyone who learned about the dual process theory from the earlier work to think that unconscious, heuristic, perceptive or intuitive forms of cognition are where "bias" comes from, and that conscious, reflective, systematic reasoning is where "unbiased" thinking lives.

The original dual process theorizing conceives of the two forms of reasoning as integrated and mutually supportive, not as discrete and hierarchical. It tries to identify how the entire system works -- and why it sometimes doesn't, which is why you get bias, which then, rather than being "corrected" by systematic (System 2) reasoning, distorts it as well (see motivated systematic reasoning, per above).

Even today, the most interesting stuff (in my view) that is being done on the contribution that unconscious processes like "affect" or emotion make to reasoning uses the integrative, mutually supportive conceptualization associated with the earlier work rather than the discrete, hierarchical conceptualization associated (maybe misassociated; I'm not talking about Kahneman himself) with System 1/2.

Ellen Peters, e.g., has done work showing that people who are high in numeracy -- and who thus possess the capacity and disposition to use systematic (System 2) reasoning -- don't draw less on affective reasoning (System 1...) when they outperform people who are low in numeracy at spotting positive-return opportunities.

On the contrary, they use more affect, and more reliably.

In effect, their unconscious affective response (positive or negative) is what tells them that a "good deal" — or a raw one — might well be at hand, thus triggering the use of the conscious thought needed to figure out what course of action will in fact conduce to the person's well-being.

People who aren't good with numbers respond to these same situations in an affectively flat way, and as a result don't bother to engage them systematically.

This is evidence that the two processes are not discrete and hierarchical but rather are integrated and mutually supportive.  Greater capacity for systematic (okay, okay, "system 2"!) reasoning over time calibrates heuristic or affective processes (system 1), which thereafter, unconsciously but reliably, turns on systematic reasoning.

So: if you had been thinking or talking as if  System 1 equaled "bias" and System 2 "unbiased, rational," please just stop now.

Indeed, to help you stop, I will use a strategy founded in the original dual process work.

As I indicated, believing that consciousness leaps into being without any contribution of unconsciousness is just incoherent. It is like believing in "spontaneous generation."  

Because the idea that System 2 reasoning can correct unconscious bias without the prior assistance of unconscious, system 1 reasoning is illogical, I propose to call this view "System 2 ab initio bias.”

The effort it will take, systematically, to figure out why this is an appropriate thing for someone to accuse you of if you make this error will calibrate your emotions: you'll come to be a bit miffed when you see examples; and you'll develop a distinctive (heuristic) aversion to becoming someone who makes this mistake and gets stigmatized with a humiliating label.

And voila! -- you'll be as smart (not really; but even half would be great!) as Shelly Chaiken, Ellen Peters, et al. in no time!

References:

Chaiken, S. & Maheswaran, D. Heuristic Processing Can Bias Systematic Processing - Effects of Source Credibility, Argument Ambiguity, and Task Importance on Attitude Judgment. Journal of Personality and Social Psychology 66, 460-473 (1994).

Chaiken, S. & Trope, Y. Dual-process theories in social psychology, (Guilford Press, New York, 1999).

Chen, S., Duckworth, K. & Chaiken, S. Motivated Heuristic and Systematic Processing. Psychol Inq 10, 44-49 (1999).

Balcetis, E. & Dunning, D. See What You Want to See: Motivational Influences on Visual Perception. Journal of Personality and Social Psychology 91, 612-625 (2006).

Giner-Sorolla, R. & Chaiken, S. Selective Use of Heuristic and Systematic Processing Under Defense Motivation. Pers Soc Psychol B 23, 84-97 (1997).

Hsee, C.K. Elastic Justification: How Unjustifiable Factors Influence Judgments. Organ Behav Hum Dec 66, 122-129 (1996).

Kahan, D.M. The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law Harv. L. Rev. 126, 1 (2011). 

Kahan, D.M., Wittlin, M., Peters, E., Slovic, P., Ouellette L.L., Braman, D., Mandel, G. The Tragedy of the Risk-Perception Commons: Culture Conflict, Rationality Conflict, and Climate Change. CCP Working Paper No. 89 (June 24, 2011).

Kahneman, D. Thinking, fast and slow, (Farrar, Straus and Giroux, New York, 2011).

Kahneman, D. Maps of Bounded Rationality: Psychology for Behavioral Economics. Am Econ Rev 93, 1449-1475 (2003).

Kunda, Z. The Case for Motivated Reasoning. Psychological Bulletin 108, 480-498 (1990).

Peters, E., et al. Numeracy and Decision Making. Psychol Sci 17, 407-413 (2006).

Peters, E., Slovic, P. & Gregory, R. The role of affect in the WTA/WTP disparity. Journal of Behavioral Decision Making 16, 309-330 (2003).

 

Tuesday
Jan 31, 2012

The Goldilocks "theory" of public opinion on climate change

We often are told that "dire news" on climate change provokes dissonance-driven resistance.

Yet many commentators who credit this account also warn us not to raise public hopes by even engaging in research on -- much less discussion of -- the feasibility of geoengineering. These analysts worry that any intimation that there's a technological "fix" for global warming will lull the public into a sense of false security, dissipating political resolve to clamp down on CO2 emissions.

So one might infer that what's needed is a "Goldilocks strategy" of science communication -- one that conveys neither too much alarm nor too little but instead evokes just the right mix of fear and hope to coax the democratic process into rational engagement with the facts.

Or one might infer that what's needed is a better theory--or simply a real theory--of public opinion on climate change.

Here's a possibility: individuals form perceptions of risk that reflect their cultural commitments.

Here's what that theory implies about "dire" and "hopeful" information on climate change: what impact it has will be conditional on what response -- fear or hope, reasoned consideration or dismissiveness-- best expresses the particular cultural commitments individuals happen to have.

And finally, here's some evidence from an actual empirical study, conducted with both US & UK samples, to test this conjecture:

  • When individuals are furnished with a "dire" message -- that substantial reductions in CO2 emissions are essential to avert catastrophic effects for the environment and human well-being -- they don't react uniformly.  Hierarchical individualists, who have strong pro-commerce and pro-technology values, do become more dismissive of scientific evidence relating to climate change. However, egalitarian communitarians, who view commerce and industry as sources of unjust social disparities, react to the same information by crediting that evidence even more forcefully.
     
  • Likewise, individuals don't react uniformly when furnished "hopeful" information about the contribution that geoengineering might make to mitigating the consequences of climate change. Egalitarian communitarians — the ones who ordinarily are most worried — do become less inclined to credit scientific information that climate change is a serious problem. But when given the same information about geoengineering, the normally skeptical hierarchical individualists respond by crediting such scientific information more.

Am I saying that this account is conclusively established & unassailably right, that everything else one might say in addition or instead is wrong, and that therefore this, that, or the other thing ineluctably follows about what to do and how to do it? No, at least not at the moment.

The only point, for now, is about Goldilocks. When you see her, watch out.

Decision science has supplied us with a rich inventory of mechanisms. Afforded complete freedom to pick and choose among them,  any analyst with even a modicum of imagination can explain pretty much any observed pattern in risk perception however he or she chooses and thus invest whatever communication strategy strikes his or her fancy with a patina of "empirical" support.

One of the ways to prevent being taken in by this type of faux explanation is to be very skeptical about Goldilocks. Her appearance -- the need to engage in ad hoc "fine tuning" to fit a theory to seemingly disparate observations -- is usually a sign that someone doesn't actually have a valid theory and is instead abusing decision science by mining it for tropes to construct just-so stories motivated (consciously or otherwise) by some extrinsic commitment.

The account I gave of how members of the public react to information about climate change risks didn't involve adjusting one dial up and another down to try to account for multiple off-setting effects.

That's because it showed there really aren't offsetting effects here. There's only one: the crediting of  information in proportion to its congeniality to cultural predispositions. 

The account is open to empirical challenge, certainly.  But that's exactly the problem with Goldilocks theorizing: with it anything can be explained, and thus no conclusion deduced from it can be refuted.

 

Monday
Jan 30, 2012

More politics, pepper spray & cognition 

Over at Volokh Conspiracy there's an interactive poll that lets readers watch a video of the police tasing a D.C. Occupy protestor & then indicate whether the police were acting appropriately. The comments are a great demonstration of how people with different ideological predispositions will actually see different things in a situation like this, a recurring phenomenon in the reactions to use of force by police against Occupy protestors. I'm pretty sure the author of the post -- Orin Kerr, who'd be a refutation of the phenomenon of ideologically motivated reasoning if he weren't a mere "N of 1" -- designed the post to make readers see that with their own eyes regardless of what they "saw with their own eyes" in the video. Nice.

Friday
Jan 27, 2012

Hey, again, Chris Mooney...

Hi, Chris.

Your response was very thoughtful -- and educational! the connection to Haidt's moral psychology research added an important dimension -- as always. Thanks!

As you can see, in "Hey Chris Mooney ...," I didn't actually have in mind the project to advance the science of science communication.

I also didn't -- don't -- have in mind the "framing of science" as a communication strategy aimed at promoting support for enlightened policies, better democratic deliberations, etc., as valuable as those things might be.

I have in mind the idea that enjoyment of the wonder, as well as the wisdom, of scientific knowledge should be viewed as a good that a Liberal society enables all its citizens readily to enjoy without regard to their moral or cultural or ideological or religious orientations.

I think our Liberal society isn't doing this as well as it should. 

I'm pretty sure that it is a lot easier to build into one's life the thrill of seeing our species resolve the mysteries of nature (inevitably revealing even more astonishing mysteries) if one has a particular set of cultural commitments (ones I have, in fact) than if one has a very different set.  

The reason, in my view, is not that there is something antagonistic to science in the latter set of commitments.

Rather, it is that the content of the information that science communicators are conveying (with tremendous craft; some people are happy to be alive in the age of the microwave oven or on-demand movies; I am glad to be here when it is possible to get continuous streams of great science reporting from sources like ScienceNow, Not Exactly Rocket Science, Dot Earth, etc.) tends to be embedded in cultural meanings that fit one outlook much better than another.

That's why I mentioned the "hypothetical citizen" (who is not hypothetical) who wants science to show him or her all the miraculous devices in God's workshop. He or she gets just as much of a thrill in getting to know something about how much our species knows as I do, but doesn't get to experience it nearly as readily or as easily. 

And that bothers me. It bothers me a bit because it might well be contributing to the pathology that is attacking the discussion of climate change in our society. But more, it just bothers me because I think that that's just not the way things should be in a good society.
 

For sure, the science of science communication is a source of insight on how to deal with this problem.

But if the Liberal Republic of Science is suffering from this sort of imperfection (I truly think it is; do you feel otherwise?), then it is science journalists and related professionals (e.g., science documentary producers) who will have to remedy it -- by including attention to this goal in their shared sense of mission, and by using all the knowledge they can gather from all sources (including their own practical experimentation) to carry it out.

Thursday
Jan 26, 2012

Hey, Chris Mooney ... (or the Liberal Republic of Science project)

Hi, Chris.

You've been telling us a lot recently about the differences in how "liberals" and "conservatives" think (and admitting, very candidly and informatively, that whether they really do and what significance that might have are complicated and unresolved issues). You have a book coming out, The Republican Brain. I look forward to reading it. I really do.

But I have a question I want to ask you. Or really, I have a thought, a feeling, that I want to share, and get your reaction to.

Imagine someone (someone very different from you; very different from me)-- a conservative Republican, as it turns out--who says: "Science is so cool -- it shows us the amazing things God has constructed in his cosmic workshop!"

Forget what percentage of the people with his or her cultural outlooks (or ideology) feel the way that this particular individual does about science (likely it is not large; but likely the percentage of those with a very different outlook -- more secular, egalitarian, liberal -- who have this passionate curiosity to know how nature works is small too. Most of my friends don't--hey, to each his own, we Liberals say!).

My question is do you (& not just you, Chris Mooney; we--people who share our cultural outlooks, worldview, "ideology") know how to talk to this person? Talk to him or her about climate change, or about whether his or her daughter should get the HPV vaccine? Or even about, say, how chlorophyll makes use of quantum mechanical dynamics to convert sunlight into energy? I think what "God did in his/her workshop" there would blow this person's mind (blows mine).

Like I said, I look forward to reading The Republican Brain.

But there's another project out there -- let's call it the Liberal Republic of Science Project -- that is concerned to figure out how to make both the wisdom and the wonder of science as available, understandable, and simply enjoyable to citizens of all cultural outlooks (or ideological "brain types") as possible.

The project isn't doing so well. It desperately needs the assistance of people who are really talented in communicating science to the public.

I think it deserves that assistance.  

Wouldn't you agree?

Thursday
Jan 26, 2012

Efforts at promoting healthier diet undermined by mixed messaging?

Forks Over Knives is one of several recent films concerned with the so-called 'obesity epidemic' and urging dietary reform. (See also Killer At Large; Food, Inc.; Planeat.) These films are attempting to convey an important message; however, I am concerned that their persuasive tactics – namely, condemning national industry and linking obesity to global warming – run the risk of culturally polarizing healthier eating, a seemingly secular, universally appealing value. The films start out with important, on-point information establishing the 'obesity epidemic' as a significant public health issue: one third of adults and 17% of children are obese, and one third of children are overweight, resulting in high blood pressure, high cholesterol, and early-onset diabetes. Obesity-associated high cholesterol, diabetes, cardiovascular disease and stroke (two of the leading causes of death) contribute significantly to the U.S.'s extraordinarily high per capita cost of health care, according to the CDC. The films then present evidence that diets high in cholesterol from animal products, saturated fat, and sugar likely cause obesity and associated health risks, and suggest dietary reform.

But instead of staying on this narrow message – eat healthier to avoid these health risks – they take the argument further. Here’s where they risk undermining receptiveness to their main message by unnecessarily making two culturally polarizing arguments: (a) they take a strong anti-industry bent – urging we repudiate the exploitative national food industry (and switch to local farming, or raw vegan diets, etc.), and (b) they link obesity to global warming. The films argue: ‘Not only should you reform diet to promote your own health, but you should change your diet in order to thwart the exploitative national food industry and save the planet from global warming.’ These films are not alone in connecting obesity to global warming. (See also, e.g., CNN; ABC; U.K. medical journal The Lancet; and Nature, Global Warming: Is Weight Loss a Solution?); one recent article even uses the tagline “obesity is the new global warming.” 

By infusing messages about healthier diet with demands to repudiate the national food industry and threats of global warming, these films seem to unnecessarily tie healthy eating to culturally polarizing issues. The call for healthier dieting urges reduced consumption of beef and dairy products – a deeply rooted American industrial and cultural tradition. This threat to beef and dairy, when joined with arguments to revolutionize the national food industry and stop global warming, unnecessarily implicates and threatens the entire traditional American industrial way of life (meat & potatoes) associated with dominance and masculinity – trucks, farms, factories, steaks and burgers. It seems that this connection – reform your diet in order to stop exploitative national industry and avert global warming – might make the idea of dietary reform particularly threatening to hierarchical values. This might induce biased processing, or cause some audience members to discredit (out of cultural defensiveness) evidence on the risks of over-consumption of animal-product cholesterol, saturated fat, and sugar – thus generating culturally protective resistance to dietary reform that promotes the seemingly secular, universal values of health and longevity. One commentator writing about Forks Over Knives, otherwise receptive to the film's message about dietary reform, captures this problem: "[T]he documentary just may be the Inconvenient Truth of the digestive system… My problem with the documentary is where it crosses into puritanical proselytizing about the value of a vegan lifestyle. Here food becomes something unappetizingly pragmatic, and elements of what eating means to a society – from cultural to religious to familial – are downplayed."

There has been great resistance from parents to improving school lunch programs, which are loaded with fatty, high-cholesterol, and sugary ingredients that have been linked to obesity and associated health problems. Resistance persists even when the schools are shown they can produce healthy lunches for the same cost, without much structural change. Certainly there is institutional and industry resistance to change, but I wonder whether part of parental resistance (i.e., parents insisting that french fries be served at least three times a week) is a defensive response to dietary reform perceived as a cultural threat. Messages aiming to encourage healthier eating should be careful to avoid the implication that healthier dieting requires rejecting an entire lifestyle as American as, well, McDonald's drive-thru windows and apple pie.

Wednesday
Jan252012

Is cultural cognition a bummer? Part 2

This is the second of two posts addressing “too pessimistic, so wrong”: the proposition that findings relating to cultural cognition should be resisted because they imply that it’s “futile” to reason with people.

In part one, I showed that “too pessimistic, so wrong”—in addition to being simultaneously fallacious and self-refuting (that’s actually pretty cool, if you think about it)—reflects a truncated familiarity with cultural cognition research. Studies of cultural cognition examine not only how it can interfere with open-minded consideration of scientific information but also what can be done to counteract this effect and generate open-minded evaluation of evidence that is critical of one’s existing beliefs.

Now I’ll identify another thing that “too pessimistic, so wrong” doesn't get: the contours of the contemporary normative and political debate over risk regulation and democracy.

2.  "Too pessimistic, so wrong" is innocent of the real debate about reason and risk regulation.

Those who make the “too pessimistic, must be wrong” argument are partisans of reason (nothing wrong with that). But ironically, by “refusing to accept” cultural cognition, these commentators are actually throwing away one of the few psychologically realistic programs for harmonizing self-government with scientifically enlightened regulation of risk.

The dominant view of risk regulation in social psychology, behavioral economics, and legal scholarship asserts that members of the public are too irrational to figure out what dangers society faces and how effectively to abate them. They don't know enough science; they have to use emotional heuristic substitutes for technical reasoning. They are dumb, dumb, dumb.

Well, if that is right, democracy is sunk. We can't make the median citizen into a climate scientist or a nuclear physicist. So either we govern ourselves and die from our stupidity; or, as many influential commentators in the academy (one day) and government (the next) argue, we hand over power to super smart politically insulated experts to protect us from myriad dangers.

Cultural cognition is an alternative to this position. It suggests a different diagnosis of the science communication crisis, and also a feasible cure that makes enlightened self-government a psychologically realistic prospect.

Cultural cognition implies that political conflicts over policy-relevant science occur when the questions of fact to which that evidence speaks become infused with antagonistic cultural meanings.

This is a pathological state—both in the sense that it is inimical to societal well-being and in the sense that it is unusual, not the norm, rare.  

The problem, according to the cultural cognition diagnosis, is not that people lack reason. It is that the reasoning capacity that normally helps them to converge on the best available information at society’s disposal is being disabled by a distinctive pathology in science communication.

The number of scientific insights that make our lives better and that don’t culturally polarize us is orders of magnitude greater than the ones that do. There’s not a “culture war” over going to doctors when we are sick and following their advice to take antibiotics when they figure out we have infections. Individualists aren’t throttling egalitarians over whether it makes sense to pasteurize milk or whether high-voltage power lines are causing children to die of leukemia.

People (the vast majority of them) form the right beliefs on these and countless issues, moreover, not because they “understand the science” involved but because they are enmeshed in networks of trust and authority that certify whom to believe about what.

For sure, people with different cultural identities don’t rely on the same certification networks. But in the vast run of cases, those distinct cultural certifiers do converge on the best available information. Cultural communities that didn’t possess mechanisms for enabling their members to recognize the best information—ones that consistently made them distrust those who do know something about how the world works and trust those who don’t—just wouldn’t last very long: their adherents would end up dead.

Rational democratic deliberation about policy-relevant science, then, doesn’t require that people become experts on risk. It requires only that our society take the steps necessary to protect its science communication environment from a distinctive pathology that prevents ordinary citizens from using their (ordinarily) reliable ability to discern what it is that experts know.

“Only” that? But how?

Well, that’s something cultural cognition addresses too — in the studies that “too pessimistic, so wrong” ignores and that I described in part one.

Don’t get me wrong: the program to devise strategies for protecting the science communication environment has a long way to go.

But we won’t even make one step toward perfecting the science of science communication if we resolve to “resist” evidence because we find its implications to be a bummer.


Saturday
Jan212012

R^2 ("r squared") envy

Am at a conference & a (perfectly nice & really smart) guy in the audience warns everyone not to take social psychology data on risk perception too seriously: "some of the studies have R2's of only 0.15...."

Oy.... Where to start? Well, how about with this: the R2 for viagra effectiveness versus placebo ... 0.14!

R2 is the "percentage of the variance explained" by a statistical model. I'm sure this guy at the conference knew what he was talking about, but arguments about whether a study's R2 is "big enough" are an annoying, and annoyingly common, distraction. 
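For anyone who wants to see the nuts & bolts, here's a minimal sketch (toy data I made up -- nothing from any actual study) of what R2 is computed from: the share of outcome variance a fitted model accounts for.

```python
import numpy as np

# Toy illustration of R^2 = 1 - SS_res / SS_tot.
# The data are simulated: a real but noisy linear relationship.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.4 * x + rng.normal(size=10_000)

slope, intercept = np.polyfit(x, y, 1)   # fit a simple linear model
y_hat = slope * x + intercept

ss_res = np.sum((y - y_hat) ** 2)        # variation the model leaves unexplained
ss_tot = np.sum((y - y.mean()) ** 2)     # total variation in the outcome
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 2))               # ~0.14 here, despite a perfectly real effect
```

The point isn't the particular number; it's that R2 is just a ratio of explained to total variation -- not a grade out of 100.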

Remarkably, the mistakes -- the conceptual misunderstandings, really -- associated with R2 fixation were articulated very clearly and authoritatively decades ago, by scholars who were then or who have since become giants in the field of empirical methods.

I'll summarize the nub of the mistake associated with R2 fixation, but it is worth noting that the durability of it suggests more than a lack of information is at work; there's some sort of congeniality between R2 fixation and a way of seeing the world or doing research or defending turf or dealing with anxiety/inferiority complexes or something... Be interesting for someone to figure out what's going on.

But anyway, two points:

1.  R2 is an effect size measure, not a grade on an exam with a top score of 100%. We see a world that is filled with seeming randomness. Any time you make it less random -- make part of it explainable to some appreciable extent by identifying some systematic process inside it -- good! R2 is one way of characterizing how big a chunk of randomness you have vanquished (or have vanquished, assuming your model is otherwise valid -- something the size of R2 has nothing to do with). But the difference between it & 1.0 is neither here nor there -- or in any case, it has nothing to do with whether you in fact know something or with how important what you know is.

2. The "how important what you know is" question is related to R2 but the relationship is not revealed by subtracting Rfrom 1.0. Indeed, there is no abstract formula for figuring out "how big" R2 has to be before the effect it mesaures is important. Has extracting that much order from randomness done anything to help you with the goal that motivated you to collect data in the first place? The answer to that question is always contextual. But in many contexts, "a little is a lot," as Abelson says. Hey: if you can remove 14% of the variance in sexual performance/enjoyment of men by giving them viagra, that is a very practical effect! Got a headache? Take some ibuprofen (R2 = 0.02).

What about in a social psychology study? Well, in our experimental examination of how cultural cognition shaped perceptions of the behavior of political protestors, the R2 for the statistical analysis was 0.19. To see the practical importance of an effect size that big in this context, one can compare the percentage of subjects identified by one or another set of cultural values who saw "shoving," "blocking," etc., across the experimental conditions.

If, say, 75% of egalitarian individualists in the abortion-clinic condition but only 33% of them in the military-recruitment center condition thought the protestors were physically intimidating pedestrians; and if only 25% of hierarchical communitarians in the abortion-clinic condition but 60% of them in the recruitment-center condition saw a protestor "screaming in the face" of a pedestrian -- is my 0.19 R2 big enough to matter? I think so; how about you?
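If you want to check the arithmetic, here's a sketch using those made-up percentages (and assuming equal numbers of subjects in each cell): even a model that predicted each cell's rate perfectly would post an R2 of only about 0.16, because the leftover person-to-person Bernoulli variation within each cell counts as "unexplained."

```python
# Hypothetical cell rates from the paragraph above (illustrative only).
groups = {
    ("egalitarian-individualist", "abortion clinic"): 0.75,
    ("egalitarian-individualist", "recruitment center"): 0.33,
    ("hierarchical-communitarian", "abortion clinic"): 0.25,
    ("hierarchical-communitarian", "recruitment center"): 0.60,
}

p = list(groups.values())
grand_mean = sum(p) / len(p)

# A saturated model predicts each cell's own rate, so it captures the
# between-cell variance; the within-cell Bernoulli variance p(1-p) is
# irreducible noise for a binary outcome.
between = sum((pi - grand_mean) ** 2 for pi in p) / len(p)
within = sum(pi * (1 - pi) for pi in p) / len(p)

r_squared = between / (between + within)
print(round(r_squared, 2))  # ~0.16: a "small" R2, huge practical differences
```

Big, practically meaningful gaps in who sees what can coexist with an R2 that looks puny to someone suffering from R2 fixation.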

There are cases, too, where a "lot" is pretty useless -- indeed, models that have notably high R2's are often filled with predictors the effects of which are completely untheorized and that add nothing to our knowledge of how the world works or of how to make it work better.

Bottom line: It's not how big your R2 is; it's what you (and others) can do with it that counts! 

Reference: Meyer, G.J., et al. Psychological testing and psychological assessment: A review of evidence and issues. Am Psychol 56, 128-165 (2001).

 

Friday
Jan202012

Is cultural cognition a bummer? Part 1

Now & again I encounter the claim (often in lecture Q&A, but sometimes in print) that cultural cognition is wrong because it is too pessimistic. Basically, the argument goes like this:

Cultural cognition holds that individuals fit their risk perceptions to their group identities. That implies it is impossible to persuade anybody to change their minds on climate change and other issues—that even trying to reason with people is futile. I refuse to accept such a bleak picture. Instead, I think the real problem is [fill in blank—usually things like “science illiteracy,” “failure of scientists to admit uncertainty,” “bad science journalism,” “special interests distorting the truth”]

What’s wrong here?

Well, to start, there’s the self-imploding logical fallacy. It is a non sequitur to argue that because one doesn’t like the consequences of some empirical finding it must be wrong. And if what someone doesn’t like—and therefore insists “can’t be right”— is empirical research demonstrating the impact of a species of motivated reasoning, that just helps to prove the truth of exactly what such a person is denying.

Less amusingly and more disappointingly, the “too pessimistic, must be wrong“ fallacy suggests that the person responding this way is missing the bigger picture. In fact, he or she is missing two bigger pictures:

  • First, the “too pessimistic, so wrong” fallacy is looking only at half the empirical evidence: studies of cultural cognition show not only which communication strategies fail and why but also which ones avoid the identified mistake and thus work better.
     
  • Second, the “too pessimistic, so wrong” fallacy doesn’t recognize where cultural cognition fits into a larger debate about risk, rationality, and self-government. In fact, cultural cognition is an alternative—arguably the only psychologically realistic one—to an influential theory of risk perception that explicitly does assert the impossibility of reasoned democratic deliberation about the dangers we face and how to mitigate them.

I’m going to develop these points over the course of two posts.

  1. Cultural cognition theory doesn’t deny the possibility of reasoned engagement with evidence; it identifies how to remove a major impediment to it.

People have a stake in protecting the social status of their cultural groups and their own standing in them. As a result, they defensively resist—close their minds to consideration of—evidence of risk that is presented in a way that threatens their groups’ defining commitments.

But this process can be reversed. When information is presented in a way that affirms rather than threatens their group identities, people will engage open-mindedly with evidence that challenges their existing beliefs on issues associated with their cultural groups.

Not only have I and other cultural cognition researchers made this point (over & over; every time, in fact, we turn to normative implications of our work), we’ve presented empirical evidence to back it up.

Consider:

Identity-affirmative & narrative framing. The basic idea here is that if you want someone to consider the evidence that there's a problem, show the person that there are solutions that resonate with his or her cultural values.

E.g., individualists value markets, commerce, and private orderings. They are thus motivated to resist information about climate change because they perceive (unconsciously) that such information, if credited, will warrant restrictions on commerce and industry.

But individualists love technology. For example, they are among the tiny fraction of the US population that knows what nanotechnology is, and when they learn about it they instantly think its benefits are high & its risks low. (When egalitarian communitarians—who readily credit climate change science—learn about nanotechnology, in contrast, they instantly think its risks outweigh its benefits; they adopt the same posture toward it that they adopt toward nuclear power. An aside, but only someone looking at half the picture could conclude that any position on climate change correlates with being either “pro-“ or “anti-science” generally.)

So one way to make individualists react more open-mindedly to climate change science is to make it clear to them that more technology—and not just restrictions on it—is among the potential responses to climate change risks. In one study, e.g., we found that individualists are more likely to credit information of the sort that appeared in the first IPCC report when they are told that greater use of nuclear power is one way to reduce reliance on greenhouse-gas-emitting carbon fuel sources.

More recently, in a study we conducted on both US & UK samples, we found that making people aware of geoengineering as a possible solution to climate change reduced cultural polarization over the validity of scientific evidence on the consequences of climate change. The individuals whose values disposed them to dismiss a study showing that CO2 emissions dissipate much more slowly than previously thought became more willing to credit it when they had been given information about geoengineering & not just emission controls as a solution.

These are identity-affirmation framing experiments. But the idea of narrative is at work in this too. Michael Jones has done research on the use of "narrative framing" -- basically, embedding information in culturally congenial narratives -- as a way to ease culturally motivated defensive resistance to climate change science. Great stuff.

Well, one compelling individualist narrative features the use of human ingenuity to help offset environmental limits on growth, wealth production, markets & the like. Only dumb species crash when they hit the top of Malthus's curve; smart humans, history shows, shift the curve.

That's the cultural meaning of both nuclear power and geoengineering. The contribution they might make to mitigating climate change risks makes it possible to embed evidence that climate change is happening and is dangerous in a story that affirms rather than threatens individualists’ values. Hey—if you really want to get them to perk their ears up, how about some really cool nanotechnology geoengineering?

Identity vouching. If you want to get people to give open-minded consideration to evidence that threatens their values, it also helps to find a communicator who they recognize shares their outlook on life.

For evidence, consider a study we did on HPV-vaccine risk perceptions. In it we found that individuals with competing values have opposing cultural predispositions on this issue. When such people are shown scientific information on HPV-vaccine risks and benefits, moreover, they tend to become even more polarized as a result of their biased assessments of it.

But we also found that when the information is attributed to debating experts, the position people take depends heavily on the fit between their own values and the ones they perceive the experts to have.

This dynamic can aggravate polarization when people are bombarded with images that reinforce the view that the position they are predisposed to accept is espoused by experts who share their identities and denied by ones who hold opposing ones (consider climate change).

But it can also mitigate polarization: when individuals see evidence they are predisposed to reject being presented by someone whose values they perceive they share, they listen attentively to that evidence and are more likely to form views that are in accord with it.

Look: people aren’t stupid. They know they can’t resolve difficult empirical issues (on climate change, on HPV-vaccine risks, on nuclear power, on gun control, etc.) on their own, so they do the smart thing: they seek out the views of experts whom they trust to help them figure out what the evidence is. But the experts they are most likely to trust, not surprisingly, are the ones who share their values.

What makes me feel bleak about the prospects of reason isn’t anything we find in our studies; it is how often risk communicators fail to recruit culturally diverse messengers when they are trying to communicate sound science.

I refuse to accept that they can’t do better!

Part 2 here.

References:

Jones, M.D. & McBeth, M.K. A Narrative Policy Framework: Clear Enough to Be Wrong? Policy Studies Journal 38, 329-353 (2010).

Kahan, D. (2010). Fixing the Communications Failure. Nature, 463, 296-297.

Kahan, D. M., Braman, D., Cohen, G. L., Gastil, J., & Slovic, P. (2010). Who Fears the HPV Vaccine, Who Doesn't, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law & Human Behavior, 34, 501-516.

Kahan, D. M., Braman, D., Slovic, P., Gastil, J., & Cohen, G. (2009). Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology, 4, 87-91.

Kahan, D. M., Slovic, P., Braman, D., & Gastil, J. (2006). Fear of Democracy: A Cultural Critique of Sunstein on Risk. Harvard Law Review, 119, 1071-1109.

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) (Springer London, 2012).

Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C., & Braman, D., Geoengineering and the Science Communication Environment: A Cross-cultural Study, CCP Working Paper No. 92, Jan. 9, 2012.

Sherman, D.K. & Cohen, G.L. Accepting threatening information: Self-affirmation and the reduction of defensive biases. Current Directions in Psychological Science 11, 119-123 (2002).

Sherman, D.K. & Cohen, G.L. The psychology of self-defense: Self-affirmation theory. in Advances in Experimental Social Psychology, Vol. 38 (ed. Zanna, M.P.) 183-242 (2006).

 

Saturday
Jan142012

Handbook of Risk Theory

Really really great anthology:

Roeser, S., Hillerbrand, R., Sandin, P. & Peterson, M. Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk, (Springer London, Limited, 2012).

It's edited by Sabine Roeser, who herself has done great work to integrate the empirical study of emotion and risk with a sophisticated philosophical appreciation of their significance.

Too bad the set costs so darn much! Guess Springer figures only university libraries will want to buy it (wrong!), but even they aren't made of cash!