
Recent blog entries
Wednesday
May 30, 2012

Who has a better comprehension of science--"skeptics" or "nonskeptics"?

Neither, as far as I can tell.

This wasn’t a question we tried to answer directly or reported data on in our Nature Climate Change paper.

But I have been asked a few times now about a Fox News report on our study that states that those who are less concerned about climate change scored “57%” and those who are more concerned “56%” in our measure of science comprehension.

I am guessing the reporter derived the conclusion from this graphic, which is one I produced and circulated to people, including the reporter, in response to questions about a working paper that reported data from the study ultimately published in NCC.

It shows the mean or average number of correct responses on the combined science literacy/numeracy scale (a measure of "science comprehension,” essentially) for study subjects whose responses put them in the top 50% & bottom 50% of the sample on "climate change risk perceptions," respectively.

The bottom 50% got, on average, 12.6 out of 22 correct. The top 50% got 12.3.

The "56%" & "57%" figures are not in the Figure--or in anything else related to our study. But they are the numbers one gets when one divides 12.3 & 12.6 by 22, respectively.  

As can be seen, this difference is not statistically significant. Not even close. Indeed, I put the graphic together so that I could answer the stock "who knows more" query -- I call it the "yeah, but whose is bigger" question -- by saying "no one, see!"

If there are people out there (apparently there are; I'm getting lots of email...) who think this is meaningful evidence that one side knows more than the other about science, they really are missing the point. In fact, they are making the kind of mistake that helps explain how it is that the "smarter" half of the population gets a score of 57% on a measure like this.

The gap between those who know more science and those who know less doesn't explain conflict over climate change science in our society.

But it's beyond question that the low average state of science literacy is a condition that detracts from our capacity for enlightened self-government.

Monday
May 28, 2012

"How confident should we be ..."

A thoughtful journalist asks in relation to our Nature Climate Change study:

It would be really helpful to get your reflection on the research. In particular, I'm interested in the polarising effect you were able to identify. From the figure (Fig. 2) this appears to be quite subtle, albeit in the opposite direction to that which was predicted by the SCT thesis. It would be great if you could identify to what extent/how confident we can be to say that increasing numeracy and literacy polarises risk perception about climate change, and what can explain this polarisation.

This was such a thoughtful way of putting the question, I felt impelled -- only in part by OCD; one shouldn't ask a good question if one wants an imprecise, casual response -- to give a reasonably precise & detailed answer:

1. All study results are provisional. That's in the nature of science. Valid studies give you more evidence than you otherwise would have had to believe something. They never "settle" the issue; one continues to revise one's assessment of what to treat as true, and how likely it is to be wrong, as more valid studies, more valid evidence, accumulate. Forever & ever (Popper 1962).

So it is never sensible (it is a misunderstanding of the nature of empirical proof) to say, "this study proves this" or "this study doesn't necessarily prove that" etc. Instead it is very sensible to ask, as you have, "how confident should we be" in a particular conclusion given the evidence presented in a particular study.
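One crude way to formalize that updating logic is Bayesian: each valid study supplies a likelihood ratio that revises, but never "settles," one's odds on a hypothesis. A minimal sketch, with purely illustrative numbers not drawn from any study:

```python
# Bayesian updating: posterior odds = prior odds x likelihood ratio.
# Three hypothetical studies, each modestly favoring the hypothesis.
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds after one piece of evidence."""
    return prior_odds * likelihood_ratio

odds = 1.0                  # start indifferent: 1:1 odds
for lr in [2.0, 1.5, 3.0]:  # illustrative likelihood ratios
    odds = update_odds(odds, lr)

print(f"posterior probability = {odds / (1 + odds):.2f}")  # -> 0.90
```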

2. As you know, our study investigated two hypotheses: the science comprehension thesis (SCT), which attributes public conflict over climate change to deficits in science comprehension; and the cultural cognition thesis (CCT), which asserts that conflict over climate change is a consequence of the unconscious tendency of individuals to fit their beliefs about risk to positions that dominate in their group, and which in its strongest form would say that this tendency will be reinforced or magnified by greater science comprehension, which can be used to promote such fitting.

3. The study furnishes relatively strong evidence that SCT is incorrect. SCT would predict that cultural polarization abates as science comprehension increases. Even if we had found that the impact of science comprehension on cultural polarization was nil, the study would supply the basis for a high degree of confidence that public conflict over climate change is not a consequence of low science comprehension.

4. The study is consistent with CCT and furnishes modest evidence that CCT in its strongest form is correct. That position would predict that cultural polarization will be greater among individuals with the greatest science comprehension. The results fit that hypothesis -- on both climate change & nuclear power risks; the latter helps to furnish more reason to think that the effect is a genuine one for climate change.

But I'd say only modest evidence, mainly because of the design of the study. It's observational -- correlational -- only. Observed correlations that fit a hypothesis supply supporting evidence only in proportion to the degree to which they rule out other explanations. Maybe something else is going on that causes both increased science comprehension & increased polarization in certain people. The only way to tell is through (well designed) experiments. We are conducting some now.

5. You note the effect size of the interaction is modest. Maybe; it's hard to know how to characterize such things in the abstract (and realize, too, that polarization is so great even for low-comprehending respondents that it would be hard for it to grow much for high-comprehending respondents!).

The size of the interaction effect we observed is probably about what you would expect for an observational study, and if the source of the effect is CCT, it should be easy to produce much more dramatic effects through properly designed experiments (Cohen, Cohen, West & Aiken 2003, pp. 297-98). So rather than try to extract more information from the effect size about how confident or not to be in the strong CCT position, it makes sense to do experiments. Again, that's what we are now doing.
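For readers who want to see the mechanics, here is a minimal sketch of how an interaction of this kind gets estimated -- simulated data and hypothetical variable names, not our study's data or code:

```python
# Minimal sketch: does the effect of cultural worldview on risk perception
# grow with science comprehension? Tested via an OLS interaction term.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1500
sci_comp = rng.normal(size=n)    # standardized science literacy/numeracy
worldview = rng.normal(size=n)   # standardized worldview (e.g., hierarchy)

# Build in a small interaction of the kind strong CCT predicts: the
# worldview effect (polarization) is larger at higher science comprehension.
risk = -0.4 * worldview - 0.1 * sci_comp * worldview + rng.normal(size=n)

X = sm.add_constant(np.column_stack([sci_comp, worldview,
                                     sci_comp * worldview]))
fit = sm.OLS(risk, X).fit()
print(fit.summary(xname=["const", "sci_comp", "worldview", "interaction"]))
# A reliably nonzero "interaction" coefficient is what "greater polarization
# among the most science-comprehending" looks like in regression form.
```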

6. By itself, then, the study furnishes only modest reason to be confident in CCT (in its strongest form) relative to other possibilities (one has to be able to identify such possibilities, of course, in order to have any reason to doubt CCT; I can think of possibilities, certainly). I myself am more than modestly confident -- but only because this study is not the only thing I count as evidence that (strong) CCT is correct.

7. An aside: Nothing in our study suggests that making people more science literate or numerate  causes  polarization. If CCT is correct, there is something about climate change (and certain other issues) that makes people try to maximize the fit between their beliefs and positions that predominate within their groups, which themselves are impelled into opposing stances on certain facts. That thing is the cause in the practical, normative sense. We should find it and get rid of it.

references:

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3rd ed.). Mahwah, N.J.: L. Erlbaum Associates.

Popper, K. R. (1962). Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Basic Books.

 

Sunday
May 27, 2012

Climate change polarization "fast and slow"

Our study on the effects of science literacy and numeracy on climate change risk perceptions is now out in Nature Climate Change. We find that individuals who display high comprehension of science (i.e., those who score higher in science literacy and numeracy) are in fact more culturally polarized than those who display low science comprehension.

I’ve commented before on how these data relate to the popular surmise that seeming public ambivalence toward evidence on climate change reflects the predominance of what Kahneman (in his outstanding book Thinking, Fast and Slow, among other places) calls “system 1” reasoning (emotional, unconscious, error-prone) on the part of members of the public.

Our findings don’t fit that popular hypothesis. On the contrary, they show that individuals disposed to use system 2—conscious, reflective, deductive—reasoning (a disposition measured by the numeracy scale) are even more culturally divided than those disposed to use system 1.

The interesting thing is that Kahneman himself recognized just last week that system 2 as well as system 1 might be implicated in climate change conflict.

In his Sackler Lecture (strongly recommended viewing) at the National Academy of Sciences' Science of Science Communication Colloquium (say that three times fast), Kahneman explicitly commented on the connection between his theory of dual process reasoning and cultural cognition.

He recognized that one would expect, consistent with System 1, that ordinary members of the public would fit their perceptions of climate change risk to emotional resonances, which themselves might vary systematically across persons with diverse values.

At the same time, however, Kahneman argued against assuming system 2 would sort this disagreement out. Often “system 2 is just the spokesperson for system 1,” he said. In other words, people are likely to recruit their systematic, “slow” reasoning skills when necessary to reach the conclusion they prefer and not rely only on “fast” heuristic ones.

The point of the study, in fact, was to pit two plausible alternative hypotheses about cultural cognition and dual process reasoning against one another.

One attributes the influence of cultural values on risk perception to system 1, viewing cultural cognition as essentially a heuristic substitute for the ability to comprehend complicated scientific evidence.  Our findings (including the absence of any overall connection between science literacy and climate change concern) undermine that view.

The other hypothesis views cultural cognition as a species of motivated reasoning that is as likely to shape system 2 as system 1. Our finding of increased polarization among the most science-comprehending members of our sample lends support to this position.

In the paper, we suggest that the alliance between cultural cognition and system 2 is actually perfectly rational at an individual level. Ordinary members of the public have a much bigger stake in forming views that match those of their peers on controversial issues than they do in getting the science right on climate change: making a mistake on the latter has zero impact on the risks they face (nothing they do as individual voters or consumers matters enough to make a difference), but screwing up the former can result in their being shunned by people whose emotional and material support they covet.

So everyone tries to fit the evidence to positions that predominate in his or her group. And those who know a lot of science and are good at technical reasoning do an even better job.

The result is a tragedy -- of the risk-perception commons -- and it occurs whether people reason “fast” or “slow.”

Still, once we have determined through systematic thought and actual evidence that system 1 alone is not to blame, we can then turn to identifying (again, through empirical testing; creative guessing is good only for hypotheses) what sorts of communication strategies might enable culturally diverse citizens to use their reasoning in a manner that benefits them all.


Thursday
May 24, 2012

I see "They Saw a Protest"

"They Saw a Protest": Cognitive Illiberalism and the Speech-Conduct Distinction just came out in the Stanford Law Review.

The article--which was a team effort involving me, David "Shining a Light" Hoffman, Danieli "I'll Have Another" Evans, Donald "Shotgun" Braman & Jeff "Bear Claw" Rachlinski--features an experiment that tests the impact of cultural cognition on perceptions of facts relevant to the line between "speech" and "conduct" under the First Amendment.

Experiment subjects were assigned to play the role of jurors in a case in which protestors are suing the police for breaking up their demonstration. The police, subjects were told, claim the protestors were threatening onlookers and blocking their access to a building. The protestors say they were just engaged in impassioned advocacy.

The parties agree that the key piece of evidence is a video of the protest. The subjects are instructed to watch the video and then report what they saw and determine whether it counts as "threatening," "intimidating" or "blocking" under a specified law.

The experimental manipulation involved the supposed nature of the protest. Half the subjects were told that the protestors are demonstrating against abortion rights in front of an abortion clinic. The other half were told that the protestors are objecting to the military's then-existing "Don't ask, don't tell" policy outside a college campus recruitment center.

Consistent with our hypotheses, we found that what subjects saw depended on whether the position the protestors were represented to be taking was congenial or hostile to the subjects' own cultural outlooks. Thus, egalitarian individualists disagree with hierarchical communitarians who are in the same experimental condition (either "abortion clinic" or "military recruitment center") and likewise disagree with other egalitarian individualists who are in the opposing experimental condition.

The disagreement, moreover, is over facts--like whether the protestors "screamed in the face" of pedestrians and blocked them from entering the clinic/recruitment center.

This is a problem for the First Amendment, which tries to impose an obligation of state neutrality by confining regulation of putative expression to harms that can be defined independently of any negative reaction people might have toward the speaker's ideas. People have a hard time applying this rule, we find, because they are unconsciously motivated to see these sorts of "noncommunicative harms" -- like threats, intimidation, blocking -- when behavior conveys an idea that offends their values.

The study was patterned on a classic 1950s study in social psychology entitled "They Saw a Game." In it, researchers found that students from two Ivy League colleges were more likely to see the penalty calls of a referee as correct or incorrect depending on whether the rule violation was being attributed to their college's football team or its opponent. This was probably the first experimental demonstration of "motivated reasoning."

The most fun part of doing the study was making the movie. We tried really hard but couldn't find any stock footage of demonstrations that could plausibly be described as either an abortion protest or a military recruitment center protest. People who engage in one tend to look very different from people who engage in the other.

Fortunately (for us), members of the infamous Westboro Church came to town (Cambridge, Massachusetts, in the winter of 2009). When they show up to preach hate against gays and lesbians, so do massive numbers of counterdemonstrators.  

We managed to cull quite a number of useable scenes from 90 minutes of footage, and were able to confirm in a pretest (of judges and lawyers!) that viewers would believe whichever of the stories we told them about what the demonstration was about, and where it occurred.

Then in an even greater stroke of luck, the U.S. Supreme Court granted review in a case in which the parents of a soldier at whose funeral the Church members demonstrated were awarded $5 million in damages. The Court overturned the verdict on the ground that the emotional distress of the parents was a noncognizable "communicative harm" under the First Amendment.

We were able to kick out a timely study result showing that if a state now passes a law prohibiting groups like the Westboro Church from "intimidating" funeral attendees, the jury's factual determinations will likely be unconsciously guided by the very sorts of things the Court said were not proper bases for damages in the Westboro case. Oh well!

Actually, our point is that it's not enough (maybe not even of any use) to have a doctrine that seems great as a matter of political philosophy if that doctrine imposes psychologically unrealistic demands on decisionmakers.

Constitutional law needs a dose of psychological realism. 

Monday
May 21, 2012

NAS says: Listen to the science of science communication

National Academy of Sciences President Ralph Cicerone (foreground) & Nobelist Daniel Kahneman during the Q&A that followed Kahneman's (outstanding) lecture.

This picture really captures it, I think.

The NAS's Science of Science Communication Sackler Colloquium is modeling what the practice of science & science-informed policymaking needs to do: start listening to the science of science communication, the foundational insights of which reflect the work of Kahneman (and Amos Tversky, Paul Slovic & Baruch Fischhoff, among others) on risk perception.

I feel very optimistic today!

 

Sunday
May 20, 2012

Protecting the science communication environment: sneak preview

 

Am embarking soon (was supposed to already; small travel misadventure) for NAS Science of Science Communication colloquium. Attached are slides that I'm sending my co-panelists & commentators (I think they'd like a text but I don't speak from one, or use notes, when doing a talk).

Probably will have to shrink it -- so maybe this is "director's cut" as well as "sneak peek."

 

But if you have time on your hands, tune in (my talk is Tues. @3:15; agenda for event here).

Thursday
May 17, 2012

The science of protecting the science communication environment

Am giving a talk on Tuesday at the NAS's Sackler Colloquium on the Science of Science Communication. Was asked to submit an "executive summary" for the benefit of commenters. This is it: 

The Science of Science Communication and Protecting the Science Communication Environment

Promoting public comprehension of science is only one aim of the science of science communication and is likely not the most important one for the well-being of a democratic society. Ordinary citizens form quadrillions of correct beliefs on matters that turn on complicated scientific principles they cannot even identify much less understand. The reason they fail to converge on beliefs consistent with scientific evidence on certain other consequential matters—from climate change to genetically modified foods to compulsory adolescent HPV vaccination—is not the failure of scientists or science communicators to speak clearly or the inability of ordinary citizens to understand what they are saying. Rather, the source of such conflict is the proliferation of antagonistic cultural meanings. When they become attached to particular facts that admit of scientific investigation, these meanings are a kind of pollution of the science communication environment that disables the faculties ordinary citizens use to reliably absorb collective knowledge from their everyday interactions. The quality of the science communication environment is thus just as critical for enlightened self-government as the quality of the natural environment is for the physical health and well-being of a society’s members. Understanding how this science communication environment works, fashioning procedures to prevent it from becoming contaminated with antagonistic meanings, and formulating effective interventions to detoxify it when protective strategies fail—those are the most critical functions science communication can perform in a democratic society.

In my remarks, I will elaborate on this conception of the science of science communication. I will likely illustrate my remarks with reference to findings on formation of HPV-vaccine risk perceptions, culturally biased assimilation of evidence of scientific consensus, the polarizing impact of science literacy and numeracy on climate change risk perceptions, and experimental forecasting of emerging-technology risk perceptions.  I’ll also describe the necessity of public provisioning to assure the quality of the science communication environment, which like the quality of the physical environment is a collective good that is unlikely to be secured by spontaneous private ordering.

If any of the other panelists would like to form a more vivid impression of my remarks, they might consider taking a look at:

1. Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010); and

2. Kahan, D.M., Wittlin, M., Peters, E., Slovic, P., Ouellette L.L., Braman, D., Mandel, G. The Tragedy of the Risk-Perception Commons: Culture Conflict, Rationality Conflict, and Climate Change. CCP Working Paper No. 89 (June 24, 2011).

Wednesday
May 16, 2012

Is Cultural Cognition Culture-Specific? 

Is cultural cognition culturally specific?  

I just read a great piece over on the PLoS Blog about the cultural specificity of many purportedly universal psychological biases / mechanisms.  As an example, the blog uses the famous Müller-Lyer Illusion.  You probably know of it.  In the image below, many people see the line on the right as longer than the one on the left.  

For almost a hundred years, social psychologists thought this a universal illusion.  It turns out, though, that this illusion is actually acute only in those who live in modern urban environments -- environments where straight lines, flat sides, and sharp corners are common.  When, in 1966, Marshall H. Segall conducted a study across cultural groups, he found tremendous variation (as illustrated in the graph below). 

For folks who are interested in the phenomenon of cultural cognition, this raises an interesting question: Is cultural cognition itself culture-bound?  The answer, I think, is either "probably yes" or "probably no" depending on what is meant by "culture-bound".  

The "probably yes" answer obtains if one were to try to use the same value measures across highly distinct cultural groups.  There is no reason to believe that San foragers or the Fang are divided over the questions that comprise the cultural value measures we use to distinguish US subjects from one another.  It wouldn't make sense (at least without more evidence) for us to presume our measures are universal.  

But that isn't really what the PLoS Blog post is about.  It asks whether the underlying phenomenon itself is generalizable.  One could broaden the way that such illusions are characterized in order to account for visual training and local adaptations: people see the depth cues that are relevant to their perceptual contexts.  The newly recast "local cues for depth perception" bias could still plausibly be universal. 

The phenomenon of cultural cognition, I would argue, is closer to the latter than the former.  It is one in which people develop factual beliefs that support or are consistent with their preferred social orderings (typically with the life-ways and values of their in-groups given high status).  If viewed this way, the answer is "probably no" because the theory derives from observations by anthropologists across many different cultural groups.  (I can't say "definitively no" or even "almost certainly no" since we haven't done extensive work across these non-Western cultural groups ourselves.)  More recently, a more general form of this has been studied as "motivated cognition" by social psychologists.  For cultural cognition as a general concept to be culture-bound, the phenomenon of motivated cognition itself would have to be culture-bound.  And, because the idea of motivated cognition is something that we use to describe differences in belief-formation across cultures, it would be very hard to construe it as culture-bound as well.  

But then again, it may be that my sample is too limited -- indeed, motivated cognition would suggest that I would be particularly motivated not to notice contrary evidence! Perhaps it just seems obvious to me that everyone sees the world as shorter or longer as befits their preferred social order when, in fact, there are some groups who do not.  

But one thing we can be fairly certain of: these groups would have to be very distinct from the main groups involved in various forms of culture wars in the United States.  As Dan has pointed out in numerous posts at this point, there is very strong evidence that whatever cultural groups might be immune to cultural cognition, they are not the cultural groups who are involved in popular political debates in this country.  Your cultural adversary may fall foul of cultural cognition, but the fact that you have a cultural adversary suggests that you are just as likely to do so yourself. 

Tuesday
May 15, 2012

Wild wild horses couldn't drag me away: four "principles" for science communication and policymaking

Was invited to give a presentation on "effective science communication" for the National Academy of Sciences/National Research Council committee charged with preparing a report on wild horse & burro population management.

I happily accepted, for two reasons.

First, it really heartens and thrills me that the NAS gets the importance of integrating science and policymaking, on the one hand, with the science of science communication on the other. Indeed, as the NAS's upcoming Sackler Colloquium on the Science of Science Communication attests, NAS is leading the way here. 

Second, it only took me about 5 minutes of conversation with Kara Laney, the NAS Program Officer who is organizing the NRC committee's investigation of wild horse population management, to persuade me that the science communication dimension of this issue is fascinating. The day I spent at the committee's meeting yesterday corroborated that judgment.

Not knowing anything about the specifics of wild-horse population management (aside from what everyone picks up just from personal experience & anecdote, etc), I confined myself to addressing research on the "science communication problem" -- the failure of ample and widely disseminated science to quiet public dispute over policy-relevant facts that admit of scientific investigation. Like debates over climate change, HPV vaccination, nuclear power, etc., the dispute over wild-horse management falls squarely into that category.

After summarizing some illustrative findings (e.g., on the biasing impact of cultural outlooks on perceptions of scientific consensus; click on image for slides), I offered "four principles":

First, science communication is a science.

Seems obvious--especially after someone walks you through 3 or 4 experiments -- but in fact, the assumption that sound science communicates itself is the origin of messes like the one over climate change. As I said, NAS is now committed to remedying the destructive consequences of this attitude, but one can't overemphasize how foolish it is to invest so much in policy-relevant science and then adopt a wholly ad hoc, anti-scientific stance toward the dissemination of it.

Second, "science communication" is not one thing; it's 5 (± 2).

Until recent times, those who thought systematically about science communication were interested either in helping scientists learn to speak in terms intelligible to curious members of the public or in training science journalists to understand and accurately decipher scientists' unintelligible pronouncements.

These are important things. But the idea that inarticulate scientists or bad journalists caused the climate change controversy, say, or that making scientists or journalists better communicators will solve that or other problems involving science and democratic decisionmaking is actually a remnant of the unscientific conception of science communication -- a vestige, really, of the idea that "facts speak for themselves," just so long as they are idiomatic, grammatical, etc.

As I explained in my talk, the disputes over climate change, the HPV vaccine, nuclear power, and gun control are not a consequence of a lack of clarity in science or a lack of science comprehension on the part of ordinary citizens.

The source of those controversies is a form of pollution in the science communication environment: antagonistic social meanings that get attached to facts and that interfere with the normally reliable capacity of ordinary people to figure out what's known (usually by identifying who knows what about what).  

Detoxifying the science communication environment and protecting it from becoming contaminated in the first place is thus another kind of "science communication," one that has very little to do with helping scientists learn to avoid professional jargon when they give interviews to journalists, who themselves have been taught how to satisfy the interest that curious citizens have to participate in the thrill and wonder of our collective intelligence.

Those two kinds of science communication, moreover, are different from the sort that an expert like a doctor or a financial planner has to engage in to help individuals make good decisions about their own lives. The emerging scientific insights on graphic presentation of data etc. also won't help fix problems like ones about climate change.

Still another form of science communication is the sort that is necessary to enable policymakers to make reliable and informed decisions under conditions of uncertainty. The NAS is taking the lead on this too -- and isn't laboring under the misimpression that what causes conflict over climate change is the "same thing" that has made judges accept fingerprints and other bogus forms of forensic proof.

Finally, there is stakeholder science communication -- the transmission of knowledge to ordinary citizens who are intimately affected by and who have (or are at least entitled to have) a say in collective decisionmaking. That's mainly what the decisionmaking process surrounding the wild-horse population is about. There are scientific insights there, too -- ones having very little to do with graphic presentation of data or with good writing skills or with the sort of pollution problem that is responsible for climate change.

Third, "don't ask what science communication can do for you; ask what you can do for science communication."

Having just told the committee that their "science communication problem" is one distinct from four others, I anticipated what I was sure would be their next question: "so what do we do?" 

Not surprisingly, that's what practical people assigned to communicate always ask when they are engaging scholars who use scientific methods to study science communication. They want some "practical" advice--directions, instructions, guidelines.

My answer is that they actually shouldn't be asking me or any other science-communication researcher for "how to" advice. And that they should be really really really suspicious of any social scientist who purports to give it to them; odds are that person has no idea what he or she is talking about.

Those who study science communication scientifically know something important and consequential, I'm convinced, about general dynamics of risk perception and science communication. But we know that only because we have investigated these matters in controlled laboratory environments-- ones that abstract from real-world details that defy experimental control and confound interpretation of observations.

Studies, in other words, are models. They enable insight that one couldn't reliably extract from the cacophony of real-world influences. Those insights, moreover, have very important real-world implications once extracted. But they do not themselves generate real-world communication materials.

The social scientists who don't admit this usually end up offering banalities, like "Know your audience." 

That sort of advice is based on real, and really important, psychological research. But it's pretty close to empty precisely because it's (completely) devoid of any knowledge of the particulars of the communication context at hand (like what characteristics genuinely define the "audience" that is to be known, and what there actually is to "know" about it).

The practical communicators -- the ones asking to be told what to do -- are the people who have that knowledge. So they are the ones who have to use judgment to translate the general insights into real-world communication materials.  

Experimentalists are not furnishing communicators with "shovel ready" construction plans. Rather they are supplying the communicators with reliable maps that tell them where they should dig and build through their own practical experimentation.

Once that process of experimental adaptation starts, moreover, the social scientist should then again do what she knows how to do: measure things.

She should be on hand to collect data and find out which sorts of real-world applications of knowledge extracted in the lab are actually working and which ones aren't. She can then share that new knowledge with more people who have practical knowledge about other settings that demand intelligent science communication -- and the process can be repeated.

And so forth and so on. Until what comes out is not a "how to" pamphlet but a genuine, evolving repository filled with vivid case studies, protocols, data collection and analysis tools and the like.

If you ask me for a facile check list of do's & don'ts, I won't give it to you.

Instead, I'll stick a baton of reliable information in your hand, so you run the next lap in the advancement of our knowledge of how to communicate science in a democracy. I'll even time you!

Fourth, science communication is a public good.

Clean air and water confer benefits independent of individuals' contributions to them. Indeed, individuals' personal contributions to clean air and water tend not to benefit them at all -- it's what others, en masse, are doing that determines whether the air and water are clean.

Same thing with the science communication environment. We all benefit when ordinary citizens form accurate judgments about what the best evidence is on issues like climate change. Accordingly, we all benefit when we live in an information environment free of toxic social meanings. But the judgments any ordinary person forms, and the behavior he or she engages in that amplifies or mutes toxic meanings, have essentially no impact on the quality of that environment overall.

As a result, he or she and every other individual like him or her won't have sufficient incentive to contribute. There has to be collective provisioning of such goods.

We need government policy for protection of the science communication environment every bit as much we need it to protect the physical environment.

There's an important role for key entities in civil society too -- like universities and foundations.

NAS is modeling the active, collective provisioning of this good.  Many others must now follow its lead!

Sunday
May 6, 2012

Some data on CRT & "Republican" & "Democratic brains" (plus CRT & religion, gender, education & cultural worldviews)

This is the latest in a series of posts (see here, here, here, here ...) on the relationship between ideology &/or cultural worldviews, on the one hand, and cognitive reasoning dispositions, on the other.

I've now got some new data that speak to this question -- & that say things inconsistent with the increasingly prominent claim that conservative ideology is associated with low-level information processing.

If you already know all about the issue, just skip ahead to "2. New data"; if you are new to the issue or want a brief refresher, read "1. Background" first.

1. Background

As discussed in a recent post, a series of studies have come out recently that present evidence--observational and (interestingly!) experimental--showing that the tendency to use heuristic or system 1 information processing ("fast" in Kahneman terms, as opposed to "slow" systematic or system 2) is associated with religiosity.

I expressed some agitation about the absence of reported data on the relationship of system 1/system 2 reasoning dispositions and ideology.

The source of my interest in such data is the increasing prevalence of what I'll call -- in recognition of Chris Mooney's role in synthesizing the underlying studies -- the Republican Brain Hypothesis (RBH). RBH posits a relationship between conservative political positions and use of low-effort, low-quality, biased, etc. reasoning styles. RBH proponents -- Mooney in particular -- conclude that this link makes Republicans dismissive of policy-relevant science and is thus responsible for the political polarization that surrounds climate change.

Although I very much respect Mooney's careful and fair-minded effort to assemble the evidence in support of RBH, I remain unpersuaded. First, RBH doesn't fit cultural cognition experimental results, which show that the tendency to discount valid scientific evidence when it has culturally non-congenial implications is prominent across the ideological spectrum (or cultural spectra).

Second, as far as I can tell, RBH studies have all featured questionable measures of low-level information processing. The only validated measures of system 1 vs. system 2 dispositions -- i.e., the only ones that have been shown to predict the various forms of cognitive bias identified in decision science -- are Shane Frederick's Cognitive Reflection Test (CRT) and Numeracy (CRT is a subcomponent of the latter). The RBH studies tend to feature highly suspect measures like "need for cognition," which are based on study subjects' own professed characterizations of their tendency to engage in critical thinking.

So why are researchers who are interested in testing RBH not using (or if they are using, not reporting data on) the relationship between CRT & political ideology?

A few months ago, I reported in a blog post some data suggesting that being Republican and conservative has a small positive correlation with CRT. In other words, being a conservative Republican predicts being slightly more disposed to use systematic or system 2 reasoning.

The relationship was too small to be of practical importance -- to be a plausible explanation for political polarization on issues like climate change -- in my view. But the point was that the data suggested the opposite of what one would expect if one credits RBH!

The relationship between CRT and the cultural worldview measures was similarly inconsequential -- very small, offsetting correlations with Hierarchy and Individualism, respectively.

2. New data

Okay, here are some new CRT (Cognitive Reflection Test) data that reinforce my doubt about RBH (the "Republican Brain Hypothesis").

The data come from an on-line survey carried out by the Cultural Cognition Project using a nationally representative sample (recruited by the opinion-research firm Polimetrix) of 900 U.S. adults.

The survey included the 3-item CRT test, various demographic variables, partisan self-identification (on a 7-point scale), self-reported liberal-conservative ideology (on a 5-point scale) and cultural worldview items.

Key findings include:

  • Higher levels of education and greater income both predict higher CRT, as does being white and being male. These are all results one would expect based on previous studies.
  • Also consistent with the newer interesting studies, religiosity predicts lower CRT. (I measured religiosity with a composite scale that combined responses to self-reported church attendance, self-reported personal importance of religion, and self-reported frequency of prayer; α = 0.87. A sketch of how a reliability coefficient like that is computed appears after this list.)  
  • However, liberal-conservative ideology has essentially zero impact on CRT, and being more Republican (on the 7-point partisan self-identification measure; but also in simple binary correlations) predicts higher CRT. Not what one would expect if one were betting on RBH!
  • Being more individualistic than communitarian predicts higher CRT; being more hierarchical than egalitarian predicts essentially nothing. Also not in line with RBH, since these cultural orientations are both modestly correlated with political conservatism.
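As promised above, here is a minimal sketch of how a reliability coefficient like the reported α = 0.87 is computed for a composite scale -- simulated responses and an assumed item structure, not the actual survey data:

```python
# Cronbach's alpha for a composite scale: alpha = k/(k-1) * (1 - sum of
# item variances / variance of the summed scale).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: n_respondents x n_items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=500)  # latent religiosity driving all three items
# church attendance, importance of religion, frequency of prayer (simulated)
items = np.column_stack([latent + rng.normal(scale=0.6, size=500)
                         for _ in range(3)])
print(f"alpha = {cronbach_alpha(items):.2f}")  # high, near the reported 0.87
```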

Now, those are the simple, univariate correlations between the individual characteristics and CRT (click on the thumbnail, right, for the correlation matrix).

But what is the practical significance of these relationships?

 

To illustrate that, I ran a series of ordered logistic regression analyses (if you'd like to inspect the outputs, click on the thumbnail to left). The results indicate the likelihood that someone with the indicated characteristic would get either 0, 1, 2, or all 3 answers correct on the CRT test.
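For concreteness, here is a minimal sketch of an ordered logistic regression of this kind -- simulated data and a single hypothetical predictor, not the CCP survey data or the actual model specifications:

```python
# Ordered logit of CRT score (0, 1, 2, or 3 answers correct) on one
# standardized predictor, with predicted probabilities for respondents
# one SD below and above the mean.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 900
religiosity = rng.normal(size=n)                    # standardized composite
latent = -0.3 * religiosity + rng.logistic(size=n)  # latent CRT propensity
crt = np.digitize(latent, bins=[0.5, 2.0, 3.5])     # 0-3 answers correct

res = OrderedModel(pd.Series(crt),
                   pd.DataFrame({"religiosity": religiosity}),
                   distr="logit").fit(method="bfgs", disp=False)

for x in (-1.0, 1.0):  # one SD below / above mean religiosity
    probs = np.asarray(res.predict(pd.DataFrame({"religiosity": [x]})))
    print(x, np.round(probs, 2))  # P(0), P(1), P(2), P(3) correct
```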

As illustrated in the Figures above, these analyses reveal that the impact of all of these predictors is concentrated on the likelihood that someone will get 0 as opposed to 1, 2, or 3 answers correct. That is, the major difference between people with the "high-CRT" characteristic and those with the "low-CRT" one is that the former are less likely to end up with a goose egg on the test.

Indeed, that's all that's going on for both religiosity and partisan self-identification; there's no significant (& certainly no meaningful!) difference in the likelihood that those who are high vs. low in religiosity, or who are Republican in self-identification vs. Democrat, will get 1, 2 or 3 answers correct--only whether they will get more than 0.

The likelihood of getting 1 or 2 correct, but not 3, is higher for men vs. women and for more educated vs. less educated individuals. But the differences -- all of them -- look pretty trivial to me. (Not that surprising; few people are disposed to engage in system 2 reasoning on a consistent basis.)

Note, too, that there's essentially no difference between "hierarchical individualists" and "egalitarian communitarians," the members of the cultural communities most divided on environmental issues including climate change. Also none when liberal-conservative ideology and party affiliation are combined.

These are models that look at the predictors of interest in relation to CRT but in isolation from one another. I think it's easy to generate a jumbled, meaningless model by indiscriminately "controlling" for covariates like race, religiosity, and even gender when trying to assess the impact of ideologies and cultural worldviews, or to "control" for ideology when assessing the impact of worldviews or vice versa; people come in packages of these attributes, so if we treat them as "independent variables" in a regression, we aren't modeling people in the real world (more on this topic in some future post).

But just to satisfy those who are curious, I've also included a "kitchen sink" multivariate model of that sort. What it shows is that religion, race, education, and income all predict CRT independently of one another and independently of ideology and cultural worldviews. In such a model, however, neither ideology nor cultural worldviews predict anything significant for CRT.

3. Bottom line

So to sum up -- when we use CRT as the measure of how well people process information, there's no support for RBH. In fact, the zero-order effect for political-party affiliation is in the wrong direction. But the important point is that the effects are just too small to be of consequence -- too tiny to be at the root of the large schisms between people with differing ideological and cultural worldviews over issues involving policy-relevant science.

What does explain those divisions, I believe, is motivated reasoning, a particular form of which is what we are looking at in studies of cultural cognition.  

The lack of a meaningful correlation between CRT, on the one hand, and cultural worldviews and political ideologies, on the other, is perfectly consistent with this explanation for risk-perception conflicts, because the evidence that supports the explanation seems to show that motivated reasoning is ample across all cultural and ideological groups.

Indeed, motivated reasoning, it has long been known (although recently forgotten, apparently), affects both system 1 (heuristic) and system 2 (systematic reasoning).  Accordingly, far from being a "check" on motivated reasoning, a disposition to use system 2 more readily should actually magnify the impact of this sort of distortion in thinking.

That's indeed exactly what we see: as people become more numerate -- and hence more adept at system 2 reasoning -- they become even more culturally divided.

To be sure, being disposed to use heuristic reasoning -- or simply unable to engage in more technical, systematic modes of thought -- will produce all sorts of really bad problems. But the problem of cultural polarization over policy-relevant science just isn't one of them.

In my opinion, the sooner we get that, the sooner we'll figure out a constructive solution to the real problems of science communication in a diverse, democratic society.

Saturday
May 5, 2012

Krugman acknowledges cultural cognition (at least in others!)

The point of the cool Justin Fox post that I noted yesterday now has been seconded by Paul Krugman, who says he already knew this -- that cultural cognition constrains public acceptance of scientific evidence -- based on the failure of his own columns to persuade people who disagree with him:

Justin Fox has an interesting post documenting something I more or less knew, but am glad to see confirmed: People aren’t very receptive to evidence if it doesn’t come from a member of their cultural community. This has been blindingly obvious these past few years.

Consider what the different sides in economic debate have been predicting these past six or seven years. If you got your views from, say, the Wall Street Journal editorial page, you knew – knew – that there was no housing bubble, that America in 2008 wasn’t in recession, that budget deficits would send interest rates sky-high, that the Fed’s expansion of its balance sheet would produce huge inflation, that austerity policies would lead to economic expansion.

That’s quite a record. And yet I’m well aware that many people – including people with real money at stake – consider the WSJ a reliable source and people like, well, me flaky and unbelievable. Much of this is politics, of course, but that’s intertwined with culture: the kind of people who turn to the WSJ, or right-wing investment sites can clearly see that I’m a latte-sipping liberal who probably favors gay rights and doesn’t worship the financially successful (I actually prefer good filter coffee, black, but that’s otherwise accurate), and just not part of their tribe.

I suppose that in my quest to improve policy and understanding I should be trying to fit in better – lose the beard, learn to play golf, start using “impact” as a verb. But I probably couldn’t pull it off even if I tried. And as a result there will always be a large group of people who will never be moved by any evidence I present.

Friday
May 4, 2012

Blind Voter-Candidate Matchmaking Site to Reduce Partisan Bias in Voter Perception?

I'm eager to hear your reactions to Elect Your Match!, a website that would blindly match voters to presidential candidates based on the similarity of their responses to a series of policy statements. The voters and candidates respond to the same series of statements on a scale of slightly/moderately/strongly disagree or agree. The statements are candidate generated: they each submit five statements on separate issues, and respond to their own and their opponents’ statements on the same scale as voters, indicating whether they slightly/moderately/strongly disagree or agree with each one. The statements would not mention candidate or party identity. In choosing these statements, candidates define the primary policy issues at stake in their campaign.
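The core matching step is simple to state precisely. Here is a minimal sketch -- the response encoding, the made-up data, and the distance measure are my assumptions, not a specification from the article:

```python
# Blind voter-candidate matching: voters and candidates answer the same
# statements on a 6-point scale from strongly disagree (-3) to strongly
# agree (+3), no neutral midpoint; the match is the candidate whose
# responses are closest to the voter's.
from typing import Dict, List

def best_match(voter: List[int], candidates: Dict[str, List[int]]) -> str:
    """Return the candidate minimizing L1 distance to the voter's responses."""
    return min(candidates,
               key=lambda c: sum(abs(v - a)
                                 for v, a in zip(voter, candidates[c])))

# Two candidates x five statements each = ten shared statements.
candidates = {
    "Candidate A": [3, -2, 1, 3, -1, 2, -3, 1, 2, -2],
    "Candidate B": [-3, 2, -1, -3, 1, -2, 3, -1, -2, 2],
}
voter = [2, -1, 1, 3, -2, 1, -3, 1, 1, -1]
print(best_match(voter, candidates))  # -> "Candidate A"
```

An L1 (city-block) distance is just one plausible choice; a site could equally weight certain issues more heavily or use a correlation-based similarity instead.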

There are sites making very good efforts along these lines (mentioned in the article), providing thorough information and showing visitors how candidates relate to their stance issue-by-issue, as well as generating a match based on any range of issues the visitor selects. Elect Your Match! would simplify these models to route visitors through one short standardized questionnaire that sets forth the primary election issues, defined by the candidates themselves, and recommends only one comprehensive best-matching candidate. Simplifying the site's primary interface to give only one comprehensive match based on a preset agenda might make it easier and more appealing for those less engaged in politics, who may not have a sense of what issues are most important to them or to the election. In order for the site to provide a single candidate match based on a preset agenda, it is important that the candidates themselves set the agenda defining the issues and provide their own responses, as opposed to a third party determining the issues and rating the candidates’ positions. 

In addition to informing voters, a site like this could reduce the biasing effect of partisan identity on voters' perceptions of candidates. Studies suggest, for example, that voters overestimate the extent to which the positions of candidates sharing their partisan identity match their own policy preferences. In other words, voters erroneously “see their favorite candidates’ stands as closer to their own and opposing candidates’ stands as more dissimilar than they actually were.” Larry M. Bartels, The Irrational Electorate, The Wilson Quarterly (Autumn 2008). Voters also more readily learn information about candidates that is congenial to their partisan identity, and discount facts that are not. Jennifer Jerit & Jason Barabas, Partisan Perceptual Bias and the Information Environment, Presented at the 2011 annual meeting of the Southern Political Science Association.

I’m curious about how a site like this advances the goals of the CCP: On one hand, it informs voters as to the candidate that really best matches their own outlook, and aims to minimize partisan identity-based bias in evaluating candidates. On the other hand, one seeking to advance the goals of CCP might desire a means for promoting more interpersonal deliberation that could perhaps do more to update viewpoints and build consensus around polarizing issues in the election (see also Bruce Ackerman & James Fishkin, Deliberation Day (2004)). As is suggested in the article, the site might have a deliberative component that allows interested visitors to browse more deeply than the primary questionnaire, to enter issue-specific segments of the site that would prompt them to interact with or respond to statements presenting arguments on either side of the issue. Perhaps these issue-specific segments could host an ongoing conversation posting visitors’ comments and responses to arguments on either side of the issue.

Friday
May 4, 2012

Cultural cognition & expert assessments of technological innovation

There's a great blog post by Justin Fox over at the Harvard Business Review's HBR Blog.

Fox argues that cultural cognition dynamics are likely to influence not only public perceptions of risk but also market-related assessments and decisionmaking within groups one might expect to be more focused on money and data than on meaning.

As illustration, he offers an amusing (for the reader) account of the reception afforded a recent column of his on expert assessments of technological innovation in the internet era.

I wrote a post here at hbr.org on whether the Internet era has been a time of world-changing innovation or a relative disappointment. It was inspired by comments from author Neal Stephenson, who espoused the latter view in a Q&A at MIT. His words reminded me of similar arguments by economist Tyler Cowen (if I had enough brain cells to remember that Internet megainvestor Peter Thiel had been saying similar things, I would have included him, too). So I wrote a piece juxtaposing the Stephenson/Cowen view with the work of MIT's Erik Brynjolfsson, who has been amassing evidence that a digitization-fueled economic revolution is in fact beginning to happen.

If I had to place a bet in this intellectual race, it would be on Brynjolfsson. I've seen the Internet utterly transform my industry (the media), and I imagine there's lots more transforming to come. But I don't have any special knowledge on the topic, and I do think the burden of proof lies with those who argue that economic metamorphosis is upon us. So I wrote the piece in a tone that I thought was neutral, laced with a few sprinklings of show-me skepticism.

When the comments began to roll in on hbr.org, though, a good number of them took me to task for being a brain-dead, technology-hating Luddite. And why not? There's a long history of journalists at legacy media organizations writing boneheaded things about the Internets being an abomination and/or flash in the pan (one recent example being this screed by Harper's publisher John McArthur). Something about my word choices and my job title led some readers to lump me in with the forces of regression, and react accordingly.

When I saw that Wired.com had republished my post, I cringed. Surely the technoutopians there would tear the piece to nanoshreds. But they didn't. Most of the Wired.com commenters instead jumped straight into an outrage-free discussion of innovation past and present.

That's probably because, if there is one person in the world whom Wired.com readers consider a "knowledgeable member of their cultural community," it is Neal Stephenson. This is the man who described virtual reality before it was even virtual, after all. I'm guessing that Wired.com readers were conditioned by the sight of Neal Stephenson's name at the beginning of my post to consider his arguments with an open mind. Here at hbr.org, where we don't require readers to have read the entire Baroque Cycle before they are allowed to comment, Stephenson was just some guy saying things they disagreed with.

Fox's assessment of the tendency of people to credit arguments of experts with whom they have a cultural affinity is consistent with our HPV study. But what's really cool is that the reaction of the Wired.com readers shows how a group that might be culturally predisposed to reject a particular message will actually give it open-minded consideration when they see that it originates (or at least has received respectful and serious attention) from someone with whom they identify.

Anyway, I'm psyched to learn that Fox sees our methods and framework as relevant to the market-related phenomena he writes on -- not only because it's cool to think that cultural cognition can shed light on those things but also because I really loved his Myth of the Rational Market. Was tied (with The Clockwork Universe) for best book I read all of last yr!

Saturday
Apr 28, 2012

A "frame" likely to generate consensus that climate change is not happening (and/or that geoengineering is safe)

Interesting piece; my guess is that this idea could actually end polarization over climate change -- by furnishing egalitarians and hierarchs alike strong emotional motivation to deny there's any danger after all! 

Also, although the author maintains that engineering humans is "safer" than geoengineering, my guess is that people would see geoengineering itself as less risky when they consider it in relation to "human engineering" than when they consider it on its own  -- precisely b/c human engineering is pretty much the creepiest thing that anyone can imagine.

Which isn't to say the author's argument is wrong on the merits!

 

Friday
Apr 27, 2012

More religion & CRT--where's ideology & CRT?!

Science this week published an article that finds low CRT predicts religiosity & backs this finding up w/ experimental data.

It's a really excellent study. The experiments were ingenious. It should be pointed out, though, that this finding corroborates another excellent one, Shenhav, A., Rand, D.G. & Greene, J.D. Divine intuition: Cognitive style influences belief in God. Journal of Experimental Psychology (2011), advance online doi:10.1037/a0025391.

I'm waiting, patiently, for someone to publish some data on correlation between CRT & liberal-conservative ideology. As I've noted before, data that CCP has collected suggests that there is virtually none -- or that there are weak offsetting correlations between different cultural dimensions of conservatism (hierarchy & individualism).

The reason I'm waiting is that such data would contribute a lot to the increasing interest in the relationship between ideology & quality/style of cognitive processing (the Republican Brain Hypothesis or "RBH," let's call it). Shane Frederick's CRT scale & Numeracy (which incorporates CRT) are the only validated indicators of the disposition to use systematic or System 2 reasoning as opposed to heuristic or system 1. So it would, of course, be super useful to see what the CRT verdict is on whether conservatives & liberals differ in processing.

Being patient while waiting is becoming more difficult. I've got to believe that such evidence is already in hand; given the interest in the RB hypothesis, surely someone -- likely multiple people -- has thought to test it w/ the CRT measure. It would be sad to discover that the reason the data haven't been reported is that they don't fit the hypothesis -- that is, don't show that liberals are more "systematic" or System-2 disposed in their thinking.

Actually, I suppose I have such data in hand myself -- but at least I've blogged on them!

Oh-- if I'm wrong to think that this is a matter on which no one has yet presented data, please tell me and I'll happily acknowledge my error & share the relevant references w/ other curious people. 

Saturday
Apr212012

Deliberations & identity formation

CCP member John Gastil, along w/ co-authors, has a new article out presenting evidence that highly participatory forms of democratic deliberation promote a distinctive shared identity that transcends more particular and potentially divisive ones, such as those founded on cultural affiliations.

The analysis was largely qualitative: a case study based on impressionistic analyses of transcripts from citizen deliberations associated with the Australian Citizens' Parliament. I know JG has more data on the Australian Citizens' Parliament in hand, including some that admit of more systematic analysis. That's a good way to do research, since convergence between results from more interpretive forms of empirical analysis and more quantitative ones -- if they do indeed converge! -- makes the conclusions of both more worthy of being credited.

I know from experience that collective deliberations on baseball are not sufficient to enable Gastil to transcend his partisan cultural identity as a Tigers fan.

Felicetti, A., Gastil, J., Hartz-Karp, J. & Carson, L. Collective Identity and Voice at the Australian Citizens' Parliament. Journal of Public Deliberation 8, article 5 (2012):

This paper examines the role of collective identity and collective voice in political life. We argue that persons have an underlying predisposition to use collective dimensions, such as common identities and a public voice, in thinking and expressing themselves politically. This collective orientation, however, can be either fostered or weakened by citizens’ political experiences. Although the collective level is an important dimension in contemporary politics, conventional democratic practices do not foster it. Deliberative democracy is suggested as an environment that might allow more ground for citizens to express themselves not only in individual but also in collective terms. We examine this theoretical perspective through a case study of the Australian Citizens’ Parliament, in which transcripts are analyzed to determine the extent to which collective identities and common voice surfaced in actual discourse. We analyze the dynamics involved in the advent of collective dimensions in the deliberative process and highlight the factors—deliberation, nature of the discussion, and exceptional opportunity—that potentially facilitated the rise of group identities and common voice. In spite of the strong individualistic character of the Australian cultural identity, we nonetheless found evidence of both collective identity and voice at the Citizens’ Parliament, expressed in terms of national, state, and community levels. In the conclusion, we discuss the implications of those findings for future research and practice of public deliberation.


Thursday
Apr192012

Ethical guidelines for science communication informed by cultural cognition research

People often express concern to me about the normative implications of research that identifies how cultural cognition influences perception of risk and related facts and how those influences can be anticipated in structuring science communication.

I am glad they are concerned, because I am, too. If I thought that people who consume our research did not reflect on such concerns, I'd be even more worried about what I do. Knowing that others see normative issues here also means that I can share with them my own responses & see if they think I've got things right &/or can do better.

Some "Guidelines" follow. But they are not really "guidelines" in the sense of a codified set of rules or standards (I'm skeptical, in fact, that anything morally complicated can be handled with such things). Rather, they are more like prototypes that when considered together reflect what for me seems the right moral orientation to our work.  Would be happy to receive & post additional "guidelines" of this nature (along w/ any commentary their authors wish to append) & also grateful to receive feedback from anyone who takes issue with any of these or with the attitude/orientation they are meant to convey.

1. No lying. No need for elaboration here, I trust.

2. No manipulation. Likely also self-explanatory, but an example might be useful. Consider how Merck tried to shape public opinion toward Gardasil, its HPV vaccine: by using secret campaign contributions to "persuade" a southern, religious, conservative politician -- Texas Governor Rick Perry -- to issue an executive order mandating vaccination of middle school girls.

It was fine for Merck to try to ensure that parents would learn about the benefits of the vaccine. It wasn't even wrong for it to enlist communicators whose cultural identities would make them credible sources of sound information.

But it should have been open about the fact that it was trying to engage people in this way.

Obviously, the whole immoral plan blew up in Merck's face--actually generating distrust of Gardasil among a diverse range of cultural groups. Nice work, gun-for-hire, private-industry counterparts of those who study the science of science communication in order to promote the common good!

But the strategy would have been wrong even if Merck had gotten away with it, because it was managing the information environment in a way that the message recipients would themselves have resented. It was using people's reasoning, not enabling it.

3. Use communication strategies and procedures only to promote engagement with information--not to induce conclusions. Some people say that cultural-cognition-informed communication strategies are a form of "marketing." Fine, I say--so long as what's being marketed is not a preferred position on an issue of science & policy but rather a decisional state or climate in which people who want to make decisions based on the best available scientific information are most likely to take note of it & give it open-minded consideration.

The HPV-vaccine disaster again supplies an example. Parents of all cultural worldviews want to have the best available information on how to promote the health of their children. It would be perfectly fine, in my view, for a communicator to use cultural cognition research to identify how to promote open-minded engagement with information on the HPV vaccine.  

So if public health officials had self-consciously decided to rely on a culturally diverse array of honestly motivated science communicators in order to forestall creation of any perception that positions on the vaccine were aligned asymmetrically with cultural outlooks--that would have been okay.

It also would have been okay to resist Merck's stupid, market-driven decision to seek fast-track approval of a girls-only vaccine and to promote its inclusion on the schedule of mandatory school vaccinations--a marketing strategy that made cultural polarization highly likely. Parents who love their children wouldn't want to be put into a communication environment in which their honest assessment of the health needs of their daughters or sons would be distorted by culturally antagonistic meanings unrelated to health.

4. Use strategies and procedures to promote engagement only when you have good reason to believe that engagement fits the aims and interests of information recipients. Parents trying to decide what is in the best health interests of their children want to engage the information from the mindset that best promotes an accurate assessment of the evidence. But sometimes people want to engage information in a way that reliably connects them to stances that fit their cultural style. Leave them alone; so long as they aren't hurting anyone else, they are entitled to manage their personal information environment in a way that promotes contact with their own conception of the good life.

5. Don't help anyone who has ends contrary to these guidelines. Like, say, a pharmaceutical company that in its drive to make a buck is willing to manipulate people by covertly inducing individuals they trust to vouch for the effectiveness and safety of some treatment.

6.  Do help anyone -- regardless of their cultural worldview -- who is genuinely seeking to promote reflective engagement with information when such engagement fits the interests and aims of recipients. Like, say, a pharmaceutical company that wants to make a buck by openly and without manipulation satisfying the interest that people have in being able to consider scientifically valid information about the effectiveness and risks of a vaccine. 

Tuesday
Apr172012

MPSA climate change panel: report & slides

On Friday I was on a Midwest Political Science Association panel on public opinion & climate change. I presented Tragedy of the Risk Perceptions Commons (slides here). 

Michael Tesler presented interesting data that, he argued, show that elite rhetoric and not motivated cognition accounts for political divisions on climate change. I have a hard time conjuring the psychological model that would see the two operating independently of each other; to me they are not discrete mechanisms but steps in a single process (elite cues help create & transmit the meanings that then motivate cognition in ordinary individuals). I also wasn't sure exactly how the data supported the inference. But I'm eager to see the write-up, at which point I'll either get it or explain why I don't think he's right!

Alexandra Bass presented data on media content to show that values influence climate change perceptions. The presentation was great. But I have to say I don't really get media-content studies in general; they seem to draw inferences whose validity depends on the ratio of the frequency of content to the frequency of events in the world--something for which the analyses never present any data. I didn't get a chance to read Bass's paper, though, so I will & see if that helps.
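To make the worry concrete, here's a toy illustration (Python, all numbers invented--not from Bass's paper or any actual study) of why raw story counts can't carry the inference w/o the base rate of the underlying events:

# Hypothetical counts -- purely illustrative.
stories = {"frame_A": 120, "frame_B": 380}   # media stories using each frame
events  = {"frame_A": 200, "frame_B": 2000}  # real-world events fitting each frame

for frame in stories:
    rate = stories[frame] / events[frame]
    print(frame, round(rate, 2))
# frame_A: 0.6 vs. frame_B: 0.19 -- frame_A gets ~3x the coverage per event
# even though it accounts for far fewer stories in absolute terms.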

Mathew Nowlin, a member of Hank Jenkins-Smith's amazing risk-perception group at the Center for Applied Social Research at the University of Oklahoma, presented a cool paper on education, climate change knowledge, and political polarization.

Finally, Rebecca Bromley-Trujillo backed data out of the American National Election Study to support the hypothesis that "core political values"--"such as equality"--"are an important predictor of climate change attitudes, beyond other standard determinants of political attitudes, like partisanship or ideology." I found the claim convincing, but I was admittedly predisposed to believe it.

Monday
Apr162012

Where is "what does Trayvon Martin case mean, part 3"?

It's coming soon. But not before I get done learning from my class what they think. I also learned a lot from Randy Kennedy's lecture at Leslie College last week. I hope he writes up his lecture so that others can think about his reflections as well (I'm sure I'll say more about Kennedy in "part 3").

Saturday
Apr142012

Cultural cognition--plus lots of other relevant things-- & nuclear energy: experts *get it*

Came across a great blog on public perceptions of nuclear risk at the Neutron Economy & then found a thoughtful reaction to it at Areva North America: Next Energy Blog.

In addition to being well-crafted and informative, the posts were immensely heartening.

Written by and for people who do work relating to nuclear energy, both displayed keen awareness of the science of public risk perceptions and science communication. (Cultural cognition was featured, but was--very appropriately--not the only dynamic that was addressed.)

What's more, rather than the frustrated hand-wringing and finger-pointing that experts (and many others) often (understandably but not helpfully) display when confronted with public controversy over risk, both evinced an uncomplaining, matter-of-fact dedication to making sense of how the public makes sense of the world.

From Neutron Economy:

To summarize - providing education and facts are good, useful even - but on their own insufficient without presenting those facts in a context which engages with the deeply-held values of the audience. To produce actual engagement - and even inducement to support - requires producing a context of facts compatible with the values of those one is trying to reach. In other words, for the case of nuclear, it means going beyond education and comparative evaluation of risk (again, to emphasize, both of which are valid in and of themselves) and placing these within the framework of how this speaks to the values of the audience....

[I]t is the job of the nuclear professionals (as members of the "technical community") to do our best to provide an accurate technical framework for these evaluations of risk by the public, such that they can make the most sound decisions on risk. Meanwhile it is the job of nuclear communicators and advocates to speak to values, so as to produce more fair evaluations of both the benefits and risks of nuclear, particularly in the context of available energy choices.

From Areva North America: Next Energy Blog:

So, “pure” facts don’t tend to change our minds very often. And surprisingly, presenting facts alone when encouraging a new perspective can often result in the opposite effect on people who disagree....

Which naturally leads to our next question, “If cultural influence is so strong on perceiving facts, is trying to educate people of the beneficial facts about nuclear energy hopeless?”

We agree with Steve’s answer, “Not at all.”

But the key is to frame our factual and technically accurate answers within the cultural framework understanding of those we are trying to engage.

Reading these words made me believe that it is not at all unrealistic to anticipate that the practice of science will in the not too distant future be happily and productively integrated with the science of science communication.