
Friday
Jun 21, 2013

How religiosity and science literacy interact: Evolution & science literacy part 2

This is the second of two posts on science literacy and evolution.

And religion.

And liberal democratic society as the naturally congenial but sometimes precariously raucous—or maybe better, simultaneously congenial and precarious because naturally raucous—home for science.

And how the common misunderstanding of what public “disbelief” in “evolution” truly signifies can actually interfere with popular dissemination of scientific knowledge.  Plus compromise norms of respect for cultural pluralism that are essential to the practice of liberal democracy.

See? Get it?

Okay, well, in the last post I described the vast body of long-established but persistently--weirdly--ignored work that social scientists have amassed on the relationship between public “disbelief” in evolution and public understanding of evolution and other basic elements of science.

That work shows that there  isn't any relationship. What people say they “believe” about evolution is a measure of who they are, culturally.  It’s not a measure of what they know about what’s known to science.

Indeed, many people who say they “believe” in evolution don’t have the foggiest idea how the modern synthesis hangs together. Those who say they “disbelieve” are not any less likely to understand evolutionary theory--but they aren't any more likely to either.

That so few members of the public have a meaningful understanding of the workings of genetic variance, random mutation, and natural selection (the core elements of the modern synthesis) is a shame, and definitely a matter of concern for science education.

But it’s a problem about what people “know” and not what they say they “believe.” What people say they "believe" and what they "know" about evolution are vastly different things. That's what the ample scientific evidence on public understandings of science shows.

In this post I want to add a modest increment of additional evidence corroborating this important point.

The evidence has to do specifically with the relationship between religion, science literacy, and belief in evolution.

The evidence is from a survey of 2,000 US adults recruited and stratified in a manner designed to assure national representativeness. 

The survey instrument included the NSF science indicators.

It also contained various measures of religiosity, including regularity of church attendance; regularity of prayer; and perceived “importance of God” in one’s life. These cohered in a manner that enabled them to be formed into a reliable “religiosity” scale.
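For readers who want the mechanics: "cohered in a manner that enabled them to be formed into a reliable scale" is the sort of thing one checks with Cronbach's alpha before averaging items together. Here's a minimal sketch in Python on simulated stand-in data--the item names are hypothetical, and the actual analysis of course used the real survey responses:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Toy data standing in for the survey responses (hypothetical item names).
rng = np.random.default_rng(0)
latent = rng.normal(size=2000)  # unobserved religiosity disposition
noise = lambda: rng.normal(scale=0.8, size=2000)
df = pd.DataFrame({
    "church_attendance": latent + noise(),
    "prayer_freq": latent + noise(),
    "importance_of_god": latent + noise(),
})
print(round(cronbach_alpha(df), 2))  # ~0.8 here, i.e., the items "cohere"

# Form the scale as the mean of the z-scored items.
religiosity = df.apply(lambda c: (c - c.mean()) / c.std()).mean(axis=1)
```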

And the survey contained an item that Gallup and other pollsters routinely use to measure the public’s “beliefs” about evolution.

What do these data show?

Well, I’ll state in summary form what I regard as the findings of interest, and then supply the supporting details:

1. Neither the “Evolution” nor the “Big Bang” item in the NSF’s "Science Indicators" battery can plausibly be viewed as reliably measuring “scientific literacy” in subjects who are even modestly religious.

2. When subjects who are highly science literate but highly religious answer “False” to the NSF Indicator’s Evolution item, their response furnishes no reason to infer that they lack knowledge of the basic elements of the best scientific understanding of evolution.

3. For respondents who are below average in religiosity, a high score in “science literacy” predicts a higher probability of “believing” in “Naturalistic Evolution”—and so does a low score!

4. For those who are above average in religiosity, a high score in science literacy doesn’t predict a higher probability of believing in Naturalistic Evolution. But it does predict a higher probability of believing in Theistic Evolution.

5.  A higher score in science literacy predicts a lower probability of believing in Young Earth Creationism—whether respondents are below or above average in religiosity.

Okay. Here are the specifics.

1. In general, religiosity (measured, as I said, by aggregating items on church attendance, frequency of prayer, and perceived personal importance of God) is correlated negatively with science literacy.

But the effect is modest. The large overlap in the density distribution plots to the left makes it clear that the portions of the population “above” and “below average” in religiosity (“AARs” and “BARs,” let’s call them) both comprise individuals with a wide range of scores on the NSF science literacy battery.

Or at least they do when one leaves Evolution and Big Bang out of the tally, as the NSF itself decided to do in 2010, and as I have here. To make the science literacy scale more reliable and discerning, I’ve added items from the Indicators' “science process” battery, which tests knowledge relating to probability and the validity of experimental methods.

Consider, though, how AARs and BARs scoring in the top 50% of the science literacy test so measured respond to Evolution and Big Bang:

The difference in the percentages of the two moderately “science literate” groups who answer “true” to these questions is stunningly large.

Now one can use even more intricate statistical tests—ones involving, say, Cronbach’s alpha, factor analysis, and structural equation modeling—to convincingly show that Evolution and Big Bang are not measuring the same latent proficiency in acquiring scientific knowledge as are the remaining NSF Indicator items. 

But nothing more intricate than this discrepancy in the performance of modestly science literate AARs and BARs is necessary to see that these two items aren’t a valid measure of science literacy in the former.
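(For the curious: the simplest cousin of those techniques is an item-rest correlation--correlate each item with the sum of the remaining items. A sketch, assuming a respondent-by-item matrix of 0/1 scores named `items`; with the NSF battery, one would expect hypothetical "evolution" and "bigbang" columns to sort to the bottom:)

```python
import pandas as pd

def item_rest_correlations(items: pd.DataFrame) -> pd.Series:
    """Correlation of each 0/1-scored item with the sum of all *other* items.
    Items tapping a different latent trait show markedly lower values."""
    total = items.sum(axis=1)
    return pd.Series(
        {col: items[col].corr(total - items[col]) for col in items.columns}
    ).sort_values()
```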

2. The NSF Indicators test of science literacy is far from perfect, but I think it’s reasonable to infer that people who score above average have acquired more understanding of basic science than those who score below average.

I doubt that a majority of BARs who score in the top 50% of the NSF Indicator battery (sans Evolution and Big Bang and avec the process items) know the basic elements of the theory of evolution, including the role that genetic variance, random mutation, and natural selection play in it. 

But I think more of them are likely to understand those things than BARs who score in the bottom 50%.

By the same token, there’s reason to believe that AARs who score in the top 50% on the NSF science literacy test are more likely to have acquired an elementary knowledge of evolutionary theory than those—BARs or AARs—who score in the bottom 50%.   

Nothing in how the above-average science literacy AARs answer the Evolution item furnishes any reason to doubt this. How they respond to that item, I’ve just pointed out, is not, for them at least, a measure of what they know about science.  And in any case, as researchers have established on multiple occasions, there’s zero correlation between whether one says one “believes in” evolution and whether one can give a passable account of the modern synthesis.

3. Now let’s consider what we can learn from the responses to the “popular opinion poll” item on beliefs in evolution.

That item asks respondents to indicate “which one of the following statements comes closest to your views on the origin and development of human beings—” 

  • Humans developed over millions of years from less advanced forms of life, but God guided this process
  • Human beings have developed over millions of years from less advanced forms of life, but God had no part in this process; or
  • God created human beings pretty much in their present form at one time within the last 10,000 years or so." 

Let’s call these responses “Theistic Evolution,” “Naturalistic Evolution,” and "Young Earth Creationism," respectively.

Theistic Evolution was the most popular response but was supported by only a plurality (38%). Young Earth Creationism was second and Naturalistic (or "Godless") Evolution third, but the proportions who selected each differed by only a slight amount (32% vs. 29%, respectively).

These numbers, by the way, differ a bit from what Gallup tends to report. The percent selecting Theistic Evolution is consistent with that. But Godless Evolution runs closer to Young Earth Creationism than it does in Gallup polls.

What to make of this? Well, I’ll write a blog post soon about the validity of on-line public opinion samples. But suffice it to say that, based on the relative predictive accuracy in the 2010 and 2012 elections of surveys conducted by YouGov, the premier on-line survey firm that recruited the sample for this study, and surveys conducted by Gallup, YouGov is probably getting closer to the “true” general population values.

What we are interested in, though, is how science literacy and religiosity influence selection of these responses.

Consider first the relationship between these responses & science literacy.

Whoa ... the Jesus fish symbol popped out of my regression!

Maybe not shocking, but note that support for Naturalistic Evolution peaks at only about 55% even among the most science literate. The relationship between support for that position and science literacy, moreover, is “U”-shaped—higher at both the low and high ends. This relationship was confirmed by a multinomial logistic regression with appropriate quadratic terms; the fitted values from that regression are what I’m graphing (these plots are very true to what one would see in the “raw” data).
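For concreteness, here's a minimal sketch (Python/statsmodels) of that kind of model on simulated stand-in data; the variable names are hypothetical, and the real analysis used the actual survey responses:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey (hypothetical names):
#   belief: 0 = Young Earth Creationism, 1 = Theistic, 2 = Naturalistic
#   scilit: science literacy (z-scored); relig: religiosity (z-scored)
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({"scilit": rng.normal(size=n), "relig": rng.normal(size=n)})
df["belief"] = rng.integers(0, 3, size=n)  # random here; the real data is patterned

# Multinomial logit with quadratic science-literacy terms, interacted with
# religiosity (so the science-literacy effect can differ for AARs and BARs).
m = smf.mnlogit("belief ~ (scilit + I(scilit**2)) * relig", data=df).fit(disp=0)

# Fitted probabilities across the science-literacy range at below-average
# religiosity -- the sort of values one would graph.
grid = pd.DataFrame({"scilit": np.linspace(-2, 2, 50), "relig": -1.0})
probs = m.predict(grid)  # one column of probabilities per belief category
```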

Now add religiosity. The following plots contrast the probabilities that AARs and BARs will select one or another of the response to the popular pollster item. They are derived from the same multinomial logistic regression, which confirmed that the impact of science literacy on the probability of selecting one response or another varies depending on level of religiosity.

It’s clear that the “U”-shaped relationship between science literacy and believing in Naturalistic Evolution is being driven by BARs.

In other words, BARs are more likely to believe in Naturalistic Evolution as they become either extremely science literate or extremely science illiterate!

Is this a surprise? Well, I wasn’t expecting this. My inspection of the data was pretty much exploratory, without strong hypotheses.

But I was reminded of a finding in what I regard as one of the very best studies of how high-quality instruction in evolutionary theory generates improvements in knowledge but not changes in belief.

In the study, Anton Lawson and collaborators found that high school students, particularly those scoring highest in critical reasoning skills, readily acquired knowledge of various aspects of evolution through instruction, but that acquisition of such knowledge did not produce a corresponding shift in belief among the students who began as nonbelievers.  

Nevertheless, the subgroup of such students who did back away from two particular beliefs hostile to naturalistic evolution (that the “living world is controlled by a force greater than humans” and that “all events in nature occur as part of a predetermined master plan”) consisted of the students who scored the lowest in critical reasoning skills. 

Speculating on why, Lawson et al. noted that “experience tells us that people change their beliefs for other than rational reasons. For example, hearing the opinion of an acknowledged authority figure could cause one to change a belief. Perhaps intuitive [students] are more likely than reflective students to change their beliefs for this reason.”

Lawson et al. don’t themselves explicitly suggest this, but a consistent conjecture might be that students who are higher in critical reasoning skills might be more inclined to push back on identity-threatening “beliefs” (even while taking on more knowledge) than those who are less reflective. That would be consistent with findings that motivated reasoning can be amplified by science literacy and cognitive reflection.

Someone should do a study to test that hypothesis!

4.  For AARs, in contrast, an increase in science literacy does not predict belief in Naturalistic Evolution. On the contrary, it seems to predict a slight decrease, although the effect is pretty much zero for all but those AARs whose scores are quite low.

So much for the idea that “disbelief” in evolution is a sign of low science literacy.  It isn’t.  “Disbelief” is just as consistent with being high in science literacy as low.

The only thing “disbelief” in Naturalistic Evolution reliably signifies is that one is religious.  This is consistent with the hypothesis that evolution “beliefs” are actually measures of cultural identity (as reflected in religiosity).

This conclusion is strongly corroborated by the relationship between science literacy and the increased probability of believing in Theistic Evolution among AARs. Offered the opportunity—as they aren’t in the NSF Science Indicators science knowledge battery—to select a position simultaneously consistent with “belief” in evolution and religious identity, the most science literate AARs grab hold of it!

5. Indeed, those same subjects—AARs who score high in science literacy—are less likely to espouse Young Earth Creationism than their less science literate counterparts.

What does this tell us? I suppose other interpretations are possible, but I’d say that AARs high in science literacy are in fact eager to affirm their “belief” in evolution, so long as they can be presented with a means of doing so that doesn’t denigrate their cultural identities.

Not surprisingly, BARs are also less likely to express support for Young Earth Creationism as they become more science literate.

Support for Young Earth Creationism is associated disproportionately with being simultaneously above average in religiosity and below average in science literacy.

* * * * *

Some concluding thoughts:

1. “Disbelief” in evolution doesn’t reflect a deficiency in science literacy or shortcomings in science education in our society.  

I think it is very reasonable to think members of our society are not as science literate as they should be, and also that our education system must do better in imparting scientific knowledge to citizens generally. 

But it’s wrong to think that the level of “disbelief” in evolution is evidence of those things.  It’s wrong to think that because that view is contrary to empirical evidence.

The evidence that many researchers have compiled and that I’ve added to in a very modest way here shows overwhelmingly that an individual's unwillingness to profess “belief” in evolution doesn't indicate science illiteracy or her unfamiliarity with the rudiments of evolutionary theory.

It measures her expression of her cultural identity. What saying “I don’t believe in evolution” means, culturally speaking, is that one belongs to a community whose members subscribe to a particular set of understandings of the best way to live.

2.  Those dedicated to the critical task of promoting scientific literacy, including public knowledge of the best scientific understanding of evolution, should not be focusing on what percentage of the population says they “believe” in evolution.

They shouldn’t be focusing on that because that information tells us nothing about how much scientific knowledge or even knowledge of evolution the public has.  Those who want to test how well society is doing in imparting knowledge of evolution should be measuring instead what fraction of the population can give a cogent account of genetic variance, random mutation, and natural selection. It’s pitifully small, among both those who say they “believe” in evolution and those who say they don’t.

But even more important, those who want to promote public acquisition of scientific knowledge should avoid making professions of “belief” in evolution their aim because doing so is much more likely to deter than promote acquisition of basic scientific knowledge.

People who have a religious identity—who include plenty of science literate people and people capable of becoming even more so—see profession of “belief” as denigrating their cultural identities.  Naturally, then, they will see the demand that they not only learn but publicly affirm their "belief” in evolution as an attack on their community by members of another who harbor a shared understanding of the best life hostile to theirs.

They’ll resent that.  And with good reason. It's appropriate--absolutely essential, even--that a liberal democracy oblige those who furnish the public good of education to impart to people of all cultural identities the best available understanding of how the universe works, including the career of life on earth.  But citizens who make it their business to force others who have cultural views different from theirs to submit to purely symbolic rituals of identity-abnegation are engaged in a noxious, fundamentally illiberal form of conduct.

Such behavior, moreover, predictably breeds motivated resistance to acquiring knowledge of what science knows. Fear of the loss of status associated with "assenting" to facts symbolically linked to the identity of a rival cultural group is exactly what blocks citizens from converging on the best scientific evidence on issues like climate change, nuclear power, the HPV vaccine, and other culturally contested policies.

In their study of how effectively imparting knowledge of evolutionary theory does not produce “belief,” Anton Lawson & William Worsnop conclude:

Of course, every teacher who has addressed the issue of special creation and evolution in the classroom already knows that highly religious students are not likely to change their belief in special creation as a consequence of relative brief lessons on evolution. Our suggestion is that it is best not to try to do so, not directly at least. Rather, our experience and results suggest to us that a more prudent plan would be to utilize instruction time, much as we did, to explore the alternatives, their predicted consequences, and the evidence in a hypothetico-deductive way in an effort to provoke argumentation and the use of reflective thought. Thus, the primary aims of the lesson should not be to convince students of one belief or another, but, instead, to help students (a) gain a better understanding of how scientists compare alternative hypotheses, their predicated consequences, and the evidence to arrive at belief and (b) acquire skill in the use of this important reasoning pattern--a pattern that appears to be necessary for independent learning and critical thought.

This is a sensible prescription for those who (very appropriately!) want to promote the widest dissemination of basic science knowledge in the general public.

But it also happens to be a prescription consistent with the basic liberal injunction to respect the entitlement of individual citizens to freely use their own reason both to understand what is known by science and to decide for themselves what constitutes a virtuous life.

The convergence of the two is not any sort of accident.  It reflects a deep truth about the reciprocal affinity of science and political liberalism.

Wednesday
Jun 19, 2013

What does "disbelief" in evolution *mean*? What does "belief" in it *measure*? Evolution & science literacy part 1

The idea that popular “disbelief in evolution” indicates a deficiency in “science literacy” is one of the most oft-repeated but least defensible propositions in popular commentary on the status of science in U.S. society.

It’s true only if one makes the analytically vacuous move of defining science literacy to mean “belief in evolution.”

It’s false, however, if one is interested in understanding, as an empirical matter, either what members of the public know about what is known to science or what the social meaning of “belief” in evolution is for members of culturally diverse groups.

Ultimately, I want to offer up some original data that helps to make my meaning clear.

But let’s start with some science of science communication basics. I’d be tempted to say they are ones that bear repeating over and over and over if I didn’t recognize that the persistence of disregard for them among popular commentators can’t plausibly be explained by the failure of those who have made or who are familiar with these findings to point them out time and again.

I start with these well-established findings, then, just so it will be clear what I see as the modest increment of corroboration and refinement to be added with the new data I'll describe.

Getting clear on what’s already known is what I’ll do in this post, which is part 1 of a 2-part series on evolution, ordinary science intelligence, religion, and (ultimately) how all of these are intertwined with the central constitutional difficulty of the Liberal Republic of Science. Part 2 is where I’ll get to the original data.

First, “believing in evolution” is not the same as “understanding”--or even having the most rudimentary knowledge of--what science knows about the career of life on our planet. Believing and understanding are in fact wholly uncorrelated.

That is, those who say they “believe” in evolution are no more likely to be able to give a passable—as in high school biology passing grade—account of “natural selection,” “random mutation,” and “genetic variation” (the basic elements of the “modern synthesis” in evolutionary theory) than those who “disbelieve.” Indeed, few people can.

Those who “believe,” then, don’t “know” more science than “nonbelievers.” They merely accept more of what it is that science knows but that they themselves don’t understand (which, by the way, is a very sensible thing for them to do; I’ve discussed this before).

Second, being enabled to understand evolution doesn’t cause people to “believe” in it.

It’s possible—with the aid of techniques devised by excellent science educators—to teach a thoughtful person the basic elements of evolutionary theory! Everyone ought to be taught it, not only because understanding this process enlarges their knowledge of all manner of natural and social phenomena but because seeing how human beings came to understand this process furnishes an object lesson in the awe-inspiring power of human beings to acquire genuine knowledge by applying their reason to observation.

But acquiring an understanding of evolution—that is, a meaningful comprehension of how the ferment of genetic variance and random mutation when leavened with natural selection endows all manner of life forms with a vital quality of self-reforming resilience—doesn’t make someone who before that time said they “disbelieved” evolution now say they “believe” it.

Empirical studies—ones with high school and university students—have shown this multiple times. Believe it or not. But if not, you are the one who is closing your mind to insight generated by the application of human reason to observation.

Third, what people say they “believe” about evolution doesn’t reliably predict how much they know about science generally.

This is one of the lessons learned from use of the National Science Indicators.  

The Indicators, which comprise a wide-ranging longitudinal survey of public knowledge, attitudes, and practices, offer a monumentally useful font of knowledge for the study of science and society. Indeed, they are a monument to the insight and public spirit of the scientists (including the scientist administrators inside the NSF) who created and continue to administer it.

Integral to the Indicators is a measure of “science literacy” that has been standardly employed in the social sciences for many years. The Indicators include a “knowledge” battery—an inventory-like set of “facts” such as the decisive significance of the father’s genes in determining the sex of a child and the size of an electron relative to that of an atom.

The Indicators include two true-false items, which state “human beings, as we know them today developed from earlier species of animals,” and “the universe began with a huge explosion,” respectively. Test-takers who consistently get 90+% of the remaining questions on the NSF test correct are only slightly more than 50% likely to correctly answer these questions, which are known as “Evolution” and “Big Bang” respectively.

That tells you something, or does if you are applying reason to observation: it is that “Big Bang” and “Evolution” aren’t measuring the same thing as the remaining items. In fact, research suggests—not surprisingly—that they are measuring a latent or unobserved “religiosity” disposition that is distinct from the latent knowledge of basic science the remaining questions are measuring.

What people are doing, then, when they say they “believe” and “disbelieve” in evolution is expressing who they are. Evolution has a cultural meaning, positions on which signify membership in one or another competing group.

People reliably respond to “Evolution” and “Big Bang” in a manner that signifies their identities.  Moreover, many of the people for whom “false” correctly conveys their cultural identity know plenty of science.

Accordingly, many social scientists interested in reliably measuring how disposed members of the public are to come to know what’s known by science, particularly across place and time, have proposed dropping “Big Bang” and “Evolution" -- not from the survey regularly conducted by the NSF in compiling the Indicators, but from the scale one can form with the other items to measure what people know about what's known to science. 

This proposal has raised political hackles. How, critics ask, can one purport to measure science literacy and leave out evolution and the big-bang theory of the origins of the universe? Someone who doesn’t know these things just is science illiterate!

Well, yes, if you simply define science literacy that way.  Moreover, if you do define it that way, you’ll be counting as “science literate” many people who harbor genuinely ignorant, embarrassing understandings of how evolution works.

Plus you’ll necessarily be dulling the precision of what is supposed to be an empirical measuring instrument for assessing what is known—since people who do know many, many things will “say” they “don’t believe” in evolution. They'll say that even if they -- unlike the vast majority of the public who say they "believe" in evolution--are able to give an admirably cogent account of the modern synthesis.

Indeed, you’ll be converting what is supposed to be a measure of one thing—how much scientific knowledge people have acquired--into a symbol of something else: their willingness to assent to the cultural meaning that is conveyed by saying “true” to Evolution and Big Bang--something many people do for exactly that reason, without having any real comprehension of the science those items embody and without even doing very well on the remainder of the NSF Indicator battery.

Even then, the resulting “scale” won’t be a very reliable indicator of “identity,” since most of the remaining questions bear no particular cultural meaning and thus don’t reliably single out people of opposing cultural styles.

But insisting that the measure that social scientists use to study “science literacy” include Big Bang and Evolution under these circumstances will still convey a meaning.

It is that the enterprise of science is on one side of a cultural conflict between citizens whose disagreement about the best way of life in fact has nothing to do with the authority of science’s way of knowing, which in fact they all accept.

A “science literacy” test that insists that people profess “belief” in propositions that citizens all understand to be expressions of cultural identity is really a pledge of allegiance, a loyalty oath to a partisan cultural orthodoxy.

Steadfastly insisting that the state teach its citizens what science genuinely knows (about evolution, the origins of the universe, and myriad other things), and even more critically how science comes to know what it does, is essential to enabling culturally diverse people to attain happiness by means of their own choosing.

But insisting that they pledge allegiance to a particular cultural orthodoxy doesn't advance any of those ends.  Indeed, it subverts the very constitution of the Liberal Republic of Science.

 

Part 2.

Thursday
Jun 13, 2013

Science literacy & cultural polarization: it doesn't happen *just* with global warming, but it also doesn't happen for *all* risks. Why?

In one CCP study, we found that cultural polarization over climate change is magnified by science literacy (numeracy, too). That is, as culturally diverse (but perfectly ordinary, and not particularly partisan) members of the public become more science literate, they don't converge on the dangers that global warming poses but rather grow even more divided.

Not what you'd expect if you thought that the source of the climate change controversy was a deficit in the public's ability to comprehend science.

But the culturally polarizing effect of science literacy isn't actually that unusual.  It's definitely not the case that all risk issues generate cultural polarization. But among those that do, division is often most intense among members of the public who are the most knowledgeable about science in general.

Actually, in the paper in which we reported the culturally polarizing effect of science literacy with respect to perceptions of climate change risks, we also reported data that showed the same phenomenon occurring with respect to perceptions of nuclear power risks.

Well, here are some more data that help to illustrate the relationship between science literacy and cultural polarization.  They come from a survey of a nationally representative sample of 2000 persons conducted in May and June of this year (that's right--even more fresh data! Mmmmmm mmmm!)

These figures illustrate how public perceptions of different risks vary in relation to science literacy. Risk perceptions were measured with the "industrial strength measure." Science literacy was assessed with the National Science Foundation's "Science Indicators," a battery of questions commonly used to measure general factual and conceptual knowledge about science. 

For each risk, I plotted (using a locally weighted regression smoother, a great device for conveying the profile of the raw data) the relationship between risk perception and science literacy for the sample as a whole (the dashed grey line) and the relationships between them for the cultural groups (whose members are identified based on their scores in relation to the means on the hierarchy-egalitarian and individualist-communitarian worldview scales) that are most polarized on the indicated risk.
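(If you want to reproduce that kind of plot, the smoother is a one-liner in Python/statsmodels; a minimal sketch on simulated stand-in data, with hypothetical variable names:)

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
scilit = rng.normal(size=2000)                    # science literacy score
risk = 4 - 0.5 * scilit + rng.normal(size=2000)   # "industrial strength" risk item

# lowess returns (x, fitted y) pairs sorted by x; plotting the second column
# against the first gives the smoothed profile lines described above.
smoothed = lowess(risk, scilit, frac=0.6)
x_hat, y_hat = smoothed[:, 0], smoothed[:, 1]
```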

The upper-left panel essentially reproduces the pattern we observed and reported on in our Nature Climate Change study. Overall, science literacy has essentially no impact on climate-change risk perceptions. But among egalitarian communitarians and hierarchical individualists--the cultural groups who tend to disagree most strongly on environmental and technological risks--science literacy has offsetting effects with respect to climate change and fracking: it makes egalitarian communitarians credit assertions of risk more, and hierarchical individualists less.

The same basic story applies to the bottom two panels. Those ones look at legalization of marijuana and legalization of prostitution, "social deviancy risks" of the sort that tend to divide hierarchical communitarians and egalitarian individualists.

Neither the level of concern nor the degree of cultural polarization is as intense as those associated with global warming and fracking. But cultural disagreement does intensify with increasing science literacy (it seems to abate for legalization of prostitution among those highest in science literacy, although the appearance of convergence would have to be statistically interrogated before one could conclude that it is genuine).

What to make of this? Well, again, one interpretation--one supported by the study of cultural cognition generally--is that the source of cultural polarization over risk can't plausibly be attributed to a deficit in the public's knowledge or ability to comprehend science.

Instead, it's caused by antagonistic cultural meanings that become attached to particular risks (and related facts), converting them into badges of membership in and loyalty to important affinity groups.

When that happens, the stake individuals have in maintaining their standing in their group will tend to dominate the stake they have in forming "accurate" understandings of the scientific evidence: mistakes on the latter won't increase their or anyone else's level of risk (ordinary individuals' opinions are not of sufficient consequence to increase or diminish the effects of climate change, etc.); whereas being out of line with one's group can have huge, and hugely negative, consequences for people socially.

Ordinary individuals will thus attend to information about the risks in question (including, e.g., the position of "expert" scientists) in patterns that enable them to persist in holding beliefs congruent with their cultural identities.  Individuals who enjoy a higher than average capacity to understand such information won't be immune to this effect; on the contrary, they will use their higher levels of knowledge and analytic skills to ferret out identity-supportive bits of information and defend them from attack, and thus form perceptions of risk that are even more reliably aligned with those that are characteristic of their groups.

That was the argument we made about climate change and science comprehension in our Nature Climate Change study.  And I think it generalizes to other culturally contested risks.

But not all societal risks are contested. The number characterized by culturally antagonistic meanings is, as I've stressed before, quite small; only a handful generate intense cleavages of the sort that characterize climate change, nuclear power, gun control, the HPV vaccine, and (apparently now) fracking.

With respect to the remaining, uncontested issues, we shouldn't expect to see polarization generally. Nor should we expect to see it among those culturally diverse individuals who are highest in science literacy or in other qualities that reflect a higher capacity to comprehend quantitative information.

On the contrary, we should expect such individuals to be even more likely to be converging on the best scientific evidence.  They might be better able to understand such evidence themselves than people whose comprehension of science is more modest. 

But more realistically, I'd say, the reason to expect more convergence among the most science literate, most numerate, and most cognitively reflective citizens is that they are more reliably able to discern who knows what about what. 

The amount of decision-relevant science that it is valuable for citizens to make use of in their lives far exceeds the amount that they could hope to form a meaningful understanding of. Their ability to make use of such information, then, depends on the ability of people to recognize who knows what about what (even scientists need to be able to employ this form of perception and recognition for them to engage in collaborative production of knowledge within their fields).

Ordinary individuals--ones without advanced degrees in science etc. -- are ordinarily able to recognize who knows what about what without difficulty, but one would expect that those who have a refined capacity to comprehend scientific information would likely do even better.

It's the degrading or disrupting of this recognition capacity--in citizens of ordinary and extraordinary science comprehension alike--that makes risks suffused with antagonistic meanings a source of persistent cultural dispute.

Okay, all of that is a matter of surmise and conjecture.  How about some data on the impact of science literacy on less polarizing issues?

I have to admit that I'm not as systematic as I should be -- as I think it is important for all who are studying the "science communication problem" to be -- in studying "ordinary," "boring," nonpolarizing risks.  

But consider this:

Here we see the impact of science literacy, generally and with respect to the cultural groups (this time egalitarian communitarians and hierarch individualists) who are most "divided," on GM foods and childhood vaccination.

In fact, the division is exceedingly modest.  To characterize the levels of disagreement seen here as reflecting "cultural polarization" would be extravagant.  As I've emphasized before, I see little evidence that these are culturally polarizing issues in the U.S., at least for the time being--the casual assertions of some commentators notwithstanding. Those commentators, I think, should be more careful not to mistake agitation among subsegments of the population who are disposed to dramatic, noisy gestures--but who are actually very small and quite remote from the attention of the ordinary, nonpolitical member of the public--for genuine polarization.

Moreover, with respect to both issues, science literacy tends--in general and among the cultural groups whose members are modestly divided--to reduce concern about risk. (Again, a little "blip" like the one at the extreme science-literacy end of "egalitarian communitarians" in the fracking graph is almost certainly just noise, statistically speaking; if we could find the one or two responsible survey respondents, they might in fact be unrepresentatively noisy on this issue.)

That's not "smoking gun" evidence that science literacy tends to improve the public's use of decision-relevant science on societal risks for nonpolarizing issues.

For that, it would be useful to have more evidence of public opinion on risks that provoke even less division and on which the evidence is very, very clear. (It is on vaccines; I am inclined, too, to believe that the evidence on GM foods suggests they pose exceedingly little risk and in fact offset myriad others, from ones associated with malnutrition to crop failure induced by climate change--but I feel I know less here than I do about vaccines and am less confident.)

But the "picture" of how science literacy influences public opinion vaccines and GM foods-- two risk issues that aren't genuinely culturally polarizing -- is strikingly different from the one we see when we look at issues like climate change, or nuclear power, or fracking, where the toxic fog of antagonistic meanings clearly does impede ordinary citizens' ability to see who knows what about what.

Science comprehension -- knowledge of important scientific information but even more important the habits of mind that make it possible to know things in the way science knows them -- is intrinsically valuable. Even if this capacity in citizens didn't make them better consumers of decision-relevant science, a good society would dedicate itself to propagating it as widely as possible in its citizens because in fact the ability to think is a primary human good.

But who could possibly doubt that science comprehension--the greatest amount of it, dispersed as widely as possible among the populace--would make it more likely that the value of decision-relevant science would be realized by ordinary people in their lives as individuals and as citizens of a democracy?  I certainly wouldn't question that!

The polarizing effect of science literacy on culturally contested issues like climate change is not evidence that popular science comprehension lacks value.

On the contrary, it is merely additional evidence of how damaging a polluted science-communication environment is for the welfare of the diverse citizenry of the Liberal Republic of Science.

Tuesday
Jun 11, 2013

Coin toss reveals that 56% (+/- 3%, 0.95 LC) of quarters support NSA's "metadata" monitoring policy! Or why it is absurd to assign significance to survey findings that "x% of American public" thinks y about policy z

Pew Research Center, which in my mind is the best outfit that regularly performs US public opinion surveys (the GSS & NES are the best longitudinal data sets for scholarly research; that's a different matter), issued a super topical report finding that a "majority" -- 56% -- of the U.S. general public deems it "acceptable" (41% "unacceptable") for the "NSA [to be] getting secret court orders to track calls of millions of Americans to investigate terrorism."

Polls like this -- ones that purport to characterize what the public "thinks" about one or another hotly debated national policy issue -- are done all the time.  

It's my impression -- from observing how the surveys are covered in the media and blogosphere-- that people who closely follow public affairs regard these polls as filled with meaning (people who don't closely follow public affairs are unlikely to notice the polls or express views about them).  These highly engaged people infer that such surveys indicate how people all around them are reacting to significant and controversial policy issues. They think that the public sentiment that such surveys purport to measure is itself likely to be of consequence in shaping the positions that political actors in a democracy take on such policies.

Those understandings of what such polls mean strike me as naive.

The vast majority of the people being polled (assuming they are indeed representative of the US population; in Pew's case, I'm sure they are, but that clearly isn't so for a variety of other polling operations, particularly ones that use unstratified samples recruited in haphazard ways; consider studies based on Mechanical Turk workers, e.g.) have never heard of the policy in question. They've never given it a moment's thought.  Their answers are pretty much random--or at best a noisy indicator of partisan affiliation, if they are able to grasp what the partisan significance of the issue is (most people aren't very partisan and can't reliably grasp the partisan significance of issues that aren't high-profile, perennial ones, like gun control or climate change).

There's a vast literature on this in political science. That literature consistently shows that the vast majority of the U.S. public has precious little knowledge of even the most basic political matters. (Pew -- which usually doesn't do tabloid-style "issue du jour" polling but rather really interesting studies of what the public knows about what -- regularly issues surveys that measure public knowledge of politics too.)

To illustrate, here's something from the survey I featured in yesterday's post.  The survey was performed on a nationally representative on-line sample, assembled by YouGov with recruitment and stratification methods that have been validated in a variety of ways and generate results that Nate Silver gives 2 (+/- 0.07)  thumbs up to.

In the survey, I measured the "political knowledge" of the subjects, using a battery of questions that political scientists typically use to assess how civically engaged & aware people are.

One of the items asks:

How long is the term of office for a United States Senator? Is it

(a) two years

(b) four years

(c) five years or

(d) six years?

 Here are the results:

Got that? Only about 50% of the U.S. population says "6 yrs" is the term of a U.S. Senator (a result very much in keeping with what surveys asking this question generally report).

How should we feel about half the population not knowing the answer to this question?

Well, before you answer, realize that less than 50% actually know the answer.

If the survey respondents here had been blindly guessing, 25% would have said 6 yrs.  So we can be confident the proportion who picked 6 yrs because they knew that was the right answer was less than 50% (how much less? I'm sure there's a mathematically tractable way to form a reasonable estimate -- anyone want to tell us what it is and what figure applying it yields here?).
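(For what it's worth, one standard "correction for guessing"--which assumes, heroically, that everyone who doesn't know the answer guesses uniformly at random among the four options--goes like this:

$$p_{\text{correct}} = p_{\text{know}} + \bigl(1 - p_{\text{know}}\bigr)\cdot\tfrac{1}{4} \;\Longrightarrow\; p_{\text{know}} = \frac{p_{\text{correct}} - \tfrac{1}{4}}{1 - \tfrac{1}{4}} = \frac{0.50 - 0.25}{0.75} \approx \tfrac{1}{3}$$

On that strong assumption, only about a third of respondents actually know the answer.)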

And now just answer this question: Why on earth would anyone think that even a tiny fraction of a sample less than half of whose members know something as basic as how long the term of a U.S. Senator is (and only 1/3 of whom can name their congressional Representative, and only 1/4 of whom can name both of their Senators...) has ever heard of the "NSA's phone tracking" policy before being asked about it by the pollster? 

Or to put it another way: when advised that "x% of the American public believes y about policy z," why should we think we are learning anything more informative than what a pollster discovered from the opinion-survey equivalent of tossing thousands and thousands of coins in the air and carefully recording which sides they landed on?
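Incidentally, the coin-toss "finding" in this post's title is easy to reproduce. A minimal sketch--the sample size is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1004                             # a typical national-poll sample size (assumed)
tosses = rng.integers(0, 2, size=n)  # each quarter "answers" the NSA item at random

p_hat = tosses.mean()
moe = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)  # standard 95% margin of error
print(f"{p_hat:.0%} of quarters support the policy, +/- {moe:.1%} (0.95 LC)")
```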

Monday
Jun 10, 2013

What are fearless white hierarchical individualist males afraid of? Lots of stuff!

I haven't posted any data recently. And I haven't explored/exploded the "white male effect" (WME) in risk perception in a while either.  So let's pack some new data around WME & blow her to smithereens!

Actually, the "white male effect" is one of the most important phenomena -- one of the coolest findings ever -- in the study of public risk perceptions.

WME refers to the tendency of white males to express less concern with (seemingly) all manner of risk than do minorities and women. The finding was first observed by Flynn, Slovic & Mertz (1994) and thereafter systematically charted by Finucane, Slovic, Mertz, Flynn, & Satterfield (2000).

Lots of scholars have looked at it since, trying to figure out what explains it.  Does it reflect some sort of "hard wired" or "genetic" disposition on the part of women to be more concerned about the welfare of others (obvious question: if so, why are minority males more concerned?) Are men evolutionarily programmed to be more "risk seeking" (same obvious question.) Are white males less concerned because they are politically less vulnerable themselves than minorities and women? Or maybe white males are just "getting it right" -- because they are more educated, less vulnerable to cognitive biases?

None of the above is probably the best answer. 

What makes those explanations weak is that there really isn't a "white male effect."  Rather, there's a white male hierarch individualist effect.

In a study in which I collaborated with Slovic, Braman, Gastil & Mertz (2007), we used the cultural cognition worldview scales as a magnifier to inspect more closely cultural influences observed in Finucane et al. (2000).

What we found, in effect, was that white hierarchical and individualistic males are so extremely skeptical of risks involving, say, the environment or (another thing we looked at) guns that they create the appearance of a  sample-wide "white male" effect.  That effect "disappears" once the extreme skepticism of these individuals (less than 1/6 of the population) is taken into account.  There isn't any WME among individuals who are egalitarian and communitarian, hierarchical communitarian or (in the case of environmental risks) egalitarian and individualistic in their outlooks.

This finding fit the hypothesis that "identity protective cognition" was driving WME.  Identity protective cognition is a form of motivated reasoning.  It describes the tendency of people to fit their perceptions of risk (and related facts) to ones that reflect and reinforce their connection to important affinity groups, membership in which confers psychic, emotional, and material benefits.  The study of cultural cognition reflects the premise that the latent group affinities measured with the "cultural worldview scales" we employ in our studies are the ones motivating risk perceptions in conflicts that polarize the U.S. public.

The sorts of things white hierarchical individualistic males are "unafraid of" are activities essential to the cultural roles they tend to occupy.  Among people who subscribe to that outlook, men attain status by occupying positions of authority in commerce and industry.  Gun possession plays an important role for men in such groups too--enabling hierarchical roles like father, protector, and provider and symbolizing individualistic (male) virtues like honor and courage and self-reliance.

Because the assertion that such activities are "dangerous" would justify restriction of them by the state -- and invite resentment and stigmatization of those individuals conspicuously identified with them -- hierarchical and individualistic white males have an especially powerful psychological incentive to resist such claims.

That was our conjecture--one founded generally on Mary Douglas's and Aaron Wildavsky's "cultural theory of risk"--and the evidence was more consistent with that than with other explanations, we suggested.  Other researchers have corroborated this hypothesis with related but distinct methods (that's a good thing; being able to verify a hypothesis with multiple methods furnishes assurance that the effect is really "there" and not an artifact of a particular way of trying to test for it).

But here's another thing-- or some more evidence, really.  If identity-protective cognition is at work, there's no reason to believe that white hierarchical individualist males will be uniformly more "risk dismissive" than other people.  

They'll be that way only with regard to private activities the regulation of which poses a threat to activities essential to their cultural status.  Where regulation itself poses such a threat, they should worry about the risks that such regulation poses.  Moreover, if we can find private activities that threaten their cultural identities, their stake in securing regulation of them should motivate them to be risk sensitive in regard to those activities!

And we see exactly that! I'll show you in brand new data, collected in April and May of this year.

But first let's use these fresh data (mmmm mmmm--don't you love the aroma of freshly regressed data?!) to observe the "classic" white male effect.

This figure illustrates the "effect" with regard to climate change:

Using the "industrial strength risk perception measure," we can see that white males are a lot less worried about climate change than "everyone else."

But consider this figure:

Click on me! Or I'll turn you into a white male hierarch individualist!

This graphic, which uses a Monte Carlo simulation to illustrate the results of a multivariate regression analysis, shows that the "white male effect" is being driven by the extreme climate change skepticism of white hierarchical individualistic males (who are, again, about 1/6 of the population).  There's no meaningful gender or race variance among the rest of the subjects in this nationally representative sample.
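For readers curious about the Monte Carlo bit: the standard approach (King, Tomz & Wittenberg's "statistical simulation") draws many plausible coefficient vectors from the estimated sampling distribution of the regression and pushes a covariate profile through each. A minimal sketch on toy stand-in data--the covariates and coefficients are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Toy stand-in data: regress a risk-perception item on race, gender, and a
# hierarchy-individualism score (all names and coefficients are assumptions).
rng = np.random.default_rng(0)
n = 2000
X = sm.add_constant(np.column_stack([
    rng.integers(0, 2, n),   # white (0/1)
    rng.integers(0, 2, n),   # male (0/1)
    rng.normal(size=n),      # hierarchy-individualism worldview score
]))
y = X @ np.array([4.0, -0.2, -0.2, -0.6]) + rng.normal(size=n)
fit = sm.OLS(y, X).fit()

# Draw coefficient vectors from the estimated sampling distribution, then
# push one covariate profile (white male, high H-I score) through each draw.
draws = rng.multivariate_normal(fit.params, fit.cov_params(), size=10_000)
x_profile = np.array([1.0, 1.0, 1.0, 1.5])
sims = draws @ x_profile
lo, hi = np.percentile(sims, [2.5, 97.5])  # the kind of interval one would plot
```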

Now consider a larger collection of risks:


Holy smokes!

These are the mean scores for white male hierarchical individualists and "everyone else" on a range of risks, the perceptions of which are all measured with the "industrial strength" measure.

What do we see?  Lots of cool things!

For one, we see that those "fearless" white hierarchical individualistic males aren't so brave after all.  Sure climate change doesn't scare them, but the potential impact of restrictions on handguns on the "health, safety, and prosperity" of members of our society sends chills up their spine.

Environmental and government regulations are, of course, scary to them too. Those can wreck the economy. Ask any hierarchical individualistic white male for evidence & he'll have no trouble supplying it -- just look at the financial collapse of 2008.

And let's hope that Obama -- who in the eyes of a hierarchical white male individualist likely can't be counted on to do much of anything good -- will hold firm on marijuana criminalization.  Most people don't think so, but the white male hierarchical individualist knows that the dangers to society from decriminalization would be devastating. 

And what do you know: guns certainly aren't dangerous ("people kill people" etc); but privately owned drones-- yow! Terrifying! (Mystery -- who is disgusted, and why, by drones -- half-solved.)

Hey there are some other cool things here too, don't you think?  Look at childhood vaccines. No one--neither white hierarchical individualistic males nor anyone else--is concerned.  A surprise only to those who believe what they read in the papers, where the ravings of a small sect regularly transmute into a "growing crisis of public confidence" in vaccines. (To anticipate comments: Yes, the small sect is an unreasoning, noxious health menace and should be opposed; but no, that doesn't mean that it's a sensible risk-communication strategy to mislead the public about the facts, which show no slippage in the last decade in childhood vaccination rates from their historic levels of well over 90%, and no meaningful increase in the "exemption" rate, which has remained < 1%.)

And here's something I wasn't expecting at all: Look at genetically modified foods.  No cultural dissensus--that's not new. But the apparent consensus that GM foods are risky--more, certainly, than global warming, and more too than anything except terrorism--that's a change relative to what I've observed in various surveys like this that I've done over the yrs.

Is that evidence that the effort to protect the science communication environment from being polluted on this issue is failing? Could be; although I still think that the most important thing is to avoid cultural polarization, since that's the form of pollution, I'm convinced, most toxic to the reasoning faculty that ordinary members of the public-- of all cultural outlooks -- use to discern what's known to science.

Okay-- that was fun, wasn't it?

And don't forget about the wildly popular Cultural Cognition Site game show "WSMD?, JA!"  Been a long time since we played that!

References

Douglas, M. & Wildavsky, A.B. Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers. (University of California Press, Berkeley; 1982).

 Finucane, M., Slovic, P., Mertz, C.K., Flynn, J. & Satterfield, T.A. Gender, Race, and Perceived Risk: The "White Male" Effect. Health, Risk, & Soc'y 3, 159-172 (2000).

Flynn, J., Slovic, P. & Mertz, C.K. Gender, Race, and Perception of Environmental Health Risk. Risk Analysis 14, 1101-1108 (1994).

Kahan, D.M., Braman, D., Gastil, J., Slovic, P. & Mertz, C.K. Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception. Journal of Empirical Legal Studies 4, 465-505 (2007).

McCright, A.M. & Dunlap, R.E. Bringing ideology in: the conservative white male effect on worry about environmental problems in the USA. J Risk Res, doi:   (2012).

McCright, A.M. & Dunlap, R.E. Cool dudes: The denial of climate change among conservative white males in the United States. Global Environmental Change 21, 1163-1172 (2011).

Nelson, Julie. Are Women Really More Risk-Averse than Men? INET Research Note (Sept. 2012).

Nelson, Julie.  Is Dismissing the Precautionary Principle the Manly Thing to Do? Gender and the Economics of Climate Change, INET Research Note (Sept. 2012)


Friday
Jun 7, 2013

Five theses on science communication: the public and decision-relevant science, part 2

This is the second part of a two-part series that recaps a talk I gave at a meeting of the National Academy of Sciences' really cool Public Interfaces of the Life Sciences Initiative.

The subject of the talk (slides here) was the public's understanding of what I called "decision relevant science" (DRS)--meaning science that's relevant to the decisions that ordinary members of the public make in the course of their everyday lives as consumers, as parents, as citizens, and the like.

Part 1 recounted a portion of the talk that I invited the audience to imagine came from a reality tv show called "Public comprehension of science--believe it or not!," a program, I said, dedicated to exploring oddities surrounding what the public knows about what's known to science.  The concluding portion of the talk, which I'll reconstruct now, presented five serious points --or points that I at least intend to be serious and be taken seriously--about DRS, each of which in fact could be supported by one of the three "strange but true" stories featured in the just-concluded episode of "Public comprehension of science--believe it or not!"

I. Individuals must accept as known more DRS than they can ever possibly understand

In the first story featured in the show, we learned that individuals belonging to that half of the US population that purports to "believe" in evolution are no more likely to be able to give a cogent account of the "modern synthesis" (natural selection, genetic variance, and random mutation) than those belonging to the half that asserts "disbelief."  In fact, very small proportions of either group can give such an account.

Thus, most of the people who quite properly accept evolution as "scientific fact" (including, I'm confident, the vast majority who view those who disbelieve in it as pitifully ignorant) believe in something they don't understand.

That's actually not a problem, though.  Indeed, it's a necessity!

The number of things known to science that it makes sense for a practical person to accept as true (that a GPS system, exquisitely calibrated in line with Einstein's theories of special and general relativity, will reliably guide him to where he wants to go, for example) far exceeds what such an individual could ever hope to comprehend in any meaningful way on his own. Life is too short.

Indeed, it will be a good deal shorter if, before accepting that it makes sense not to smoke, such a person insists on verifying for himself that smoking causes cancer -- or if, before taking antibiotics, he insists on verifying that they do in fact kill disease-causing bacteria but do not -- as 50% of the U.S. population thinks, "believe it or not!" -- kill viruses.

II. Individuals acquire the insights of DRS by reliably recognizing who has them.

Yet it's okay, really, for a practical, intelligent person not to acquire the knowledge that antibiotics kill only bacteria and not viruses. He doesn't have to have an MD to get the benefits of what's known to medical science.  He only has to know that if he gets sick, the person he should consult and whose advice he should follow is the doctor.  She's the one who knows what science knows there.

That's how, in general, individuals get the benefit of DRS--not by understanding it themselves but by reliably recognizing who knows what about what because they know it in the way that science counts as knowing.  

Why not go to a faith healer or a shaman when one has a sore throat -- or a cancerous lesion or persistent hacking cough? Actually, some very tiny fraction of the population does. But that underscores only that there really are in fact people out there whose "knowledge" of matters of consequence to ordinary people's lives is not the sort that science would recognize -- and that precious few people (in a modern liberal market society) treat them as reliable sources of knowledge.

Ordinary people reliably make use of all manner of DRS -- medical science is only one of many kinds -- not because they are experts on all the matters to which DRS speaks but because they are themselves experts at discerning who knows what's known to science.

III.  Public conflict over DRS is a recognition problem, not a comprehension problem.

Yet ordinary members of the public do disagree--often quite spectacularly--about certain elements of DRS. These conflicts are not a consequence of defects in public comprehension of science, however. They are a product of the failure of ordinary members of the public to converge in the exercise of their normal and normally reliable expert ability to recognize who knows what about what.

Believe it or not, one can work out this conclusion logically on the basis of information related in the "Public Comprehension of Science--Believe it or Not!" show.  

Members of the public, we learned, are (1) divided on climate science and (2) don't understand it (indeed, the ones who "believe" in it, like the ones who believe in evolution, generally don't have a meaningful understanding of what they believe).

But (2) doesn't cause (1).  If it did, we'd expect members of the public to be divided on zillions of additional forms of DRS on which they in fact are not.  Like the efficacy of antibiotics, which half the population believes (mistakenly) kill viruses.  

Or pasteurized milk.  No genuine cultural conflict over that, at least in the US.  And the reason isn't that people have a better grasp of biology than they do of climate science. Rather it's that there, as with the health benefits of antibiotics, they are reaching the same conclusion when they exercise their rational capacity to recognize who knows what science knows on this matter.  

Indeed, those of you who are leaping out of your seats with excitement to point out the freaky outlier enclaves in which there is a dispute about pasteurization of milk in the US, save yourselves the effort! What makes the spectacle of such conflicts newsworthy is precisely that the advocates of the health benefits of "raw milk" are people whom the media knows the vast run of ordinary people (the news media consumers) will regard as fascinatingly weird.

Because people acquire the insights of DRS by reliably recognizing who knows what science knows, conflicts over DRS must be ones in which they disagree about what those who know what science knows know.

This conclusion has been empirically verified time and again.  

On matters like the risks of climate change, the safety of nuclear power waste disposal, the effects of gun control on crime, and the efficacy and side effects of the HPV vaccine, no one (or no one of consequence, if we are trying to understand public conflict as opposed to circus sideshows) is saying "screw the scientists--who cares what they think!"

Rather, everyone is arguing about what "expert scientists" really believe. Using their normal and normally reliable rational powers of recognition, those on both sides are concluding that the view that their side accepts is the one consistent with "scientific consensus."

What distinguishes the small number of issues on which we see cultural polarization over DRS from the vast number on which we don't has nothing to do with how much science the public comprehends. Rather, it has everything to do with the peculiar tendency of the former to evade the common capacity enjoyed by culturally diverse citizens to recognize who knows what is known to science.

IV. The recognition problem reflects a polluted science communication environment.

A feature that these peculiar, recognition-defying issues share is their entanglement in antagonistic cultural meanings. 

For the most part, ordinary people exercise their capacity to recognize who knows what about what by consulting other people "like them." They are better able to "read" people who share their particular outlooks on life; they enjoy interacting with them more than with people who subscribe to significantly different understandings of the best way to live, and are less likely to get into squabbles with them as they exchange information. "Cultural communities" -- networks of people connected by intense emotional ties and like affinities -- are the natural environment, then, for the exercise of ordinary citizens' rational recognition capacity.

Ordinarily, too, these communities, while plural and diverse, point their respective members in the same direction. Any such community that consistently misled its members about DRS wouldn't last long, given how critical DRS is to its members' flourishing -- indeed, their simple survival.

But every now and again, for reasons that are not a complete mystery but that are still far from adequately understood, some fact -- like whether the earth is heating up -- comes to be understood as a kind of marker of cultural identity.  

The position one holds on a fact like that will then be experienced by people -- and seen by others (the two are related, of course) -- as a badge of membership in, and loyalty to, one or another cultural group.

At that point, reasonable people become unreasonably resistant to changing their minds--and for reasons that, in a sad and tragic sense, are perfectly rational.  

The stake they have in maintaining group-convergent beliefs will usually be much bigger than any they might have in being "right." Making a "mistake" on the science of climate change, e.g., doesn't affect the risk that any ordinary member of the public -- or anyone or anything she cares about -- faces: she just doesn't matter enough as a consumer, a voter, a public deliberator, etc. to make a difference. But if she forms a view on it that is out of line from the point of view of those who share her cultural allegiances, then she is likely to suffer tremendous costs--psychic, emotional, and material--given the function that positions on climate change perform in identifying who belongs to such groups and can be trusted.
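
To see the asymmetry in toy form, here is a minimal back-of-the-envelope sketch in Python. Every number is made up for illustration -- this estimates nobody's actual costs -- but the structure of the comparison is the point:

```python
# Toy illustration (all numbers invented) of why the group-convergent
# belief can be individually rational: one person's belief has ~no
# effect on the climate or on policy, so the expected personal cost of
# being "wrong" is ~zero, while the cost of deviating from one's group
# is borne with certainty.

P_BELIEF_AFFECTS_OUTCOME = 1e-9    # one voter/consumer among many millions
COST_OF_BAD_CLIMATE_OUTCOME = 1e6  # stipulated personal harm if policy fails
COST_OF_GROUP_DEVIANCE = 1e3       # stipulated psychic/material cost of dissent

expected_cost_of_error = P_BELIEF_AFFECTS_OUTCOME * COST_OF_BAD_CLIMATE_OUTCOME
expected_cost_of_deviance = COST_OF_GROUP_DEVIANCE  # incurred for sure

print(f"expected cost of a mistaken belief:    {expected_cost_of_error:.3g}")
print(f"expected cost of deviating from group: {expected_cost_of_deviance:.3g}")
# ~0.001 vs. 1000: conforming dominates, whichever side happens to be right.
```

On any stipulation remotely like that one, the group-convergent belief wins -- which is the sense in which the resistance to changing one's mind is "perfectly rational."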

These antagonistic meanings, then, can be viewed as a form of pollution in the science communication environment. They enfeeble the normally reliable faculties of recognition that ordinary members of the public use to discern DRS.

People overwhelmingly accept that doctors and public health officials are the authorities to turn to for access to the health benefits of what's known to science, and ordinarily have little difficulty in discerning what those experts believe and are counseling them to do. But when facts relating to medical treatments become suffused with culturally antagonistic meanings, ordinary members of the public are not able to figure out what such experts actually know.

The US public isn't divided over the risks and benefits of mandatory vaccination of children for Hepatitis B, a sexually transmitted disease that causes a deadly form of cancer.  Consistent with the recommendation of the CDC and pediatricians, well over 90% of children get the HBV vaccination every year.

Americans are culturally divided, however, over whether children should get the HPV vaccine, which likewise confers immunity to a sexually transmitted disease (the human papillomavirus) that causes a deadly form of cancer. For reasons having to do with the ill-advised process by which it was introduced into the US, the HPV vaccine became suffused with antagonistic cultural meanings--ones relating to gender norms, sexuality, religion, and parental sovereignty.

Parents who want to follow the advice of public health experts can't discern what their position is on the HPV vaccine, even though it is exactly the same as it is on the HBV vaccine. Experimental studies have confirmed that exposure to the antagonistic meanings surrounding the former makes parents unable to form confident judgments about what experts believe about the risks and benefits of the HPV vaccine, even though the CDC and pediatricians support it to the same extent, and for the same reasons, as they do the HBV vaccine.

The antagonistic cultural meanings that suffuse issues like climate change and the HPV vaccine confront ordinary people with an extraordinary conflict between knowing what's known to science and being who they are. This toxic environment poses a singular threat to their capacity to make use of DRS to live happy and healthy lives. 

V. Protecting the science communication environment from contamination is a critical aim of the science of science communication.

Repelling that threat demands the development of a systematic societal capacity to protect the science communication environment from the pollution of antagonistic cultural meanings.

Technologies for abating the dangers human beings face are not born with antagonistic cultural meanings.  They acquire them through historical contingencies of myriad forms. Strategic behavior plays a role; but sheer accident and misadventure also contribute.

Understanding the dynamics that govern this pathology is a central aim of the science of science communication.  We can learn how to anticipate and avoid them in connection with emerging forms of practical science, such as nanotechnology and synthetic biology. And we can perfect techniques for removing antagonistic meanings in the remaining instances in which intelligent, self-conscious protective action fails to prevent their release into the science communication environment.

The capacity to reliably recognize what is collectively known is not some substitute for the attainment of scientific knowledge. It is in fact a condition of it, within the practice of science and outside of it.

In discerning DRS, the public is in fact exercising the most elemental form of human rationality.

Securing the political and social conditions in which that faculty can reliably function is the most important aim of the science of science communication. 

Tuesday
Jun042013

"Public comprehension of science--believe it or not!": the public and decision-relevant science, part 1 

Gave talk yesterday at a meeting of the Public Interfaces of the Life Sciences Initiative of the National Academy of Sciences.  The aim of the Initiative is to identify various avenues—in education, in political life, and in civil society—for enlarging the role that the life sciences play in everyday life.

The Initiative is typical of the leadership role the NAS has fittingly assumed in integrating the practice of science with the scientific study of how ordinary citizens come to know what is known by science—a commitment on the Academy’s part that was highlighted in its Science of Science Communication Sackler colloquium in the Spring of 2012.

My talk was on how the public thinks about decision-relevant science. This is part 1 of 2. But slides for the whole thing are here.

As is well-known to readers of this blog, I believe that doing and communicating science are very different things, even when the sort of science being done is the science of science communication.  Indeed, I believe the “science communication problem”—the persistent failure of the availability of valid science to quiet public controversy over risks and other policy-relevant facts to which that science speaks in a compelling way—is a consequence of our society's failure to devise practices and construct institutions that recognize fully the significance of the communicating-doing distinction.

To effectively communicate this point, I thought I would demonstrate what strikes me—as someone who only does the science of science communication—as a clever way to communicate what I know to the public.

I told my audience that I would present the first part of my remarks in the style of a “reality tv” program or the like entitled, “Public comprehension of science—believe it or not!,” a show dedicated to sharing with viewers instances of the myriad “ ‘strange but true’ characteristics of the public’s knowledge of what science knows.”

This week’s episode (I told them) would feature three stories:

1.  Evolution: “believing,” “disbelieving” & understanding

About half of the general public in the U.S. does not “believe” that humans “evolved” from other animal species. They “believe” instead that humans were created, as is, by God.

This is not surprising news to regular viewers of this program—or likely to anyone else. We are reminded of this fact at least once a year by Gallup, which has been polling Americans about their “belief” in evolution—and reporting more or less the same result—for many many years.

The “strange but true” thing is this: the half of the U.S. population that does “believe” in evolution is no more likely than the half that doesn’t to be able to pass a high school biology test on the rudiments of how evolution works.

There is, researchers have found again and again, no correlation between whether someone says they “believe” in evolution and their understanding of the concepts of “natural selection,” “genetic variance,” and “random mutation”—the basic elements of the dominant, “modern synthesis” position in the science of evolution.

In fact, distressingly few of either the believers or disbelievers have an accurate comprehension of these dynamics.

And there’s another curious thing about “belief” & “disbelief” in evolution.

It’s definitely possible to teach people the basic elements of the modern synthesis, which are remarkably and elegantly simple. The evidence that supports them is reasonably straightforward too.

But imparting such understanding also has zero effect on the likelihood that those who then demonstrate basic comprehension of evolution say they “believe” in it! 

Researchers have demonstrated this multiple times, too, with both high school and college students.

Strange but true!

2.  Climate change risk perceptions: “fast” & “slow”

This week’s second story involves public comprehension of climate science.

The U.S. public doesn’t get it.

This was the conclusion of a very impressive 1992 study, which found that those members of the public who believed climate change was occurring tended to attribute it to holes in the ozone layer and other irrelevant phenomena.

When researchers re-did the study in 2009, the public was still woefully ignorant of elementary climate science. They found, of course, that a great many members of the public didn’t accept that global temperatures were increasing as a result of human CO2 emissions.

But even among the segment of the public who said they did accept this, the researchers found myriad, remarkable misunderstandings, including the belief that aerosol spray cans were one source of the problem and that cleaning up toxic waste sites would help to ameliorate it.

And here’s another thing.

The public tends to over-rely on cognitive heuristics in forming perceptions of risk. This is the theme, of course, of Daniel Kahneman’s Nobel Prize winning work, and his excellent book Thinking, Fast and Slow.  

Various commentators who draw on Kahneman’s work (but interestingly not Kahneman himself, to my knowledge) assert that “bounded rationality” of the sort documented in this work explains why members of the general public don’t universally share climate scientists’ concern about the dangers that climate change poses to human wellbeing.

But social science evidence has established that those members of the public who are the most science literate, and who score highest in measures of the disposition to use reflective modes of reasoning (the “slow” kind, in Kahneman’s typology) are in fact the most culturally polarized on climate change risks!

As members of the public become more science literate, more numerate, and the like, they don’t converge on what climate scientists know.  They just become more reliable “indicators” of what people who hold particular cultural values believe.

Believe it or not . . . .

3.  Antibiotics: consensus, scientific & public

The last story for this week concerns antibiotics.

There is really no meaningful public controversy—cultural or otherwise—over whether someone who is not feeling well should seek medical treatment, and should take antibiotics if his or her physician prescribes them. 

But 50% of the U.S. public believes that antibiotics kill viruses and not just bacteria.

This is a consistent finding in studies that administer the NSF’s “Science Indicators,” the standard “science literacy test” used to measure what members of the public know about basic science—not just in the U.S. but globally.

Now in fact, the question is a “true-false” one, and so one might conclude that members of the U.S. public are doing no better than chance in their responses here.
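
For what it’s worth, the “no better than chance” conjecture is easy to check when one has the raw responses: a simple binomial test against 50%. Here is a minimal sketch; the helper function and the sample sizes are my own hypotheticals, not anything from the NSF studies:

```python
import math

def binomial_chance_test(correct: int, n: int, p: float = 0.5) -> float:
    """Two-sided test (normal approximation to the binomial) of whether
    the observed proportion correct differs from chance performance p."""
    p_hat = correct / n
    se = math.sqrt(p * (1 - p) / n)   # standard error under the null
    z = (p_hat - p) / se
    # two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical: 1,000 respondents, 500 answer the true-false item correctly
print(binomial_chance_test(500, 1000))  # ~1.0 -- indistinguishable from guessing
print(binomial_chance_test(550, 1000))  # ~0.002 -- reliably better than chance
```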

But interestingly, U.S. respondents score consistently higher than members of the public from other countries, including Japan, Russia, South Korea, and the EU nations.  So really, we “know more” science than they do here.

Indeed, members of the public in the US tend to score higher on lots of items on the NSF science literacy test.  It really is tempting to say that the US is more science literate than the rest of the world!

Except that members of the rest of the world do so much better than we do on the NSF indicator item that asks whether humans evolved from other animals . . . .

But you know what that actually signifies? That the NSF item on “evolution” isn’t measuring the same thing as the rest of the test.  Those who consistently answer 90+% of the other questions correctly are only slightly more likely than 50% to answer the evolution question correctly.

Actually, that shouldn’t surprise you at this point: it follows, almost logically, from the first story in this show, which related that there is really no relationship between saying one “believes” in evolution and being able to form an accurate scientific understanding of evolutionary theory.

Social scientists have demonstrated that the “evolution” question is actually not measuring the same “science comprehension” quality in people who take the NSF science literacy test as the other items.  It is measuring their religiosity.

Yet proposals to exclude the evolution question from measures of “science literacy”  in studies that correlate science literacy with other attitudes tend to provoke significant controversy.  Critics say the item should be included even though it indisputably reduces the precision of the science literacy score as a measure of a latent science comprehension aptitude or disposition.

Sad but true. . . .
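
For the methodologically curious: the check underlying the “not measuring the same thing” claim is, in essence, an item-rest correlation: how well each item correlates with the sum of all the others. Here is a minimal sketch in Python using simulated data (not the real NSF responses) in which nine items are driven by one latent trait and a tenth by a different one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated respondents (not real survey data): nine items tap a
# "science comprehension" trait; a tenth ("evolution") item instead
# taps a separate "religiosity" trait.
comprehension = rng.normal(size=n)
religiosity = rng.normal(size=n)

items = [(comprehension + rng.normal(size=n) > 0).astype(int) for _ in range(9)]
items.append((-religiosity + rng.normal(size=n) > 0).astype(int))
scores = np.column_stack(items)

# Item-rest correlation: each item vs. the sum of all the *other* items.
for i in range(scores.shape[1]):
    rest = scores.sum(axis=1) - scores[:, i]
    r = np.corrcoef(scores[:, i], rest)[0, 1]
    print(f"item {i + 1:2d}: item-rest r = {r:.2f}")
# Items 1-9 show healthy positive correlations; item 10 hovers near zero,
# the signature of an item measuring a different latent trait.
```

In data patterned like the real thing, the evolution item behaves like item 10 here: it measures something, just not the same thing as the rest of the scale.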

Next time: Five theses on public understanding and decision-relevant science, each of which can be illustrated using the three stories from this week’s episode of “Public Comprehension of Science—Believe it or Not!”

Not to give anything away, but if you think that what I’ve told you so far means (or even means that I think) the public is irrational, you are very wrong.

Wrong about what it means, and wrong about what public rationality and its relationship to decision-relevant science consist in. 

Part 2.

Thursday
May302013

Polarization on policy-relevant science is not the norm (the "silent denominator" problem)

Ever hear of the Formaldehyde Emissions from Composite Wood Products Act of 2010?

Didn't think so. 

As the Environmental Protection Agency explains, the Act (signed into law by President Obama on July 7, 2010, after being passed, obviously, by both Houses of Congress)

establishes limits for formaldehyde emissions from composite wood products: hardwood plywood, medium-density fiberboard, and particleboard. The national emission standards in the Act mirror standards previously established by the California Air Resources Board for products sold, offered for sale, supplied, used or manufactured for sale in California.

The legislation directs the EPA to promulgate implementing regulations relating to "labeling," "chain of custody requirements," "ultra low-emitting formaldehyde resins," "exceptions ... for products ... containing de minimis amounts of composite wood," etc.  The agency just issued proposed rules for notice & comment yesterday!

Why am I telling you about this?  Well, first of all, because I know you've never heard of this regulatory scheme (if you have, you are a freak and are proud of it, so the point I'm going to make still applies).

Because you haven't, the issue of formaldehyde regulation is absent from your mental inventory of risks managed through the application of scientific knowledge.

Because this law -- along with billions and billions (or at least 10^3's) of others informed by science -- is missing from your risk regulation inventory, there's a serious risk that you are overestimating the frequency with which risk issues provoke cultural polarization.

I'm sure some segment of the population somewhere is really freaked out by formaldehyde and another drinks a glass of it for breakfast every day just to prove a point. But these citizens are really outliers; whatever group-based conflict there might be about formaldehyde is nothing like the ones over climate change, nuclear power, HPV, guns, etc.

Very very very few risk and other policy issues that turn on science provoke meaningful cultural conflict. The ratio of polarizing to nonpolarizing issues of that sort is minuscule.

That doesn't mean that those issues get regulated in an optimal manner.  But it means that one of the largest obstacles to rational engagement with science in policymaking is absent -- and that's an undeniably good thing for enlightened self-government.

The science-informed policy issues that don't provoke controversy are, of course, boring.  That's why most people don't know about them.

But if you do notice and give some thought to them, a couple of interesting and important things will occur to you.

First, insofar as the number of science-informed policy issues that could provoke cultural polarization is very small relative to the number that actually do, there must be something, and something strange, going on with the ones that actually do end up generating that sort of division.

It's critical to figure out how to fix a broken debate like the one over climate change.

But we should also be figuring out why this sort of weird pathology happens and how we can avoid it.

That's one of the objectives of the science of science communication. Indeed, it's probably the most important contribution this science can make to the welfare of democratic societies.

Second, if you notice all these boring, nonpolarized forms of science-informed risk regulation, you'll realize that the thing that makes some issues become polarized can't be lack of public knowledge about the science surrounding them.

It's true that members of the public don't know much about the science of climate change, nuclear power, the HPV vaccine, etc. But the public doesn't know anything more about the science relating to the vast range of issues that fail to generate polarization.

Members of the public wouldn't score higher on a "formaldehyde science literacy" test than a climate science literacy test.

Formaldehyde scientists aren't better "science communicators" than climate scientists. 

That doesn't mean, either, that members of the public are necessarily uninformed.

Obviously, members of the public couldn't possibly be expected to know and understand all the science that is relevant to protecting their health and wellbeing--whether that science informs regulations that protect them from exposure to toxic substances or medical procedures that protect them from diseases. 

But just as a reflective individual doesn't have to have an MD to participate in an informed and meaningful way in his or her receipt of high-quality medical care, so a  reflective citizen doesn't have to have a degree in toxicology or biology to know whether his or her government is making sensible decisions about how to protect the public generally from exposure to environmental toxins.  

In both cases, such a person only has to be able to make an informed judgment that the professionals he or she is relying on to use scientific knowledge know what they are doing and are using what they know to benefit him or her and others whose interests those agents are supposed to be promoting.

Reflective citizens do that all the time.  And one of the aims of science communication is to create and protect the conditions in which democratic citizens can reliably exercise this rational recognition capacity.

Those conditions are missing for climate change and other issues that culturally polarize the public.  In connection with those issues, citizens' rational recognition faculty is being impaired by toxins -- not ones emitted from "composite wood products" but ones being transmitted, either deliberately or by misadventure, by partisan discourse.

One goal of the science of science communication, then, is to protect the quality of the science communication environment from contamination by antagonistic cultural meanings that convert boring, mundane issues of fact that admit of scientific inquiry into divisive symbols of tribal loyalty.

To acquire and use the knowledge necessary to do that, researchers must avoid fixating only on pathological cases like climate change and ignoring the "silent denominator" (or silent members of the denominator) comprising all the science-informed policy issues that don't generate cultural polarization.

We can't expect to be able to accurately prevent and, failing that, diagnose and treat science-communication pathologies unless we start with an informed and psychologically realistic picture of what citizens know, and how they come to know it, in a healthy body politic.

Hey--did you hear about the Chemical Safety Improvement Act that is garnering bipartisan support in the Senate?!  

I didn't think so.

Wednesday
May292013

The impact of "science consensus" surveys -- a graphic presentation

I am really really tired of this topic & am guessing everyone else is too. And for reasons stated in last couple of posts, I think a "market consensus" measure of belief in global warming would be a much more helpful way to measure and communicate the weight & practical importance of scientific evidence on climate change than any number of social science surveys of scientists or of scientific papers (I think we are up to 7 now).

But since I had occasion to construct this graphic to help a group of professional science communicators assess whether the failure to communicate scientific consensus can plausibly be viewed as the source of persistent cultural polarization over climate change in the US, I thought I'd post it.  I've included some "stills," but watch it in slide show mode if you want to get the nature of the empirical proof it embodies.

And here are the answers to the predictable questions:

1. Does that mean "scientific consensus" is irrelevant?

No.

People of all cultural outlooks support policies they believe are consistent with scientific consensus.

But they have to figure out what scientific consensus is, which means they have to assess any evidence that is presented to them on that.

In the current climate of polarization, members of opposing cultural groups predictably credit and discredit such evidence in patterns that reinforce their belief that the scientific consensus is in fact consistent with the position that predominates in their cultural group.

Until the antagonistic cultural meanings that motivate this selective crediting and discrediting of evidence are dispelled, just flooding the information market with more and more studies of "scientific consensus" won't do any good.

Indeed, it will only amplify the signal of cultural contestation that sustains polarization. 

Meanings first, then facts.

2. Does this mean we should ignore people who are misinforming the public?

No.

But it means that just "correcting" misinformation won't work unless you convey affirming meanings.  

Indeed, in a state of polarized meanings, rapid-response "truth squads" also amplify polarization because they reliably convey the meaning "this is what your side believes -- and we think you are stupid!"

Meanings first, then facts!

3. Does this mean we should just give up?

No.

The only thing anyone should give up is a style of communicating "facts" or anything else that amplifies the message that positions on climate are part of an "us-them" cultural struggle.   

The reason the US and many other liberal democracies are polarized on climate change is not that people are science illiterate or over-rely on heuristic-driven reasoning processes. It isn't that they haven't been told that human CO2 emissions increase global temperatures. It isn't that they are being exposed to biased news reports or misled by misinformation campaigns. And it certainly isn't that no one has advised them yet about the numerous studies finding "97% of scientists ..." agree that human activity is causing climate change.

The reason is that we inhabit a science communication environment polluted with toxic partisan meanings on climate change.

Conveying to people -- a large segment of the population in the US & in other countries too-- that accepting evidence on climate change means accepting that members of their cultural community are stupid or corrupt is itself a form of science-communication pollution.  

If you don't think that many ways of communicating "facts" (including the extent of scientific consensus on climate change) convey that meaning, then you just aren't paying attention.

If you think there's no way to communicate facts that avoids conveying this meaning, and in fact affirms the identity of culturally diverse people, you aren't thinking hard enough.

Tuesday
May282013

Now, getting back to disgust: we've done guns & drones; what about *vaccines*?

In a temporary triumph over entropy, I happened upon this really interesting paper -- actually, it's a book chapter -- by philosopher Mark Navin.

Navin uses an interpretive, conjectural style of analysis, mining the expression of anti-vaccine themes in popular discourse.  

I think he is likely overestimating the extent of public concern about vaccines. As Seth Mnookin has chronicled, there is definitely an "anti-vaccine" subculture, and it is definitely a menace--particularly when adherents of it end up concentrated in local communities. But they are a tiny, tiny minority of the population. Childhood vaccination rates have been 90-95% (depending on the vaccine), & exemption from vaccination under 1%, for many many years without any meaningful changes.

But I don't think this feature of the paper is particularly significant or casts doubt on Navin's extraction of the dominant moral/emotional themes that pervade anti-vaccine discourse.  Disgust--toward puncturing of the body with needles and the introduction of foreign agents into the blood; toward the aspiration to substitute fabricated and self-consciously managed processes for the ones that "nature" has created for governing human health (including nurturing and protection by mothers)--unmistakably animates the sentiments of the vaccine opponents, historical and contemporary, whom Navin surveys.

There are two cool links between Navin's account & the themes explored in my previous posts.  One is the degree to which the evaluative orientation in these disgust sensibilities cannot be reduced in a satisfactory way to a "conservative" ideology or "moral" outlook.

Navin cites some popular works that suggest that anti-vaccine sentiment is correlated with a "left wing" or "liberal" political view. I've never seen any good evidence of this & the idea that something as peculiar -- as boutiquey -- as being anti-vaccine correlates w/ any widespread cultural style strikes me as implausible. But it is clear enough from Navin's account that the distinctive melange of evaluative themes that inform "disgust" with vaccines are not the sorts of things we'd expect to come out of the mouth of a typical political conservative (or typical anything, really).

This feature of the analysis is in tension with the now-popular claim in moral psychology-- associated most conspicuously with Jonathan Haidt and to a lesser degree with Martha Nussbaum -- that "disgust" is a peculiarly or at least disproportionately "conservative" moral sentiment as opposed to a "liberal" one  (frankly, I think it is odd to classify people in these ways, given how manifestly non-ideological the average member of the public is!). That was a point I was stressing in my account of the role of disgust in aversion to guns (and maybe drones, too!).

The second interesting element of Navin's account is the relationship between disgust and perceptions of harm.  Navin notes that in fact those disgusted by vaccines inevitably do put primary emphasis on the argument that vaccines are inimical to human health.  They rely on "evidence" to make out their claim. But almost certainly what makes them see harm in vaccines -- what guides them selectively to credit and discredit evidence that vaccines poison humans and weaken rather than bolster immunity -- is their disgust with the cultural meaning of vaccines.

This point, too, I think is in tension with the contemporary moral psychology view that sees "liberals" as concerned with "harm" as opposed to "purity," "sanctity" etc.  

The alternative position -- the one I argued for in my previous posts -- is that the moral sensibilities of "liberals" are guided by disgust every bit as much as those of "conservatives," who are every bit as focused, consciously speaking, on "harm" as liberals are. Both see harm in what disgusts them -- and then seek regulation of such behavior or such activities as a form of harm prevention. What distinguishes "liberals" and "conservatives" is only what they find disgusting, a matter that reflects their adherence to opposing cultural norms.

Although the people Navin is describing aren't really either "liberals" or "conservatives" -- and in fact don't subscribe to cultural norms that are very widespread at all in contemporary American society -- his account supports the claim that disgust is in fact a universal moral sentiment, and one that universally informs perceptions of harm.

In this respect, he is aligned with William Miller and Mary Douglas, both of whom he draws on.

Cool paper -- or book chapter!  Indeed, I'm eager to find & read the rest of the manuscript.

Sunday
May262013

Money talks, & without the bias of cultural cognition: so why not listen?

Logic of prediction markets explained by professional science communicators

Great ongoing conversation following last post, on how market behavior furnishes alternatives to social science surveys of scientist opinion or scientific literature on the weight & practical importance of the science relating to climate change.  Urge others to join in, & those participating to continue.

Basically the point is this: 

1. A reflective person could understandably be uncertain how to assess the weight of scientific evidence on climate change and its practical impact (indeed, anyone who professes not to understand this proves only that he or she is not reflective).

2. Such a person can't reasonably be expected to see a social scientist's opinion survey of natural scientists or literature survey of peer-reviewed articles as settling the matter. In constructing the sample for such a survey, the social scientist has to make a judgment about which scientists or which scientific papers to include in the sample. Evaluating the adequacy of the sample-inclusion criteria used for that purpose will confront a reasonable person with issues as open to dispute as the ones that he or she would have had to resolve to assess the weight and practical significance of scientific evidence on climate change. Indeed, many of the issues will be exactly the same.

3. However, a reasonable person would see an index of securities (and like instruments) whose value depends on global warming actually occurring as helpful evidence in such circumstances. Market actors are economically, not ideologically, motivated. Moreover, cognitive biases are likely to cancel out, leaving only the signal associated with informed assessments, by multiple rational and self-interested actors, of the weight and practical importance of the best available evidence on climate. Indeed, such a person could treat movement in the value of such instruments in relation to the publication of scientific papers or the issuance of IPCC reports, etc., as a measure of the soundness of those scientific assessments.
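
To make point 3 concrete: imagine a binary contract that pays $1 if some specified warming threshold is crossed by a given date (a hypothetical instrument, of the kind whose real-world counterpart I ask about below). A risk-neutral reader can treat its price as an implied probability and use it to adjust her priors. A minimal sketch, with made-up numbers and a credence weight that is itself an assumption:

```python
def implied_probability(price: float, payout: float = 1.0) -> float:
    """Risk-neutral reading: the price of a contract paying `payout`
    if the event occurs, expressed as a probability of the event."""
    return price / payout

def update_toward_market(prior: float, market_prob: float,
                         weight: float = 0.5) -> float:
    """Linear opinion pool: shift one's prior toward the market signal.
    `weight` is how much credence one gives the market (an assumption)."""
    return (1 - weight) * prior + weight * market_prob

# Hypothetical contract trading at $0.80 on a $1 payout
market_prob = implied_probability(0.80)  # 0.8

print(update_toward_market(prior=0.30, market_prob=market_prob))  # skeptic:  0.30 -> 0.55
print(update_toward_market(prior=0.95, market_prob=market_prob))  # believer: 0.95 -> 0.875
```

Note that the skeptic and the true believer both move toward the market signal: each adjusts "in one way or another," which is the point of the next paragraph.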

Here's another thing:

If reasonable people see that other reasonable people, including ones whose priors are different from theirs, are also willing to treat an index of such securities as a relevant source of evidence that gives them reason to adjust their priors in one way or another (& who don't make the science-illiterate mistake of thinking that "evidence" "proves" things as opposed to supplying reason for treating a hypothesis as more or less likely to be true than one otherwise would have estimated), they'll be able to observe evidence of how many people are willing to proceed in this open-minded way.

That evidence not only allows them to adjust their priors about how many people are like that; it also supplies them, as emotional and moral reciprocators, w/ reason to contribute to the common good of being a person of exactly that sort, modeling for the rest of humanity how sensible people w/ different perceptions about a matter subject to empirical investigation should proceed.

Maybe this would catch on?

So let's listen to the money people and let them lead us into a love-filled, harmonious world.

BTW, if such an index already exists, I wouldn't be surprised. I'd be surprised if it didn't.  So anyone who knows where to find it, please speak up.  

The index, btw, has to consist in securities (and the like) that reflect economic opportunities created by global warming.

It cannot include economic opportunities created by government policies to promote carbon-reduction.  That market will reflect expectations about political forces, not natural ones (a matter that might be interesting but that isn't probative of beliefs in whether climate change will occur--only in what sorts of things will occur in democratic politics, which is governed by its own peculiar laws).

Please join the discussion -- in the comment thread for the "97% of insurance companies -- & hedge funds-- agree!" post.

Friday
May242013

More market consensus on climate change: 97% of insurance companies agree (& hedge funds too!)

This is by no means the only example of "market consensus" on climate change.

At the same time that members of the insurance industry are taking action to mitigate their losses (by promoting adaptation; the "mitigate"/"adaptation" distinction is one of the many infelicities of climate-change speak), other commercial actors are eagerly leaping at the chance to profit from new economic opportunities, including, ironically, exploitation of oil reserves that can be accessed more readily as polar ice caps melt.

Why isn't this activity exploited more aggressively for communication by those trying to promote public engagement with climate change? Those who doubt the scientific consensus--either because they think it is being calculated incorrectly by social scientists who use one or another method to measure it or because they think climate scientists are biased by ideology, group think, or research-funding blandishments--presumably ought to find the opinion of market actors, who are putting their money where their mouth is (actually, they don't talk much; they are too busy investing), more probative?

The answer, I conjecture, tells us something about the motivations--mainly unconscious, of the cultural cognition sort--of those on both sides of the debate.

Too many climate-change advocates have a hard time seeing/using evidence of this sort because it involves mining insight (as it were; new mining opportunities are also being created by melting permafrost) from the rationality of market behavior, not to mention recognizing that climate change does in fact involve a balance of positive and negative effects, even if on balance it is negative.

At the same time, too many climate skeptics are unwilling to acknowledge evidence of any sort--even the truth-corroborating price signal of self-interested market behavior!--that lends credence to the scientific underpinnings of those who are making the case for effective collective action to avoid the myriad welfare-threatening upshots of a warming earth. So this evidence doesn't register on them either.
Might this be it?

If so, I suppose we should look on the bright side: the two sides are agreeing on something, even if it is simply to ignore one and the same piece of evidence on account of it not fitting their respective worldviews.

Wednesday
May222013

On the science communication value of communicating "scientific consensus": an exchange

So either (1) I am a genius in communication after all (P = 0.03), having provoked John Cook and Scott Johnson to offer thoughtful reflections by strategically feigning a haughty outburst (I acknowledge that I expressed my frustration in a manner that I am not proud of). Or (2) Cook & Johnson are sufficiently motivated by virtuous commitment to intellectual exchange to create one notwithstanding my bad manners (P = 0.97).  

I don’t propose we conduct any sort of experiment to test these competing hypotheses but instead just avail ourselves of our good fortune.

To enable them to have an expression of my position that admits of and is worthy of reasoned response, I’ve reduced the source of my exasperation/frustration with the Cook et al. study to 4 points.  John and Scott’s replies (reflecting their points of view as a scholar of science communication and a science journalist, respectively) follow.

What should follow that, I hope, are additional reflections and insights from others in the “comments” thread.

Kahan:

1. Scholarly knowledge. The Cook et al. study, which in my view is an elegantly designed and executed empirical assessment, doesn’t meaningfully enlarge knowledge of the state of scientific opinion on climate change. The authors find that 97% of the papers published in peer-reviewed journals between 1991 and 2011 “endorsed” the “scientific consensus” view that human activity is a source of global warming. They report further that a comparable percentage of scientists who authored such papers took that position....

continue reading

Cook:

Many thanks to Dan Kahan for the opportunity to discuss this important (and fascinating) issue of communicating the scientific consensus. I fully concur with Dan’s assertion that we need to be evidence-based in how we approach science communication. Indeed, my PhD research is focused on the very issue of attitude polarization and the psychology of consensus. The Cultural Cognition project, particularly the paper Cultural Cognition of Scientific Consensus, has influenced my experiment design. I’m in the process of analysing data that I hope will guide us towards effective climate communication....

continue reading

Johnson:

Let me preface this by laying out my biases. I’m thinking about more than just this study/story, though I did cover it. (So there’s that.) I like to cover new studies, and I’d rather not hear that the hard work I put in to that end is pointless, so I’m reacting to Dan’s opinion as it relates to media coverage of studies like this. As an educator with a science background, I also have deficit model motivations—even as I understand that buckets aren’t lining up to be filled and that many are equipped with strainers and sometimes check valves. I am still, in essence, a pourer of what I judge to be useful knowledge. If I didn’t think that was the case, I’m not sure why I’d be trying to communicate (unless it somehow made for lucrative reality television, I guess)....

Continue reading

Tuesday
May212013

Cultural resistance to the science of science communication

I’m in Norway. Just stepped off the plane in fact.

Am going to be giving an address at a conference sponsored by the Center for International Climate and Environmental Research in Oslo. The conference is for professional science communicators (mainly ones associated with universities), and the topic is how to promote effective public dissemination of and engagement with the IPCC's 5th Assessment Report, which will be released officially in October.

Obviously, I will stress that it all comes down to making sure the public gets the message that  the IPCC report reflects “scientific consensus.”

Actually, I will try to communicate something that is very hard to make clear.

When I have the opportunity (and privilege) to address climate scientists and professional science communicators, I often feel that I’m deflating them a bit by advising them that I don’t believe that what scientists say—independently of what they do—is of particular consequence in the formation of public opinion. The average American can’t name a Supreme Court Justice. Say “James Hansen” and he or she is more likely to select “creator of the Muppets” than “climate scientist” on a multiple choice quiz.  Anyone who thinks things could or should be otherwise, moreover, doesn’t have a clue what it is like to be a normal, average, busy person.

There are some genuinely inspired citizen scientist communicators in our society. But to expect them to bear the burden of fixing the science communication problem betrays a naïve—and pernicious—model of how science is communicated.

What’s known to science becomes known to ordinary people—ones to whom what science knows can in fact be quite vital—through a dense network of cultural intermediaries. Moreover, in pluralistic liberal democracies (which are in fact the only types of society in which science can flourish), there will necessarily be a plurality of such networks operating to inform a diverse array of groups whose members share distinctive cultural commitments.

These networks by and large all do a great job. Any that didn’t—any that consistently misled its members about what’s known to science—wouldn’t last long, given the indispensable contribution scientific knowledge makes to human welfare.

The spectacle of cultural conflict over what’s known to science is a pathology—both in the sense of being inimical to human well-being and in the sense of being rare. The number of health- and policy-relevant scientific insights on which there is conflict akin to that over climate science is minuscule relative to the vast number on which there isn’t.

Something has to happen—something unusual—to invest a particular belief about some otherwise mundane issue of fact with cultural meanings that express one’s membership in and loyalty to a particular group.

But once that happens, the value that an ordinary member of the public gets from persisting in a belief that signifies his or her group commitments will likely far outweigh any personal cost from being mistaken. Clearly this is so for climate change: nothing an ordinary person believes about the science of climate change will have any impact on the climate—or any impact on policies to offset any adverse impact human activity might be having on it—because he or she just doesn’t matter enough (as consumer, as voter, as “public deliberator”) to have any impact; but if he or she takes the “wrong” position relative to the one that signifies loyalty to his or her cultural group, the amount of suffering that person has to endure can be immense.

The pathology of cultural conflict over a societal risk like climate change can’t be effectively treated, then, by radiating the patient with a bombardment of “facts.”

It can be treated only with the creation of pluralistic meanings. What needs to be communicated is that the facts on climate change, whatever they might be, are perfectly consistent with the cultural commitments of all the diverse groups that inhabit a pluralistic liberal democracy.  No one has to choose between believing them (or believing anything whatsoever about them) and being who one is as a person with a particular cultural identity.

As I said, communicating this point about science communication is difficult.  Not so much because the ideas or the concepts—or the evidence that shows they are more than a just-so story—are all that hard to explain.

The problem has to do with a kind of cultural resistance to the message that communicating science is about protecting the conditions in which the natural, spontaneous social certification of truth can be expected to happen.

The culture that resists this message, moreover, is not that of “hierarchical individualists” or “egalitarian communitarians.”

It’s the culture of the Liberal Republic of Science, of which we are all citizens.

Nullius in verba.  It’s so absurd! Yet so compelling. So much who we are.


Sunday
May192013

What is to be done? 

A thoughtful commentator sent me this email:

I was reading through the sublinks [in Andy Revkin's "The Other Science Gap" column] with interest tonight, but also growing frustration-- as in I can understand and agree with you and others focusing on the role partisanship and social cognitive barriers play, but I am a guy who lives in the trenches and wants to know--are there any solutions? I urged my climate law students this month to be advocates and not give up despite all the pessimistic news, and I keep speaking out at conferences and in articles on steps to get more clean energy more quickly--but it often seems like way too little and increasingly too late. What do you say to students and other young people about how to work to change climate change's momentum and trends?

By way of background (just a tiny bit), the occasion for the query is the "here we go again" exasperated response to the new study that corroborates years and years of previous studies finding that there is a scientific consensus -- consistently calculated by a variety of methods as 97% of scientists, peer-reviewed articles, etc.--that human activity is the cause of climate change.

The exasperation, of course,  is not over the content of the study; it is over the fallacious inference that communicating the "97% of scientists believe ..." message is an effective way to dispel public controversy over climate change.  

If it were, then the controversy would have been solved by now.  "Scientific consensus" has been the dominant theme of climate communication for the better part of a decade.  And cultural polarization over that time has not abated--it has only intensified.

Empirical studies aimed at trying to make sense of this phenomenon have concluded that the reason the public remains divided on “scientific consensus” isn’t that they haven’t been exposed to evidence on the matter but rather that when they are exposed to evidence of what experts believe, they selectively credit or discredit it in patterns that reflect and reinforce their perception that scientific consensus is consistent with the position that predominates in their cultural or ideological group.

The exuberance with which the latest "97%" study has been greeted by many of those who want to promote constructive engagement with climate science reflects a distressing resistance to take in the more general "scientific consensus" that exists among science of science communication researchers that neither a deficit in knowledge of facts -- ones relating to the science of climate as well as ones relating to the extent of scientific consensus -- nor a deficit in the ability to make sense of scientific information is the source of continuing conflict over climate change.  Indeed, members of the public who are the most science literate and numerate are the most polarized.

But for those who are willing to open their eyes and unblock their ears to the real-world and social-scientific evidence that a public knowledge/rationality deficit is not the problem, the question is then put, as it is by the commentator: so what is to be done?

The answer is all kinds of things. Or in any case, the same research that supports the conclusion that "fact bombardment" doesn't work is filled with findings of alternatives that work better in promoting constructive open-minded engagement with scientific information. By adroitly combining valid information with culturally affirming meanings, these communications succeed in getting people to reflectively assess evidence that they might otherwise dismiss out of hand (btw, if your goal is not simply to get people to open-mindedly consider evidence using their own powers of reason -- if you just want to make them believe something, who cares how-- you are not a science communicator; you are a propagandist).

That some think that continuing to hammer skeptics over the head with "scientific consensus" -- a style of advocacy that is more likely to intensify opposition, research shows, than ameliorate it -- because there is no alternative is part and parcel of the same puzzling evidence-resistance that explains the continuing allure of the "knowledge/rationality deficit" theory of science communication.

Actually, there are plenty of science communicators who are aware of this research and who make skillful use of it.  Katharine Hayhoe, Geoffrey Haines-Stiles, and George Marshall are among them. So one piece of advice: check out what they are doing and try to figure out how to adapt and extend it.

But here's another piece of advice: use scientific methods to test and refine communication strategies.

It's ironic that it's necessary to say this.  But it is.  It really really really is.

Not only do too many science communicators ignore evidence about what does and doesn't work.  Way way too many also shoot from the hip in a completely fact-free, imagination-run-wild way in formulating communication strategies.

If they don't rely entirely on their own personal experience mixed with introspection, they simply reach into the grab bag of decision science mechanisms (it's vast), picking and choosing, mixing and matching, and in the end presenting what is really just an elaborate just-so story on what the "problem" is and how to "solve" it.  

That's not science. It's pseudo-science.

As with most complicated matters in human affairs, there are more plausible conjectures about what the problem is than can possibly be true. Winnowing them down requires disciplined methods of observation and inference to test rival hypotheses (such as the "knowledge deficit" theory vs. "motivated reasoning," of which "cultural cognition" is a form).

But once one has used evidence-based methods to identify mechanisms that plausibly can be understood to be generating the problem, there will still be more plausible conjectures than can be true about what sort of communication strategies can be used to neutralize or turn those mechanisms around in a way that promotes constructive engagement.

The only way to extricate the latter from the vast sea of the former is through more evidence-based methods, ones aimed at reproducing in the field effects observed in the lab. Unless we use science to identify how to communicate science, we will drown in an ocean of just-so story-telling.
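To make the hypothesis-testing point concrete, here is a minimal sketch -- on simulated data, with invented coefficients and variable names, not any actual survey dataset or any particular researcher's specification -- of one simple way to pit the rival accounts against each other. The "knowledge deficit" theory predicts that views converge as science literacy rises; "motivated reasoning" predicts that they diverge, a pattern that shows up as a positive literacy-by-group interaction:

```python
# Illustrative only: simulated data, made-up effect sizes.
# Deficit theory: risk perceptions converge as literacy rises.
# Motivated reasoning: the gap between cultural groups *widens* with
# literacy, i.e., a nonzero literacy:group interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
literacy = rng.normal(0, 1, n)        # standardized science-literacy score
group = rng.integers(0, 2, n)         # 0/1 cultural-group indicator

# Simulate a motivated-reasoning world: literacy amplifies the group gap.
risk = 0.2 * literacy + 1.0 * group + 0.8 * literacy * group \
       + rng.normal(0, 1, n)

df = pd.DataFrame({"risk": risk, "literacy": literacy, "group": group})
model = smf.ols("risk ~ literacy * group", data=df).fit()

# A significant positive literacy:group term means polarization grows
# with literacy -- evidence against the deficit account.
print(model.summary().tables[1])
```

The specification is generic, but the logic is the disciplined-observation point made above: both hypotheses make predictions, and the data get to referee.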

Those who are willing to consider real evidence on what works and what doesn't will find many answers to the "what is to be done?" question in the science of science communication.

But it is important for them to recognize that the most important thing that that science has to tell them is not what to do (indeed, be wary of cartoonish "how to" communication "manuals").

It's how to do it: by the formulation, testing, analysis, and revision of evidence-informed hypotheses.

Or simply put, by being scientific about communicating science.


Saturday
May 18, 2013

More conversation -- & an announcement of my commitment to the same

There are a lot of interesting conversations going on in the comments section following my post on the new study on the extent of scientific consensus on climate change.

Indeed, it's all much more interesting than anything I "said" in the post, which I think was deficient (particularly in the material before the "update" field) in the quantity of reasoned reflection, and the quality of constructive engagement, that usually are necessary to get a worthwhile exchange of views going. So thanks to the commentators for supplying those materials.

I will have more to say, in the comments & in a follow-up post. But there's one point that I do want to make now & to "elevate," in effect.

It's that I regard the authors of the scientific consensus study as serious scholars whose work is motivated by a very appropriate synthesis of scholarly and public aims. I think it's likely they and I disagree about certain issues relating to science communication. But if so, those are the sorts of disagreements that people with a shared commitment to understanding complicated matters are bound to have; indeed, they are the sorts of disagreements that are the occasion for reasoned exchange among those who recognize that their common interest in gaining knowledge is best advanced by the dialectic of conjecture and refutation that is the signature of scientific inquiry.

No one should regard the manner in which I expressed myself as implying that I regard the authors as people whom I see as unworthy of being engaged in exactly that way.  If anyone did get that impression from how I expressed myself, then what he or she should infer is that I am not always as discriminating as I ought to be in judging the counsel of my passions.

Friday
May 17, 2013

Annual "new study" finds 97% of climate scientists believe in man-made climate change; public consensus sure to follow once news gets out

Hey! Did you hear? A new study shows that 97% of scientists believe that human activity is responsible for climate change!

We all need to be sure this new information gets reported far and wide -- not only because it is genuinely newsworthy, a true addition to what's known about the state of scientific opinion -- but also because public unawareness of this degree of consensus surely explains cultural polarization over climate change.

The ugly, demeaning, public-welfare-enervating debate will be over soon!

Why didn't anyone think of telling the public about this before now?!


Wednesday
May 15, 2013

Motivated reasoning & its cognates

The following is an excerpt from Kahan, D.M., Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law, Harv. L. Rev. 126, 1-77 (2011). I thought it might be useful to reproduce it here, both for its own sake and for reference (via hyperlink) in future blog entries, since many of the concepts it describes are recurring ones in my posts. This entry contains a modest number of hyperlinks; the printed version (accessible via SSRN) is amply footnoted!

1.  Generally. Motivated reasoning refers to the unconscious tendency of individuals to process information in a manner that suits some end or goal extrinsic to the formation of accurate beliefs.  They Saw a Game, a classic psychology article from the 1950s, illustrates the dynamic.  Experimental subjects, students from two Ivy League colleges, were instructed to watch a film that featured a set of controversial officiating calls made during a football game between teams from their respective schools.  What best predicted the students’ agreement or disagreement with a disputed call, the researchers found, was whether it favored or disfavored their schools’ team.  The researchers attributed this result to motivated reasoning: the students’ emotional stake in affirming their commitments to their respective institutions shaped what they saw on the tape.

The end or goal motivates cognition in the sense that it directs mental operations — in this case, sensory perceptions; in others, assessments of the weight and credibility of empirical evidence, or performance of mathematical or logical computation — that we expect to function independently of that goal or end.  Indeed, the normal connotation of “motive” as a conscious goal or reason for acting is actually out of place here.  The students wanted to experience solidarity with their institutions, but they didn’t treat that as a conscious reason for seeing what they saw.  They had no idea (or so we are to believe; one needs a good experimental design to be sure this is so) that their perceptions were being bent in this way.

Although the students in this study probably would not have been distressed to learn that their perceptions had been covertly recruited by their desire to experience solidarity, there can be other contexts in which motivated cognition subverts an actor's conscious ends.  This might be so, for example, when a person who genuinely desires to make a fair or accurate judgment is unwittingly impelled to make a determination that favors some personal interest, pecuniary or social.

2.  Identity-Protective Cognition. The goals or needs that can motivate cognition are diverse.  They include fairly straightforward things, like a person’s financial or related interests.  But they reach more intangible stakes, too, such as one’s need to sustain a positive self-image or the desire to promote states of affairs or other goods that reflect one’s moral values.

Affirming one’s membership in an important reference group — the unconscious influence that operated on the students in the They Saw A Game experiment — can encompass all of these ends simultaneously.  Individuals depend on select others — from families to university faculties, from religious denominations to political parties — for all manner of material and emotional support.  Propositions that impugn the character or competence of such groups, or that contradict the groups’ shared commitments, can thus jeopardize their individual members’ well-being.  An individual’s own assent to such a proposition can sever his or her bonds with the group.  The prospect that people outside the group might credit this proposition can also harm an individual by reducing the social standing or the self-esteem that person enjoys by virtue of his or her group’s reputation.  Individuals thus face psychic pressure to resist propositions of that sort, generating a species of motivated reasoning known as identity-protective cognition.

Identity-protective cognition, like other forms of motivated reasoning, operates through a variety of discrete psychological mechanisms.  Individuals are more likely to seek out information that supports than information that challenges positions associated with their group identity (biased search).  They are also likely selectively to credit or dismiss a form of evidence or argument based on its congeniality to their identity (biased assimilation).  They will tend to impute greater knowledge and trustworthiness and hence assign more credibility to individuals from within their group than from without.

These processes might take the form of rapid, heuristic-driven, even visceral judgments or perceptions, but they can influence more deliberate and reflective forms of judgment as well.  Indeed, far from being immune from identity-protective cognition, individuals who display a greater disposition to use reflective and deliberative (so-called “System 2”) forms of reasoning rather than intuitive, affective ones (“System 1”) can be expected to be even more adept at using technical information and complex analysis to bolster group-congenial beliefs.
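(A toy illustration of biased assimilation, interpolated here for blog readers rather than drawn from the article: in the sketch below, two simulated agents with mildly opposed priors see the identical, perfectly mixed stream of studies, but each discounts uncongenial evidence by an invented factor. The predictable result is divergence from shared evidence.)

```python
# Toy model of biased assimilation; parameters are invented, not estimated.
import numpy as np

def update(belief, evidence, discount=0.3):
    """Nudge belief toward a study's conclusion (+1 or -1), giving
    uncongenial studies only a `discount` fraction of full weight."""
    congenial = np.sign(evidence) == np.sign(belief - 0.5)
    weight = 0.02 * (1.0 if congenial else discount)
    return float(np.clip(belief + weight * evidence, 0.0, 1.0))

rng = np.random.default_rng(1)
studies = rng.permutation([-1.0] * 25 + [1.0] * 25)  # perfectly mixed
a, b = 0.55, 0.45                                    # mildly opposed priors

for s in studies:
    a, b = update(a, s), update(b, s)

print(f"After identical evidence: agent A = {a:.2f}, agent B = {b:.2f}")
```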

3.  Naïve Realism. Identity-protective cognition predictably impedes deliberations, negotiations, and like forms of collective decisionmaking.  When collective decisionmaking turns on facts or other propositions that are understood to bear special significance for the interests, standing, or commitments of opposing groups (for example, those who identify with the respective sides in the Israel-Palestine conflict), identity-protective cognition will predictably exaggerate differences in their understandings of the evidence.  But even more importantly, as a result of a dynamic known as “naïve realism,” each side’s susceptibility to motivated reasoning will interact with and reinforce the other’s.

Naïve realism refers to an asymmetry in the ability of individuals to perceive the impact of identity-protective cognition.  Individuals tend to attribute the beliefs of those who disagree with them to the biasing impact of their opponents’ values.  Often they are right.  In this respect, then, people are psychological “realists.”  Nevertheless, in such situations individuals usually understand their own factual beliefs to reflect nothing more than “objective fact,” plain for anyone to see.  In this regard, they are psychologically naïve about the contribution that group commitments make to their own perceptions.

Naïve realism makes exchanges between groups experiencing identity-protective cognition even more divisive.  The (accurate) perception that a rival group’s members are reacting in a closed-minded fashion naturally spurs a group’s members to express resentment — the seeming baselessness of which provokes members of the former to experience and express the same.  The intensity, and the evident polarization, of the disagreement magnifies the stake that individuals feel in defending their respective groups’ positions.  Indeed, at that point, the debate is likely to take on meaning as a contest over the integrity and intelligence of those groups, fueling the participants’ incentives, conscious and unconscious, to deny the merits of any evidence that undercuts their respective views.

4.  “Objectivity.” As naïve realism presupposes, motivated reasoning is an instance of what we commonly recognize as rationalization.  We exhort others, and even ourselves, to overcome such lapses — to adopt an appropriate stance of detachment — in settings in which we believe impartial judgment is important, including deliberations or negotiations in which vulnerability to self-serving appraisals can interfere with reaching consensus.  What most people don’t know, however, is that such admonitions can actually have a perverse effect because of their interaction with identity-protective cognition.

This is the conclusion of studies that examine whether motivated reasoning can be counteracted by urging individuals to be “objective,” “unbiased,” “rational,” “open-minded,” and the like.  Such studies find that individuals who’ve been issued this type of directive exhibit greater resistance to information that challenges a belief predominant within their defining groups.  The reason is that objectivity injunctions accentuate identity threat.  Individuals naturally assume that beliefs they share with others in their defining group are “objective.”  Accordingly, those are the beliefs they are most likely to see as correct when prompted to be “rational” and “open-minded.”  Indeed, for them to change their minds in such a circumstance would require them to discern irrationality or bias within their group, an inference fraught with dissonance.

For the same reason, emphasizing the importance of engaging the issues “objectively” can magnify naïve realism.  As they grow even more adamant about the correctness of their own group’s perspective, individuals directed to carefully attend to their own impartiality become increasingly convinced that only unreasoning, blind partisanship can explain the intransigence of the opposing group.  This view triggers the reciprocal and self-reinforcing forms of recrimination and retrenchment that are the signature of naïve realism.

5.  Cultural Cognition. Disputes set in motion by identity-protective cognition and fueled by naïve realism occupy a prominent place in our political life.  Such conflicts are the focus of the study of cultural cognition.

Cultural cognition refers to the tendency of individuals to conform their perceptions of risk and other policy-consequential facts to their cultural worldviews.  Cultural worldviews consist of systematic clusters of values relating to how society should be organized.  Arrayed along two cross-cutting dimensions — hierarchy/egalitarianism and individualism/communitarianism — these values supply the bonds of affinity groups, membership in which motivates identity-protective cognition.  People who subscribe to a relatively hierarchical and individualistic worldview, for example, tend to be dismissive of environmental risk claims, acceptance of which would justify restrictions on commerce and industry, activities they value on material and symbolic grounds.  Individuals who hold egalitarian and communitarian values, in contrast, are morally suspicious of commerce and industry, which they see as sources of social disparity and objects of noxious self-seeking.  They therefore find it congenial to believe that commerce and industry pose harms worthy of constraining regulations.  Experimental work has documented the contribution of cultural-cognition worldviews to various discrete mechanisms of motivated cognition, including biased search and assimilation, perceptions of expertise and credibility, and brute sense impressions.

Methods of cultural cognition have also been used to measure controversy over legally consequential facts.  Thus, mock jury studies have linked identity-protective cognition, motivated by the cultural worldviews, to conflicting perceptions of the risk posed by a motorist fleeing the police in a high-speed chase; of the consent of a date rape victim who said “no” but did not physically resist her assailant; of the volition of battered women who kill in self-defense; and of the use of intimidation by political protestors.  To date, however, no studies have directly tested the impact of cultural cognition on judges.
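(One more interpolation, not from the article: operationally, the worldview measures described above are just scales built from survey items arrayed along the two dimensions. Here is a hedged sketch of how such cross-cutting scales might be scored -- with hypothetical item names and toy responses, not the actual cultural cognition battery.)

```python
# Hypothetical items and data; reverse-coding uses the standard
# Likert-scale trick (for a 1-6 item, reversed score = 7 - response).
import pandas as pd

def scale_score(df, items, reverse=()):
    """Average 1-6 Likert items into one scale, reverse-coding as needed."""
    scored = df[items].copy()
    for item in reverse:
        scored[item] = 7 - scored[item]
    return scored.mean(axis=1)

survey = pd.DataFrame({
    "hier1": [5, 2], "hier2": [6, 1],      # hierarchy-egalitarianism items
    "ind1": [4, 2], "ind2_r": [3, 5],      # individualism-communitarianism
})
survey["hier_egal"] = scale_score(survey, ["hier1", "hier2"])
survey["indiv_comm"] = scale_score(survey, ["ind1", "ind2_r"],
                                   reverse=["ind2_r"])
print(survey[["hier_egal", "indiv_comm"]])  # each respondent's position
```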

6.  Cognitive Illiberalism. Finally, cognitive illiberalism refers to the distinctive threat that cultural cognition poses to ideals of cultural pluralism and individual self-determination.  Americans are indeed fighting a “culture war,” but one over facts, not values.

The United States has a genuinely liberal civic and political culture — born not of reflective commitment to cosmopolitan ideals but of bourgeois docility.  Media spectacles notwithstanding, citizens generally don’t have an appetite to impose their worldviews on one another; they have an appetite for SUVs, big houses, and vacations to Disneyland (or Las Vegas).  Manifested in the absence of the sectarian violence that has filled human history and still rages outside the democratic capitalist world, there is effective consensus that the state should refrain from imposing a moral orthodoxy and confine policymaking to attainment of secular goods — safety, health, security, and prosperity — of value to all citizens regardless of their cultural persuasion.

As much as they agree about the ends of law, however, citizens are conspicuously — even spectacularly — factionalized over the means of attaining them.  Is the climate heating up as a result of human activity, and if so will it pose any dangers to us?  Will permitting citizens to carry concealed handguns in public increase violent crime — or reduce it?  Would a program of mandatory vaccination of schoolgirls against HPV promote their health by protecting them from cervical cancer — or undermine it by lulling them into unprotected sex, increasing their risk of contracting HIV?  Answers to questions like these tend to sharply polarize people of opposing cultural outlooks.

Divisions along these lines are not due to chance, of course; they are a consequence of identity-protective cognition.  The varying emotional resonance of risk claims across distinct cultural communities predisposes their members to find some of these claims more plausible than others, a process reinforced by the tendency of individuals to seek out and credit information from those who share their values.

Far from counteracting this effect, deliberation among diverse groups is likely to accentuate polarization.  By revealing the correlation between one or another position and one or another cultural style, public debate intensifies identity-protective pressure on individuals to conform to the views dominant within their group.

Liberal discourse norms constrain open appeals to sectarian values in debates over the content of law and policy.  But our political culture lacks any similar set of conventions for constraining the tendency of policy debates to build into rivalries among the members of groups whose members subscribe to competing visions of the best life.  On the contrary, one of the central discourse norms employed to steer law and policymaking away from illiberal conflicts of value plays a vital role in converting secular policy debates into forms of symbolic status competition.

The injunction of liberal public reason makes empirical, welfarist arguments the preferred currency of argumentative exchange.  The expectation that participants in public deliberations will use empirical arguments tends to confine their advocacy to secular ends; it also furnishes observable proof to the advocate and her audience that her position is not founded on an ambition to use the law to impose her own partisan view of the good.

Psychologically, however, the injunction to present culturally neutral empirical grounds for one’s position has the same effect as an “objectivity” admonition.  The prospect that one’s empirical arguments will be shown to be false creates the identity-threatening risk for her that she or others will come to form the belief that her group is deluded and, in fact, committed to propositions inimical to the public welfare.  In addition, the certitude that empirical arguments convey — “it’s simply a fact that . . . ”; “how can they deny the scientific evidence on . . . ?” — arouses suspicions of bad faith or blind partisanship on the part of the groups advancing them.  Yet when members of opposing groups attempt to rebut such arguments, they are likely to respond with the same certitude, and with the same lack of awareness that they are being impelled to credit empirical arguments to protect their identities.  This form of exchange — the signature of naïve realism — predictably generates cycles of recrimination and resentment.

When policy debates take this turn, both sides know that the answers to the questions they are debating convey cultural meanings.  The positions that individuals take on whether the death penalty deters, whether deep geologic isolation of nuclear wastes is safe, whether immigration reform will boost the economy or put people out of work, and the like express their defining commitments and not just their beliefs about how the world works.  Whose answer the state credits — by adopting one or another policy — elevates one cultural group and degrades the other.  Very few citizens are moral zealots.  But to protect the status of their group and their own standing within it, moderate citizens are conscripted, against their conscious will, into a divisive struggle to control the expressive capital of law.

Monday
May132013

Bolsen, Druckman & Cook working paper addresses critical issue in Science of #Scicom: What triggers public conflict over policy-relevant science?

Here's something people interested in the science of science communication should check out:

Bolsen, T., Druckman, J. & Cook, F.L. The Effects of the Politicization of Science on Public Support for Emergent Technologies. Institute for Policy Research Northwestern University Working Paper Series, WP-13-11 (May 1, 2013). 

The paper presents an interesting study of how exposure to information on the existence of political conflict affects public attitudes toward policy-relevant science, including how such exposure interacts with information on "scientific consensus."
 
I think this is exactly the sort of research that's needed to address the "science communication problem." That's the term I use to refer to the failure of valid and widely accessible science to quiet public controversy over policy-relevant facts (including risks) to which that evidence directly speaks.
 
Most of the research in this area examines how to dispel such conflict.  Likely this is a consequence of the salience of the climate change controversy and the impact it has had in focusing attention on the "science communication problem" and the need to integrate science-informed policymaking with the science of science communication. 


But as I've emphasized before, the focus on resolving such conflict risks diverting attention from what I'd say is the even more important question of how the "science communication problem" takes root. 

The number of issues that display the science communication problem's signature form of cultural (or political) polarization is very small relative to the number of issues that could. Something explains which issues end up afflicted with this pernicious pathology and which don't. 

If we can figure out what triggers the problem, then we can examine how to avoid it. That's a smart thing to do, because it might well be easier to avoid cultural polarization than to vanquish it once it sets in. 

For an illustration, consider the HPV vaccine.  As I've explained previously, the conditions that triggered the science communication problem there could easily have been anticipated and avoided. The disaster that occurred in the introduction of the vaccine stunningly illustrates the cost of failing systematically to acquire and use the insight that the science of science communication can afford. 

The BDC paper is thus really heartening, because it focuses exactly on the "anticipation/avoidance" objective. It's the sort of research that we need to devise an effective science communication environment protection policy.

I'll say more about the substance of the study on another occasion, likely in connection with a recap of my Science of Science Communication course's sessions on emerging technology (which featured another excellent Druckman/Bolsen study). 

But if others want to say what they think of the study -- have at it!

Thursday
May 9, 2013

Is disgust "conservative"? Not in a Liberal society (or likely anywhere else)

This is a popular theme.

It is associated most prominently with the very interesting work of Jonathan Haidt, who concludes that "disgust" is characteristic of a "conservative" psychological outlook that morally evaluates behavior as intrinsically appropriate or inappropriate as opposed to a liberal one that focuses on "harm" to others.

Martha Nussbaum offers a similar, and similarly interesting, account, portraying "disgust" as a sensibility that ranks people (or ways of living associated with them) in a manner that is intrinsically hierarchical.  Disgust has no role to play in the moral life of a modern democratic citizen, she concludes. 

But I can't help thinking that things are slightly more complicated -- and as a result, possibly much more interesting! -- than this.

Of course, I'm thinking about this issue because I'm at least momentarily obsessed with the role that disgust is playing in public reactions to the death of a 2-year-old girl in Kentucky, who was shot by her 5-year-old brother who was "playing" with his "Crickett," a miniaturized but authentic and fully operational .22 caliber rifle marketed under the slogan "my first gun!"

The Crickett disgusts people. Or so they say-- over & over. And I believe them. I believe not only that they are experiencing a "negative affective reaction" but that what they are feeling is disgust.  Because I am experiencing that feeling, too, and the sensibility really does bear the signature elements of disgust.

I am sickened by the images featured in the manufacturer's advertising: the beaming, gap-toothed boy discovering a Crickett when he tears open a gift-wrapped box (likely it is his birthday; "the first gun" ritual is the "bar mitzvah of the rural Southern WASP," although he is at least 3 yrs south of 13); the determined elementary school girl taking aim with the model that has the pink faux-wood stock; the envious neighbor boy ("I wish I had one!"), whose reaction is geared to fill parents with shame for putting their son at risk of being treated as an outcast (yes, their son; go ahead & buy your tomboy the pink-stock Crickett, but if she prefers, say, to make drawings or to read about history, surely she won't be mocked and derided).

These images frighten me. They make me mad.  And they also truly—literally—turn my stomach.

I want to bury the Crickett, to burn it, destroy it. I want it out of my sight, out of anyone's, because I know that it--and what it represents--can contaminate the character, corrupt it.

I'm no "conservative" and neither is anyone else whom I observe (they are all over the place) expressing disgust toward the Crickett.

But of course, this doesn’t mean "liberals" (am I one? I suppose, though what passes for “liberal” in contemporary political discourse & a lot of scholarly discourse too is so philosophically thin and so historically disconnected that it demeans a real Liberal to see the inspired moral outlook he or she has inherited made to bear the same label. More on that presently) have forgotten the harm principle.

The harm guns cause to others -- just look at the dead 2-year-old girl in Kentucky, for crying out loud! -- and not the "disgust" they feel toward them, is the reason they want to ban -- or at least restrict -- them!

Yes, and it's why they have historically advocated strict regulation (outright banning, if possible) of swimming pools, which are orders of magnitude more lethal for children . . . .

And why President Obama is trying so hard to get legislation passed that would get America out of the "war on drugs," the collateral damage of which includes many, many times more kids gunned down in public than died in Newtown. . . .

Look:  “liberals” want to enact background checks, ban assault rifles, prohibit carrying concealed handguns because they truly, honestly believe that these measures will reduce harm.

But they truly, honestly believe these things--despite the abundant evidence that such measures will have no meaningful impact on homicide, and are certain to do less than many many other things they ignore -- because they are disgusted by guns. 

We impute harm to what disgusts us; and we are disgusted by behavior that violates the moral norms that we hold in common with others and that define our understanding of the best way to live.

The "we" here, moreover, is not confined to "liberals."  

"Conservatives" are in the same motivated-reasoning boat. They are "disgusted" by all kinds of things--drugs, homosexuality, rap music (maybe even drones!).  But they say we should "ban"/"control" etc. such things because of the harms they cause.  

It's not characteristic of ordinary people who call themselves "conservatives"  that they see violation of "sacred" norms as a ground for punishing people independently of harm. Rather it's characteristic of them to see harm in what disgusts them. Just as "liberals" do! 

The difference between "liberals" and "conservatives" is in what they find disgusting, and hence what they see as harmful and thus worthy of legal restriction.

Or at least that is what many thoughtful scholars -- Mary Douglas, William Miller, and Roger Giner-Sorolla, among others -- have concluded.

Our study of cultural cognition is, of course, inspired by this basic account. And although we haven't (so far) attempted to include observation and measurement of disgust or other identifiable moral sensibilities in our studies, I think our results are more in keeping with this position than with any that sees "conservatism" as uniquely bound up with "disgust" -- or with any that tries to explain the differing risk perceptions of ordinary people with reference to moral styles that consciously place varying degrees of importance on "harm."

I wouldn't say, of course, that the Haidt-Nussbaum position (let's call it) has been "disproven" etc.  This work is formidable, to say the least! Whether there are differences in the cognitive and emotional processes of "liberals" and "conservatives" (as opposed to differences in the norms that orient those processes) is an important, difficult question that merits continued thoughtful investigation.

Still, it is interesting to reflect on why accounts that treat "liberals" as concerned with "harm" and "conservatives," alone, as concerned with or motivated by "disgust" are as popular as they are -- not among psychologists or others who are able, and who have made the effort, to understand the nature of the evidence here, but among popular consumers of such work, who accept its "take away" uncritically, without reflection on the strength of the evidence or the cogency of the inferences to be drawn from it (this is sad; it is a reflection of a deficit in ordinary science intelligence).

Here's a conjecture: because we are all Liberals.  

I’m not using the term “Liberal” in this sense to refer to points to the left of center on the 1-dimensional right-left spectrum that contemporary political scientists and psychologists use to characterize popular policy preferences.

The Liberalism I have in mind refers to a distinctive understanding of the relationship between the individual and the state. What's distinctive about it, in fact, is that individuals come first. The apparatus of the state exists to secure the greatest degree of equal liberty for individuals, who, aside from their obligation to abide by laws that serve that end, must be respected as free to pursue happiness on terms of their own choosing.

The great mass of ordinary people who call themselves "conservatives" in the US (and in Australia, in the UK, in France, Germany, Canada . . .) are as committed to Liberalism in this sense as are those who call themselves "liberals" (although in fact, the great mass of people either don't call themselves "conservative" or "liberal" or, if they do, don't really have any particular coherent idea of what doing so entails). They are so perfectly and completely committed to Liberalism that they can barely conceive of what it would look like to live in a political regime with a different animating principle.

The currency of disgust is officially valueless in the Liberal state’s economy of political justification. Under the constitution of the Liberal State, the offense one group of citizens experience in observing or knowing that another finds satisfaction in a way of life the first finds repulsive is not a cognizable harm.

We all know this—better, just are this, whether or not we “know” it; it’s in the nature of a political regime to make its animating principle felt even more than “understood.” And we all honestly believe that we are abiding by this fundamental principle when we demand that behavior that truly disgusts us—the practice of same-sex or polygamous marriage, the consumption of drugs, the furnishing of a child with a “Crickett,” and the like—be prohibited not because we find it revolting but because it is causing harm.

As a result, the idea that we are unconsciously imputing “harm” selectively to what disgusts us (or otherwise offends sensibilities rooted not in our commitment to avoiding harm to others but in our commitment to more culturally partisan goods) is unsettling, and like many unsettling things a matter we tend to discount.

At the same time, the remarkable, and everywhere perfectly obvious congruence of the disgust sensibilities and perceptions of harm formed by those who hold cultural and political commitments different from our own naturally suggests to us that those others are either attempting to deceive us or are in fact deceiving themselves via a process of unconscious rationalization.

This is in fact a process well known to social psychology, which calls it “naïve realism.”  People are good at recognizing the tendency of those who disagree with them to fit their perceptions of risk and other facts related to contested policy issues to their values and group commitments. Ordinary people are realists in this sense. At the same time, they don’t readily perceive their own vulnerability to the very same phenomenon. This is the naïve part!

Here, then, people with "liberal" political outlooks can be expected to credit work that tells them that "conservatives" are uniquely ilLiberal -- that "conservatives," as opposed to "liberals," are consciously or unconsciously evaluating behavior with a morality that is guided by disgust rather than harm.

All of this is separate, of course, from whether the work in question is valid or not. My point is simply that we can expect findings of that sort to be accepted uncritically by those whose cultural and political predispositions it gratifies.

Would this be so surprising?  The work in question, after all, is itself applying the theory of “motivated cognition,” which predicts this sort of ideologically selective assessment of the strength of empirical evidence.

Still, that motivated reasoning would generate, on the part of the public, an ideological slant in the disposition to credit evidence that ilLiberal sensibilities disproportionately guide the moral judgments of those whose ideology one finds abhorrent (disgusting, even) is, as I indicated, only a conjecture. 

In fact, I view the experiment that I performed on cognitive reflection, ideology and motivated reasoning as effectively modeling this sort of process. 

But like all matters that admit of empirical assessment, the proposition that ideologically motivated reasoning will create support for the view that aspects of it -- including the cognitive force of "disgust" in orienting perceptions of harm -- are ideologically or culturally asymmetric is not something that can be conclusively established by a single empirical study. Indeed, it is not something that can ever be "conclusively" settled; it is rather a matter on which beliefs must always be regarded as provisional and revisable in light of whatever the evidence might show.

In the meantime, we can enjoy the excellent work of scholars like Haidt and Nussbaum, and the competing positions of theorists and empiricists like Miller, Douglas, and Giner-Sorolla, as compensation for having to endure the depressing spectacle of cultural polarization over matters like guns, climate change, nuclear power, the HPV vaccine, drugs, unorthodox sex practices . . . etc., etc.

(Some) references:

Douglas, M. Purity and Danger: An Analysis of Concepts of Pollution and Taboo (Praeger, New York; 1966).

Giner-Sorolla, R. & Chaiken, S. Selective Use of Heuristic and Systematic Processing Under Defense Motivation. Pers Soc Psychol B 23, 84-97 (1997).

Giner-Sorolla, R., Chaiken, S. & Lutz, S. Validity beliefs and ideology can influence legal case judgments differently. Law Human Behav 26, 507-526 (2002).

Graham, J., Haidt, J. & Nosek, B.A. Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology 96, 1029-1046 (2009).

Gutierrez, R. & Giner-Sorolla, R. Anger, disgust, and presumption of harm as reactions to taboo-breaking Behaviors. Emotion 7, 853-868 (2007).

Haidt, J. & Graham, J. When Morality Opposes Justice: Conservatives Have Moral Intuitions that Liberals may not Recognize. Social Justice Research 20, 98-116 (2007). 

Haidt, J. & Hersh, M.A. Sexual morality: The cultures and emotions of conservatives and liberals. J Appl Soc Psychol 31, 191-221 (2001). 

Horvath, M.A.H. & Giner-Sorolla, R. Below the age of consent: Influences on moral and legal judgments of adult-adolescent sexual relationships. J Appl Soc Psychol 37, 2980-3009 (2007).

Kahan, D. Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study. CCP Working Paper No. 107 (2012).  

Kahan, D.M. The Cognitively Illiberal State. Stan. L. Rev. 60, 115-154 (2007). 

Kahan, D.M. The Progressive Appropriation of Disgust, in Critical America. (ed. S. Bandes) 63-79 (New York University Press, New York; 1999). 

Miller, W.I. The Anatomy of Disgust (Harvard University Press, Cambridge, Mass.; 1997).

Nussbaum, M.C. Hiding from Humanity: Disgust, Shame, and the Law (Princeton University Press, Princeton, N.J.; 2004).

Robinson, R.J., Keltner, D., Ward, A. & Ross, L. Actual Versus Assumed Differences in Construal: "Naive Realism" in Intergroup Perception and Conflict. J. Personality & Soc. Psych. 68, 404-417 (1995).

Sherman, D.K., Nelson, L.D. & Ross, L.D. Naïve Realism and Affirmative Action: Adversaries are More Similar Than They Think. Basic & Applied Social Psychology 25, 275-289 (2003).

 

p.s. Check out the great bibliography of writings by the talented and prolific psychologist Yoel Inbar.

 
