Wednesday
Jul 10, 2013

Fooled twice, shame on who? Problems with Mechanical Turk study samples, part 2

From Mueller, Chandler, & Paolacci, Soc'y for P&SP, 1/28/12

This is the second post in a two-part series on what I see as the invalidity of studies that use samples of Mechanical Turk workers to test hypotheses about cognition and political conflict over societal risks and other policy-relevant facts.

In the first, I discussed the concept of a “valid sample” generally.  Basically, I argued that it’s a mistake to equate sample “validity” with any uniform standard or any single, invariant set of recruitment or stratification procedures.

Rather, the validity of the sample depends on one thing only: whether it supports valid and reliable inferences about the nature of the psychological processes under investigation.

College student samples are fine, e.g., if the dynamic being studied is reasonably understood to be uniform for all people.

A nonstratified general population sample will be perfectly okay for studying processes that vary among people of different characteristics so long as (1) there are enough individuals from subpopulations whose members differ in the relevant respect and (2) the recruitment procedure didn’t involve methods that might have either discouraged participation by typical members of those groups or unduly encouraged participation by atypical ones.

Indeed, a sample constructed by methods of recruitment and stratification designed to assure “national representativeness” might not be valid (or at least not support valid inferences) if the dynamic being studied varies across subgroups whose members aren’t represented in sufficient number to enable testing of hypotheses relating specifically to them.

Etc.

Now I will explain why, on the basis of this pragmatic understanding of what sample validity consists in, MT samples aren’t valid for the study of culturally or ideologically grounded forms of “motivated reasoning” and like dynamics that it is reasonable to believe account for polarization over climate change, gun control, nuclear power, and other facts that admit of empirical study.

I don’t want to keep anybody in suspense (or make it necessary for busy people to deal with more background than they think they need or might already know), so I’ll just start by listing what I see as the three decisive “sample validity” problems here. I’ll then supply a bit more background—including a discussion of what Mechanical Turk is all about, and a review of how this service has been used by social scientists—before returning to the three validity issues, which I’ll then spell out in greater detail.

Ready? Here are the three problems:

1.  Selection bias.  Given the types of tasks performed by MT workers, there’s good reason to suspect subjects recruited via MT differ in material ways from the people in the world whose dispositions we are interested in measuring, particularly conservative males.

2.  Prior, repeated exposure to study measures.  Many MT workers have participated multiple times in studies that use performance-based measures of cognition and have discussed among themselves what the answers are. Their scores are thus not valid.

3.  MT subjects misrepresent their nationality.  Some fraction of the MT work force participating in studies that are limited to “U.S. residents only” aren't in fact U.S. residents, thereby defeating inferences about how psychological dynamics distinctive of U.S. citizens of diverse ideologies operate. 

That’s the short answer. Now some more detail.

A. What is MT? To start, let’s briefly review what Mechanical Turk is—and thus who the subjects in studies that use MT samples are.

Operated by Amazon.com, MT is essentially an on-line labor market.  Employers, who are known as “requesters,” post solicitations for paid work, which can be accepted by “workers,” using their own computers.

Pay is very modest: it is estimated that MT workers make about $1.50/hr.

The tasks they perform are varied: transcription, data entry, research, etc.

But MT is also a well-known instrument for engaging in on-line fraud.

MT workers get paid for writing fake product or service reviews—sometimes positive, sometimes negative, as the requester directs.

They can also garner a tiny wage for simply “clicking” on specified links in order to generate bogus web traffic at the behest of “requesters” who themselves have contracted to direct visitors to legitimate websites, which are in this case the victims of the scam.

These kinds of activities are contrary to the Amazon.com “terms of use” for MT, but that doesn’t restrain either “requesters” from soliciting “workers” or “workers” from agreeing to engage in them.

Another common MT labor assignment—one not contrary to MT rules—is the indexing of sex acts performed in internet pornography.

MT Requester solicitation for porn indexing, July 10, 2013

B. The advent of MT “study samples.” A lot of MT workers take part in social science studies.  Indeed, many workers take part in many, many such studies.

The appeal of using MT workers in one’s study is pretty obvious. They offer a researcher a cheap, abundant supply of eager subjects. In addition, for studies that examine dynamics that are likely to vary across different subpopulations, the workers offer the prospect of the sort of diversity of characteristics one won’t find, say, in a sample of college students.

A while back researchers from a variety of social science disciplines published studies aimed at “validating” MT samples for research that requires use of diverse subjects drawn from the general population of the U.S. Encouragingly, these studies reported that MT samples appeared reasonably “representative” of the general population and performed comparably to how one would expect members of the general public to perform.

On this basis, the floodgates opened, and journals of all types—including elite ones—began to publish studies based on MT samples.

To be honest, I find the rapidity with which these journals embraced MT samples mystifying.

Even taking the initial studies purporting to find MT samples “representative” at face value, the fact remains that Amazon is not in the business of supplying valid social science research samples.  It is in the business (in this setting) of brokering on-line labor contracts. To satisfy the booming demand for such services, it is constantly enrolling new “workers.”  As it enlarges its MT workforce, Amazon does nothing—zip—to assure that the characteristics of its “workers” won’t change in ways that make them unsuited for social science research.

In any case, the original papers—which reflect data that are now several years old—certainly can’t be viewed as conferring a “lifetime” certification of validity on MT samples. If journals care about sample validity, they need to insist on up-to-date evidence that MT samples support valid inferences relating to the matters under investigation.

The most recently collected evidence—in particular Chandler, Mueller, Paolacci (in press) [actually, now published!] & Shapiro, Chandler & Mueller (2013)—doesn’t justify that conclusion.  On the contrary, it shows very convincingly that MT samples are invalid, at least for studies of individual differences in cognition and their effect on political conflict in the U.S.

C. Three major defects of MT samples for the study of culturally/ideologically motivated reasoning

1.  Selection bias

Whatever might have been true in 2010,  it is clear that the MT workforce today is not a picture of America.

MT workers are “diverse,” but lots of groups are variously over- and under-represented among them.

Like men: researchers can end up with a sample that is 62% female.

African Americans are also substantially under-represented: 5% rather than the 12% they make up in the general population.

There are other differences too, but the one that is of most concern to me—because the question I’m trying to answer is whether MT samples are valid for study of cultural cognition and like forms of ideologically motivated reasoning—is that MT grossly underrepresents individuals who identify themselves as “conservatives.”

This is clear in the frequencies that researchers relying on MT samples report. In Pennycook et al. (2012),  e.g., 53% of the subjects in their sample self-identified as liberal and 25% identified as conservative.  Stratified national surveys (from the same time as this study) suggest that approximately 20% of the general population self-identifies as liberal and 40% as conservative.
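Just how lopsided that is can be quantified. Here is a minimal sketch, in Python, of a chi-square goodness-of-fit test comparing the reported sample proportions to the population benchmarks; the sample size of 500 is hypothetical, chosen only for illustration.

```python
from scipy.stats import chisquare

# Shares quoted above: sample (Pennycook et al. 2012) vs. national benchmarks.
# A residual "moderate/other" category absorbs the remainder in each case.
n = 500  # hypothetical sample size, for illustration only
observed = [0.53 * n, 0.25 * n, 0.22 * n]  # liberal, conservative, other
expected = [0.20 * n, 0.40 * n, 0.40 * n]
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p:.2g}")  # vanishingly small p: the sample's
# ideological mix is nothing like the general population's
```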

In addition to how they “identify” themselves, MT worker samples don’t behave like ones consisting of ordinary U.S. conservatives (a point that will take on more significance when I return to their falsification of their nationality). In a 2012 Election Day survey, Richey & Taylor (2012) report that “73% of these MTurk workers voted for Obama, 15% for Romney, and 12% for ‘Other’ ” (this assumes we can believe they were eligible to vote in the U.S. & did; I’ll get to this).

But the reason to worry about the underrepresentation of conservatives in MT samples is not simply that the samples are ideologically “unrepresentative” of the general population.  If that were the only issue, one could simply oversample conservatives when doing MT studies (as I’ve seen at least some authors do).

The problem is what the underrepresentation of conservatives implies about the selection of individuals into the MT worker “sample.” There’s  something about being part of the MT workforce, obviously, that is making it less appealing to conservatives.

Maybe conservatives are more affluent and don’t want to work for $1.50/hr.

Or maybe they are more likely to have qualms about writing fake product reviews or watching hours of porn and indexing various sex acts. After all,  Jonathan Haidt & others have found that conservatives have more acute  disgust sensibilities than liberals.

But in any case, since we know that conservatives are by and large reluctant to join the MT workforce, we also can infer there is something different about the conservatives who do sign up from the ones who don’t.

What's different about them, moreover, might well be causing them to respond differently in studies from how ordinary conservatives in the U.S. population would. Something must be, if we consider how many of them claim to have voted for Obama or a third-party candidate in the 2012 election!

If they are less partisan, then, they might not demonstrate as strong a motivated reasoning effect as ordinary conservatives would.

Alternatively, their decision to join the MT workforce might mean they are less reflective than ordinary conservatives and are thus failing to ponder the incongruity between indexing porn, say, and their political values.

For all these reasons, if one is interested in learning about how dispositions to engage in systematic information  processing are affected by ideology, one just can’t be sure that what we see in “MT conservatives” will generalize to the real-world population of conservatives.

I’ve seen one study based on an MT sample that reports a negative correlation between “conservatism” and scores on the Cognitive Reflection Test, the premier measure of the disposition to engage in conscious, effortful assessment of evidence—slow “System 2” in Kahneman’s terms—as opposed to the rapid, heuristic-driven, error-prone, evidence-neglectful sort (“System 1”).

That was the study based on the particular MT sample I mentioned as grossly overrepresenting liberals and underrepresenting conservatives.

I’ve collected data on CRT and ideology in multiple general population surveys—ones that were designed to and did generate nationally representative panels by using recruitment and stratification methods validated by the accuracy of surveys using them to predict national election results. I consistently find no correlation between ideology and CRT.
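For readers who want to see the mechanics of such a check, here is a minimal sketch. The data are simulated stand-ins (a 7-point ideology item and 0–3 CRT scores drawn independently), not my survey data, so the null result is built in by construction.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# Simulated stand-ins: a 7-point liberal-conservative item and CRT scores (0-3),
# drawn independently to mirror the null relationship described above.
ideology = rng.integers(1, 8, size=1500)
crt = rng.binomial(3, 0.25, size=1500)
r, p = pearsonr(ideology, crt)
print(f"r = {r:.3f}, p = {p:.2f}")  # r near zero: no ideology-CRT correlation
```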

In short, the nature of the MT workforce—what it does, how it is assembled, and what it ends up generating—makes me worry that the underrepresentation of conservatives reflects a form of selection bias relative to the sort of individual differences in cognition that I’m trying to measure.

That risk is too big for me to accept in my own research, and even if it weren't, I'd expect it to be too big for many consumers of my work to accept were they made aware of the problem I'm identifying. 

BTW, the only other study I’ve ever seen that reports a negative correlation between conservatism and CRT also had serious selection bias issues. That study used subjects enticed to participate in an experiment at an internet site that is targeted to members of the public interested in moral psychology. As an incentive to participate in the study, researchers promised to tell the subjects what their study results indicated about their cognitive style. One might think that such a site, and such an incentive, would appeal only to highly reflective people, and indeed the mean CRT scores reported for study participants (liberals, conservatives, and libertarians) rivaled or exceeded the ones attained by students at elite universities and were (for all ideological groups) much higher than those typically attained by members of the general public. As a colleague put it, purporting to infer how different subgroups will score on the CRT from such a sample is the equivalent of a researcher reporting that “women like football as much as men” based on a sample of visitors to ESPN.com!

2. Pre- & multiple-exposure to cognitive performance measures

Again, Amazon.com isn’t in the business of furnishing valid study samples. One of the things that firms that are in that business do is keep track of what studies the subjects they recruit have participated in, so that researchers won’t be testing people repeatedly with measures that don’t generate reliable results in subjects who’ve already been exposed to them.

The Cognitive Reflection Test fits that description. It involves three questions, each of which seems to have an obvious answer that is in fact wrong. (E.g.: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?” The intuitive answer is 10 cents; the correct one is 5 cents.) People disposed to search for and reflect on evidence that contradicts their intuitions are more likely to get those answers right.

But even the most unreflective, visceral thinker is likely to figure out the answers eventually, if he or she sees the questions over & over. 

That’s what happens on MTurk. Subjects are repeatedly recruited to participate in studies on cognition that use the CRT and similar tests of cognitive style.

What’s more, they talk about the answers to such tests with each other. MT workers have on-line “hangouts” where they share tips and experiences. One of the things they like to talk about is the answers to the CRT. Another is why researchers keep administering an “intelligence test” (that’s how they interpret the CRT, not unreasonably) whose answers they clearly already know.

These facts have been documented by Chandler, Mueller, and Paolacci in an article in press [now out--hurry & get yours before news stand sells out!] in Behavior Research Methods.

Not surprisingly, MT workers achieve highly unrealistic scores on the CRT, ones comparable to those recorded among students at elite universities and far above those typically reported for general population samples.

Other standard measures relating to moral reasoning style--like the famous "trolley problem"--also get administered to and answered by the same MT subjects over & over, and discussed by them in chat forums. I'm guessing that's none too good for the reliability/validity of responses to those measures either.

As Chandler, Mueller, Paolacci note, 

There exists a sub-population of extremely productive workers which is disproportionately likely to appear in research studies. As a result, knowledge of some popular experimental designs has saturated the population of those who quickly respond to research HITs; further, workers who read discussion blogs pay attention to requester reputation and follow the HITs of favored requesters, leading individual researchers to collect fans who will undoubtedly become familiar with their specific research topics.

There’s nothing that an individual researcher can effectively do to counteract this problem.  He or she can’t ask Amazon for help: again, it isn’t a survey firm and doesn’t give a shit whether its workforce is fit for participation in social science studies.

The researcher can, of course, ask prospective MT “subjects” to certify that they haven’t seen the CRT questions previously.  But there is a high probability that the workers—who know that their eligibility to participate as a paid study subject requires such certification—will lie.

MT workers have unique id numbers.  Researchers have told me that they have seen plenty of MT workers who say they haven’t taken the CRT before but who in fact have—in those researchers’ own studies.  In such cases, they simply remove the untruthful subject from their dataset.

But these and other researchers have no way to know how many of the workers they’ve never themselves tested before are lying too when they claim to be one of the shrinking number of MT workers who have never been exposed to the CRT. 

So researchers who collect data on performance-based cognition measures from MT workers really have no way to be sure  that these very high-scoring subjects are genuinely super reflective or just super dishonest.

I sure wouldn’t take a risk like this in my own research. And I’m also not inclined to take the risk of being misled by relying on studies of researchers who have disregarded it in reporting how scores on CRT or other cognitive performance measures relate to ideology (or religion or any other individual difference of interest).

3. Misrepresentation of nationality (I know who these guys are; but who are MT workers? I mean—really?)

Last but by no means least: Studies based on MT samples don’t support valid inferences about the interaction of ideology and cognition in polarizing U.S. policy debates because it’s clear that some fraction of the MT subjects who claim to be from the U.S. when they contract to participate in a study aren’t really from the United States.

This is a finding from Shapiro, Chandler and Mueller (2013), who in a survey determined that a “substantial” proportion of the MT workers who are “hired” for studies with “US only” eligibility are in fact participating in them via foreign internet-service providers.

I also know of cases in which researchers have detected MT subjects using Indian IP addresses participating in their "US only" studies. 

Amazon requires MT workers to register their nationality when joining the MT labor force. But because MT workers recognize that some “requesters” attach “US worker only” eligibility criteria to their labor requests, MT workers from other countries—primarily India, the second-largest source of MT labor after the U.S.—have an incentive to misrepresent their nationality.

I'm not sure how easy this is to pull off since Amazon now requires US citizens to supply Social Security numbers and non-US citizens who reside in the US to supply comparable information relevant to tax collection.

But it clearly isn't impossible for determined, internet-savvy and less-than-honest people to do. 

Part of pulling off the impersonation of a US resident involves signing up for MT through an account at a firm that uses a VPN to issue US IP addresses to internet users outside the U.S.  Indeed, aspiring non-US MT workers have an even bigger incentive to do that now because Amazon, in response to fraudulent use of its services, no longer enrolls new non-US workers into the MT labor force.

Shapiro, Chandler & Mueller recommend checking the IP addresses of subjects in “US only” studies and removing from the sample those whose IP addresses show they participated from India or another country.
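In code, that screening step might look something like the minimal sketch below. It assumes the MaxMind geoip2 package and a local GeoLite2 country database; the subject-list format and the function name are hypothetical.

```python
import geoip2.database
import geoip2.errors

def drop_non_us(subjects, db_path="GeoLite2-Country.mmdb"):
    """Keep only (worker_id, ip_address) pairs whose IP geolocates to the US."""
    kept = []
    with geoip2.database.Reader(db_path) as reader:
        for worker_id, ip in subjects:
            try:
                if reader.country(ip).country.iso_code == "US":
                    kept.append((worker_id, ip))
            except geoip2.errors.AddressNotFoundError:
                pass  # IPs the database can't place at all are dropped too
    return kept
```

Even so, a check like this only catches the careless.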

But this is not a very satisfying suggestion.  Just as MT workers can use a VPN to misrepresent themselves as U.S.-residents when they initially enroll in MT, so they can use a VPN to disguise the location from which they are participating in U.S.-only studies. 

Why wouldn’t they? If they didn’t lie, they might not be eligible to “work” as study subjects--or to work at all, if they signed up after Amazon stopped enrolling non-US workers.

True, lying is dishonest.  But so are a great many of the things that MT workers routinely do for paying MT requesters.

Charmingly, Shapiro, Chandler and Mueller (2013) also found that MT subjects, who are notorious for performing MT tasks at the office when they are supposed to be working, score high on a standard measure of the disposition to engage in “malingering.”

That’s a finding I have complete confidence in. Remember, samples that are not “valid” for studying certain types of dynamics can still be perfectly valid for studying others.

* * * *

The name for Amazon’s “Mechanical Turk” service comes from a historical episode in the late 18th century in which a con artist duped amazed members of the public into paying him a small fee for the chance to play chess against “the Turk”—a large, turban-wearing, pipe-smoking manikin who appeared to be spontaneously moving his own pieces with his mechanized arm and hand.

The profitable ruse went on for decades, until finally, in the 1820s, it was discovered that the “Turk” was being operated by a human chess player hidden underneath its boxy chassis.

Today social scientists are lining up to pay a small fee—precisely because it is so much smaller than what it costs to recruit a valid general population sample—to collect data on Amazon’s “Mechanical Turk.”

But if the prying open of the box reveals that the subjects performing the truly astonishing feats of cognition being observed in these researchers’ studies are “malingering” college students in Mumbai posing as  “U.S. Democrats” and “Republicans” in between jobs writing bogus product reviews and cataloging sex acts in on-line porn clips, I suspect these researchers will feel more foolish than anyone who paid to play chess with the original “Turk.”

Some references

Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2011). Using Mechanical Turk as a subject recruitment tool for experimental research. Political Analysis, 20(3), 351-368.

Chandler, J., Mueller, P., & Paolacci, G. (in press). Methodological concerns and advanced uses of crowdsourcing in psychological research. Behavior Research Methods.

Experimental Turk: a blog on social science experiments on Amazon Mechanical Turk

Mueller, P., Chandler, J., & Paolacci, G. (2012). Advanced uses of Mechanical Turk in psychological research. Presentation at Society for Personality & Social Psychology, Jan. 28, 2012.

Pennycook, G., Cheyne, J. A., Seli, P., Koehler, D. J., & Fugelsang, J. A. (2012). Analytic cognitive style predicts religious and paranormal belief. Cognition, 123(3), 335-346. doi: 10.1016/j.cognition.2012.03.003

Richey, S., & Taylor, B. (2012). How representative are Amazon Mechanical Turk workers? The Monkey Cage.

Shapiro, D. N., Chandler, J., & Mueller, P. A. (2013). Using Mechanical Turk to study clinical populations. Clinical Psychological Science. doi: 10.1177/2167702612469015

 

Monday
Jul 08, 2013

What's a "valid" sample? Problems with Mechanical Turk study samples, part 1

It’s commonplace nowadays to see published psychology studies based on samples consisting of “workers” hired to participate in them via Amazon’s “Mechanical Turk,” a proprietary system that enables Amazon to collect a fee for brokering on-line employment relationships.

I’ve been trying to figure out for a while now what I think about this practice.

After considerable reading and thinking, I’ve concluded that “MT” samples are in fact a horribly defective basis for the study of the dynamics I myself am primarily interested in—namely, ones relating to how differences in group commitments interact with the cognitive processes that generate cultural or political polarization over societal risks and other facts that admit of scientific study.

I’m going to explain why, and in two posts.  To lay the groundwork for my assessment of the flaws in MT samples, this post will set out a very basic account of how to think about the “validity” of psychology samples generally.

Sometimes people hold forth on this as if sample validity were some disembodied essence that could be identified and assessed independently of the purpose of conducting a study. They say things like, “That study isn’t any good—it’s based on college students!” or make complex mathematics-pervaded arguments about “probability based stratification” of general population samples and so forth.

The reason to make empirical observations is to generate evidence that gives us more reason or less than we otherwise would have had to believe some proposition or set of propositions (the ones featured in the study hypotheses) about how the world works.

The validity of a study sample, then, depends entirely on whether it can support inferences of that sort. 

Imagine someone is studying some mental operation that he or she has reason to think is common to all people everywhere—say, “perceptual continuity,” which involves the sort of virtual, expectation-based processing of sensory stimuli that makes people shockingly oblivious to what seem like shockingly obvious but unexpected phenomena, like the sudden appearance of a gorilla among a group of basketball players or the sudden substitution of one person for another during a conversation between strangers.

Again, on the researcher's best understanding of the mechanisms involved, everyone everywhere is subject to this sort of effect, which reflects processes that are in effect “hard wired” and invariant.  If that’s so, then pretty much any group of people—so long as they haven’t suffered some sort of trauma that might change the operation of the relevant mental processes—will do.

So if a researcher wants to test whether a particular intervention—like telling people about this phenomenon—will help to counteract it, he or she can go ahead and test it on any group of normal people that researcher happens to have ready access to—like college undergraduates.

But now imagine that one is studying a phenomenon that one has good reason to believe will generate systematic differences among individuals identified with reference to certain specific characteristics. 

That’s true of “cultural cognition” and like forms of motivated reasoning that figure in the tendency of people to fit their assessments of information—from scientific “data” to expository arguments to the positions of putative experts to (again!) their own sense impressions—to positions on risk and like facts that dominate among members of their group.

Because the phenomenon involves individual differences, a sample that doesn’t contain the sorts of individuals who differ in the relevant respects won’t support reliable inferences.

E.g., there’s a decent amount of evidence that white males with hierarchic and individualistic values (or with “conservative” political orientations; cultural values and measures of political ideology or party affiliation are merely alternative indicators of the same latent disposition, although I happen to think cultural worldviews tend to work better) are motivated to be highly skeptical of environmental and technological risks. Such risk claims, this work suggests, are psychically threatening to such individuals, because their status and authority in society tends to be bound up with commercial and industrial activities that are being identified as dangerous, and worthy of regulation.

If one wants to investigate how a particular way of “framing” information might dissipate dismissiveness and promote open-minded engagement with evidence on climate change, then it makes no sense to test such a hypothesis on, say, predominantly female undergraduates attending a liberal east-coast university.  How they respond to the messages in question won’t generate reliable inferences about how white, hierarchical individualistic males will—and they are the group in the world that we have reason to believe is reacting in the most dismissive way to scientific evidence on climate change.

Obviously, this account of “sample validity” depends on one being right when one thinks one has “good reason to know” that the dynamics of interest are uniform across people or vary in specific ways across subpopulations of them.

But there’s no getting around that! If one uses a “representative” general population sample to study a phenomenon that in fact varies systematically across subpopulations, then the inferences one draws will also be faulty, unless one both tests for such individual differences and assures that the sample contains a sufficiently large number of the subpopulation members to enable detection of such effects. Indeed, the way to assure that there are enough members of the subpopulations--particularly if one of them is small, like, say, a racial minority--is to oversample, generating a nonrepresentative sample!
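A minimal sketch of how that works in practice, with made-up shares: oversample the small subgroup for statistical power, then apply post-stratification weights so pooled estimates still reflect the population.

```python
# Illustrative numbers only: population shares and an oversampled design.
pop_share = {"minority": 0.12, "majority": 0.88}
sample_n = {"minority": 300, "majority": 700}   # minority deliberately oversampled
total_n = sum(sample_n.values())

# Post-stratification weight = population share / sample share.
weights = {g: pop_share[g] / (sample_n[g] / total_n) for g in pop_share}
print(weights)  # minority responses downweighted (~0.4), majority upweighted (~1.26)
```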

The point is that the validity of a sample depends on its suitability for the inferences to be drawn about the dynamics in question.  That feature of a sample can’t be determined in the abstract, according to any set of mechanical criteria.  Rather it has to be assessed in a case-specific way, with the exercise of judgment. 

And like anything of that sort—or just anything that one investigates empirically—the conclusions one reaches will need to be treated as provisional insofar as later on someone might come along and show that the dynamics in question involved some feature that evaded detection with one’s sample, and thus undermines the inferences one drew.  Hey, that's just the way science works!

Maybe on this account Mechanical Turk samples are “valid” for studying some things.   

But I’m convinced they are not valid for the study of how cultural or ideological commitments influence motivated cognition: because of several problematic features of such samples, one cannot reliably infer from studies based on them how this dynamic will operate in the real world.

I’ll identify those problematic features of MT samples in part two of this series.

 

Friday
Jul 05, 2013

No one is afraid but we can still learn a lot from studying nanotechnology risk perceptions

No one is afraid of nanotechnology.

And at this point, no one any longer seems to be afraid that people will become afraid of nanotechnology.

There used to be a lot of anxiety about that.  Federal research agencies, foundations, and private industry all supported studies aimed at predicting how the public would react to nanotechnology—and how unreasoning fear, or self-reinforcing dynamics of political polarization, might be averted through adroit communication strategies. The worry was that nanotechnology might follow the path of nuclear power or genetically modified foods in Europe.

The concern seemed reasonable.  Indeed, CCP did a study, which found that when individuals were exposed to balanced information on nanotechnology’s potential risks and benefits, they polarized along lines that reflected their cultural predispositions toward other environmental and technological risks, such as climate change and nuclear power.

But nanotechnology has been around for many years now, and nothing of any consequence has happened. The public remains largely oblivious—neither concerned in general nor polarized.

To illustrate, consider some data collected from a large, nationally representative sample in April.  Nanotechnology is a risk-perception blip compared to climate change and nuclear power.

The public not only has failed to become anxious about nanotechnology. It still hasn't really noticed that nanotechnology exists.  Surveys continue to show that very few people say they have heard about or know much of anything about it.

Maybe things will still “heat up.”  But I’d be surprised at this point. Very surprised.

That doesn’t mean I think it was a waste for researchers to have studied public reactions to nanotechnology. 

On the contrary, I think the self-conscious effort to try to forecast its possible risk-perception trajectories—for the purpose, if possible, of guiding it away from influences inimical to reasoned and constructive engagement of the best available scientific evidence—was a model one, well worth emulating for emerging technologies in general.

The number of putative risk sources that are amenable to cultural polarization will always exceed by a large margin the number that actually do generate polarization. But because the public welfare costs of such conflict are so high, it makes sense to try to learn what influences cause emerging technologies to become suffused with this pathology, and what sorts of steps can be taken to steer them clear of it.

Still, it does seem to me that at this point the question is not so much “whether” nanotechnology will become suffused with controversy but why exactly it didn’t. Researchers who are continuing to focus on nanotechnology should try to figure that out—by, say, taking a close look at the career of nanotechnology in public discourse and comparing it with various other technologies, both ones that have generated high degrees of concern and controversy and ones that haven’t.

The only valid way to learn about causation is to examine carefully both the occurrence and non-occurrence of events of interest.

 Now, here’s a conjecture to get things started.

Although most people still haven’t heard of nanotechnology, my own casual observations suggest that if one asks people to “complete the phrase” that begins “nano. . .,” the most likely response (after “huh?”) is “iPod.”

I suspect Apple has immunized nanotechnology from controversy by infusing it with the positive connotation evoked, more or less universally, by its more or less universally beloved entertainment technology.

Can this outcome—which surely was a matter of sheer happenstance—be consciously directed in the future?

Today there is widespread concern that synthetic biology will be the next “nuclear power” or “GMO” in risk-perception terms.

To forestall this, I propose—the Synbio iPad!

Using the engineering techniques associated with synthetic biology, researchers have in fact created a form of E. coli capable of solving complicated math problems.

So, just fuse some E. coli with the processor of the iPad 4 or 5 or 9 or whatever we are up to—and voilà: as the tide of public infatuation rises, goodwill will spill over onto synthetic bio (“synbio . . . iPad!”), immunizing it from the mindless contentiousness that has infected climate change, GM foods (in Europe), nuclear power (everywhere but in France), etc.!

(Actually, E. coli are the rock stars of synbio—being taught how to do all sorts of astonishing things, including how to fill the air with pleasant fragrances; a very nice turnaround for an organism that has borne a wretched stigma since the beginning of human understanding of microbial life.)

As I said, merely a conjecture.

But inspired insight of this sort is among the predictable public-welfare returns on investments in scientific  risk-perception forecasting.

Tuesday
Jul 02, 2013

Does communicating research on public polarization polarize the public?

One of the things that makes this blog so astonishingly popular (we recently broke through the 14 billion unique daily readers ceiling!) is its relentless topicality.

Well, just yesterday, world famous, world class USA Today science journalist Dan Vergano published an amazingly informative story on research into the psychology of public conflict over climate change—and today we present a guest post from the same Dan Vergano on what it’s like to write about the psychology of public conflict over climate change!

DV addresses the challenges of communicating information on polarization to a polarized general public. Is effective communication of scientific research on this topic constrained by the same dynamics that account for polarization? Does trying to explain the phenomenon of cultural polarization itself polarize citizens?

I’m sure there will be consensus among this site's 14+ billion regular readers that these are fascinating and difficult questions, and that DV’s insights are penetrating.

I've added my own reflections on the experience of communicating work like mine to the public. I anticipate the usual dissensus among site commentators on the coherence & value of those -- indeed, I'd be disappointed by anything other than that!

 

Dan Vergano: Pole-Vaulting a polarized public?

How do you solve a problem like Dan Kahan and his polarization puzzle? I confess it worries me. How, for example, do I write about his finding that conservative-minded men view risks in a way poles apart from other people without  feeding into that very same polarization? And more important, how do I write about it in a way that doesn’t prevent me from doing my job?

I write news for a living. Sadly a rare thing now, I write news stories for the general reader, the average Joe, the man-or-woman on the street, the likely not-you if you are reading this post.

Continue reading Vergano

 

Dan Kahan: Enabling consensus on the sources & consequences of cultural dissensus

Dan Vergano initially asked me if I had any recommendations about the challenges of communicating the science of science communication -- & what it says about the sources of polarization --  to the polarized public, and in particular how to do this without triggering the sorts of dynamics that polarize culturally diverse citizens.   

I thought initially I’d just draw on my own experience in this regard—and realized that would be utterly unhelpful because the sort of “public” I communicate with is significantly different from the one he writes for. Indeed, I realized that what DV is up to is quite amazing and that I really wanted him to tell me & others how he pulls it off.

Continue reading Kahan

Friday
Jun 28, 2013

Decisive strike in the "asymmetry" debate?!

I've been underwater & unable to post with my normal frequency (indeed, I'm underwater b/c of posting with my normal frequency, and thus falling behind on other things!)

But here is something to consider: a new paper from Nam, Jost & van Bavel on whether "conservatives" are more prone to "cognitive dissonance avoidance" than "liberals."

But the question: does the result bear on the "asymmetry thesis" (AT)?

AT asserts that conservatives should be more disposed to ideologically motivated reasoning than liberals.

The basis for this hypothesis is the finding of Jost and other scholars who correlate ideology with self-report measures of critical thinking -- Need for Cognition, Need for Closure,  Dogmatic thinking, and other scales assessing attitudes toward complexity & uncertainty etc. -- that "conservatives" display a more closed-minded cognitive style.

I've posted 913 entries on the asymmetry thesis (here, here & here, for examples) & also done my own study that tries to test it.

But maybe it's game over? This paper is the decisive strike?  

"Cognitive dissonance avoidance" is very much related to motivated reasoning (itself a tendency to adjust one's assessments of facts to avoid disappointing one's predispositions). And here NJV-B report data that they see as demonstrating asymmetry -- conservatives are more disposed to "cognitive dissonance avoidance," they say, than liberals.

Chris Mooney, who has done an admirable job in synthesizing the relevant literature and making it publicly accessible in his book "The Republican Brain," sees this as compelling proof in favor of AT.

Obviously, I have views.  But not time to express them right now.  And besides, my views are not usually nearly so interesting as the ones that emerge in the discussion that they are the occasion for.

So let's do an experiment: can we have an interesting discussion w/o my saying anything (other than "hey-- what about this?")?

So what do others think of this study? Game over? 

Be a relief to have the debate on AT resolved, I suppose, since researchers could then turn all their attention to more important questions, like what the American public thinks of the NSA's policy on collecting metadata!

 

Tuesday
Jun 25, 2013

UK Environment Secretary Owen Paterson wants a "constructive, well informed and evidence-led" public discussion on GM foods. Any advice?

I received a thoughtful message from Jonas Kathage, who related interesting news about public discussion of GM foods in the UK and asked for my reaction. Since “he asked for it,” I let him have it—blasting him with a massive barrage of verbiage. Putting content aside, I do think the length of my response is an accurate measure of the importance and difficulty of the issues his query raises. I reproduce the exchange below & of course invite others to offer their advice (whether or not they have the time & patience to work through my own) on how Secretary Paterson can achieve his objective of initiating a “constructive, well informed and evidence-led” public discussion of the risks and benefits of GM food technology.

Jonas:

UK environment secretary Owen Paterson recently delivered a speech on genetically modified crops, calling for relaxing the restrictions on their cultivation we have in Europe. The speech is broadly supportive and points to various benefits of gm crops for people and environment. The full speech is here.

Now aside from the predictable outrage among anti-gmo groups, I was struck by a piece in the Guardian that seemed happy about efforts to restart a public discussion about gm crops, but at the same time argued Paterson wasted the opportunity by following the outdated deficit model and suggesting people are stupid. Here's the Guardian piece.

I'm turning to you because you are an expert on the science of science communication. While I remember you don't consider the communication environment to be polluted in the US, I feel it's a bit different in Europe. I'm wondering whether you agree with the Guardian commentator that Paterson's speech represents a wasted opportunity. Since I couldn't get a good idea from the piece or my follow-up on twitter about how the speech could be improved, what are your thoughts?

My response:

This is a very interesting development and a nice example of the challenges that are involved in promoting engaged and constructive public interaction with decision-relevant science. Thanks for pointing out the story and the Guardian piece, and also for your thoughtful framing of the issues -- without that I'm confident I wouldn't have been able to appreciate the value that reflecting on Paterson's speech presents.

I agree, the issue is about how to address a “polluted science communication environment.”  I’m sure ours—in the US—is as bad as yours.

Or really what I’m sure is that we both have problems that can be characterized this way, ones in which the ordinary, and ordinarily reliable rational faculties that ordinary people use to identify the best available decision relevant science becomes enfeebled by “toxic meanings,” which turn positions on facts into badges of group commitment and loyalty. You have a problem like that on GM foods; I don’t think we have it on that particular issue, yet. But obviously we have it. On lots of issues.

That said, I feel obliged to risk disappointing you by adopting a decidedly uncertain stance. For two reasons, one general and one specific. I hope I won’t wear out your patience in making you invest the time it will take to work through what I feel impelled to say in order to get what I agree will likely be a modest return (more in the nature of an investment strategy rather than a return in fact).

1. To start, I think affecting a posture of certainty and confidence on "how to communicate" on issues that feature or are vulnerable to cultural polarization will quite often be a mistake.  

We do know a good amount, as a result of careful empirical study, about the dynamics that generate toxic meanings; about steps that can help neutralize them; and about strategies that can help detoxify the science communication environment if those steps fail, or weren't taken to begin with.

But the sort of knowledge we have tends to be very general.  It concerns the mechanisms of consequence and how they interact with various influences.

Having that knowledge is extremely valuable, because there are many genuine mechanisms of social psychology and the like that could be playing a role, and without knowledge about which really are and how, then the likelihood that anyone will ever figure out what to do or not do (or even know whether what they did contributed to making things better or worse) will be essentially nil.  We will drown in an ocean of just-so stories.

But most of that knowledge was gleaned in studies carefully designed to home in on the mechanisms of interest and exclude everything else. That’s essential for one to be able to manipulate the mechanisms in revealing ways and to observe with confidence how they are responding (I’m sure you likely know all of this, so forgive me for the wind up).

But those kinds of pristine settings are—by design—simplified models. The settings in which one has to act will be much more complicated. One knows from the studies—from the models—what sorts of mechanisms it makes sense to try to engage in those settings. But one doesn’t know precisely how.

There’s only one way to figure that out: through use of the same methods that one used to identify the mechanisms of consequence in the first place! One has to engage in empirical field studies aimed at testing hypotheses about what sorts of “communication strategies” (very broadly understood; the strategies necessary to avert or treat a polluted science communication environment will often involve things other than just uttering words) can reproduce in the world the effects one attained in the “lab.”

Indeed, if one doesn’t do that, we will simply find ourselves again drowning in stories. For just as there are more plausible accounts of why we see cultural polarization than are actually true, there are more plausible accounts about how to use the genuine insights of the science of science communication to treat that pathology.  The currency of storytelling is just as valueless, and will buy us just as little real progress, at the “how to” stage as it did at the “what’s the problem” one.

I feel very strongly about this. So at the cost of an opportunity maybe to enjoy flattering attention, when you or some other thoughtful person in the middle of a communication problem (as I gather you are) asks me, “So what do we do, given what you’ve told us you know about the psychology of cultural polarization and science communication,” I feel constrained to say w/o equivocation, “I don’t know.”

But to recover – what? maybe some semblance of dignity! but more importantly the opportunity to be of use – I then add:

You tell me! And I will help you at that point by helping you to collect the evidence that will help you to figure out if you are right.

You are the one in the middle of this real world situation. You know lots of specific, relevant things about it—much more than I (or anyone else who studies the dynamics of communication for a living) does.  I’ve told you things that are important for you to know and that can help you make informed decisions about which of the things you were thinking about doing (likely one of them is the right thing; but which one?) is likely to work.  So the likelihood that you’ll know what to do is higher than the likelihood I will if I just make a wild-ass guess.

So tell me--now that you have the benefit of knowing what I do--what you think it’s possible for you to do that might produce the effects I’ve been describing to you. Indeed, tell me four or five such things, and we’ll talk them through.

Then I will again do what I am equipped to do.  I will help you set up your communication operation in a way that is suited to generating evidence that will help you assess whether the things it occurred to you might work really are working. And just as important, help you recognize why they have and haven’t—so that you can refine and adjust and extend.

Then, I’ll add that while I’m genuinely willing to help in this way, the only condition I myself would impose on assisting is that this person agree to share what we learn from this exercise with anyone else interested in helping to promote the same goal. Because the situation you are in, you agree, is both maddeningly familiar and bad. If enough of the others who had been in this kind of situation had done what I’m proposing and shared the results—each building on what the others had learned—then maybe you, and me and millions of others wouldn’t still be in this situation, or in it so often, and on so many issues. . ..

2.  So—that was the general part!

On the specific.

I am going to be modest, and in an even more “local” way here.  I don’t know enough of the background to have an opinion on whether the Guardian columnist is right about Paterson.

I will say, though, that I do have an opinion—a very strong one!—that the columnist is right to be thinking along the lines reflected in his essay.

For sure there is “more to it” than just getting “the information” out. In addition, the “more” includes things in the nature of the ones that the columnist emphasizes. For sure, those engaged in communicating need to address those on both sides in a manner that avoids conveying any sense that those on the other side are “stupid” or “anti-science”; absurd! Absurd in the U.K., absurd in the U.S., absurd in every nation that has had the benefit of passing over the threshold of social development that marks a society’s entry into the privileged domain of liberal market democracy. That sort of reckless, obnoxious talk is a form of science communication pollution—or in any case, should be reserved for the serious occasions when we are looking at the real thing.

Those engaged in the “debate” also have to show that they recognize why other reasoning citizens feel differently from them.

They have to demonstrate too—by seizing every opportunity that presents itself—that they are themselves not “precommitted,” and are thus willing to take seriously claims and proposals one might have expected them to resist. Also that they are unwilling to tolerate any of their own number engaging in distortions of fact.

The columnist is thinking, and demonstrating how to think clearly, about those issues.

But I don’t know if he has cause to see Paterson’s proposal for a “debate” as insensitive to these sorts of concerns.  I just don’t know enough to know.

Also—and I’d be shocked if the columnist disagreed—while it’s foolish to carry on as if “the facts,” the “evidence,” the “science” were all there were to it (a form of presentation that evinces contempt for those with whom one disagrees; one is implying, necessarily, that they are “idiots” or “liars”), it certainly is part of what there is to it!

Indeed, it is the most important part. We are—or at least I am—motivated by the goal of assuring that the best available decision-relevant science is actually made use of by all those whose welfare it can enhance. The problem of a polluted science communication environment is that it makes it so much harder for people to recognize what the best available decision-relevant science is and what its significance is for them.

What its significance is, is for them to decide; as free, reasoning individuals and citizens.

But what free and reasoning person is confident he or she can reliably see what the best evidence is or what it implies given his or her values through the toxic fog of cultural recrimination that pervades issues like climate change, nuclear power, and—in the UK, I gather –GM foods?

So I think Paterson is surely right to want the UK to engage the best available information. And to want to be sure that citizens can recognize what the best available evidence is.

That’s not a goal anyone could be criticized for. The only issue involves means.

And I’m so useless, sadly, on that for you!  I don’t know whether the means contemplated by Paterson are the wrong ones.

I’m sure too, though, the columnist would agree that the right means of promoting informed public engagement with GM foods in the UK are not “obvious” but rather something that requires the sort of evidence-based orientation that I described in part 1.

Well, you asked!  That’s my reaction.

And thanks again for giving me something very interesting to think about.

 

Friday
Jun 21, 2013

How religiosity and science literacy interact: Evolution & science literacy part 2

This is the second of two posts on science literacy and evolution.

And religion.

And liberal democratic society as the naturally congenial but sometimes precariously raucous—or maybe better, simultaneously congenial and precarious because naturally raucous—home for science.

And how the common misunderstanding of what public “disbelief” in “evolution” truly signifies can actually interfere with popular dissemination of scientific knowledge.  Plus compromise norms of respect for cultural pluralism that are essential to the practice of liberal democracy.

See? Get it?

Okay, well, in the last post I described the vast body of long established but persistently--weirdly--ignored work that social scientists have amassed on the relationship between public “disbelief” in evolution and public understanding of evolution and other basic elements of science.

That work shows that there  isn't any relationship. What people say they “believe” about evolution is a measure of who they are, culturally.  It’s not a measure of what they know about what’s known to science.

Indeed, many people who say they “believe” in evolution don’t have the foggiest idea how the modern synthesis hangs together. Those who say they “disbelieve” are not any less likely to understand evolutionary theory--but they aren't any more likely to either.

That so few members of the public have a meaningful understanding of the workings of genetic variance, random mutation, and natural selection (the core elements of the modern synthesis) is a shame, and definitely a matter of concern for science education.

But it’s a problem about what people “know” and not what they say they “believe.” What people say they “believe” and what they “know” about evolution are vastly different things. That's what the ample scientific evidence on public understandings of science shows.

In this post I want to add a modest increment of additional evidence corroborating this important point.

The evidence has to do specifically with the relationship between religion, science literacy, and belief in evolution.

The evidence is from a survey of 2,000 US adults recruited and stratified in a manner designed to assure national representativeness. 

The survey instrument included the NSF science indicators.

It also contained various measures of religiosity, including regularity of church attendance; regularity of prayer; and perceived “importance of God” in one’s life. These cohered in a manner that enabled them to be formed into a reliable “religiosity” scale.
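For the curious, “cohered in a manner that enabled them to be formed into a reliable scale” refers to an internal-consistency check of the sort sketched below. The data here are simulated stand-ins for the three indicators, not the survey data themselves.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Simulated stand-ins for the three indicators (church attendance,
# frequency of prayer, importance of God), all loading on one latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = np.column_stack([latent + rng.normal(scale=0.5, size=500) for _ in range(3)])
print(round(cronbach_alpha(items), 2))  # alpha well above 0.7: items form one scale
```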

And the survey contained an item that Gallup and other pollsters routinely use to measure the public’s “beliefs” about evolution.

What do these data show?

Well, I’ll state in summary form what I regard as the findings of interest, and then supply the supporting details:

1. Neither the “Evolution” nor the “Big Bang” item in the NSF’s "Science Indicators" battery can plausibly be viewed as reliably measuring “scientific literacy” in subjects who are even modestly religious.

2. When subjects who are highly science literate but highly religious answer “False” to the NSF Indicator’s Evolution item, their response furnishes no reason to infer that they lack knowledge of the basic elements of the best scientific understanding of evolution.

3. For respondents who are below average in religiosity, a high score in “science literacy” predicts a higher probability of “believing” in “Naturalistic Evolution”—and so does a low score!

4. For those who are above average in religiosity, a high score in science literacy doesn’t predict a higher probability of believing in Naturalistic Evolution. But it does predict a higher probability of believing in Theistic Evolution.

5.  A higher score in science literacy predicts a lower probability of believing in Young Earth Creationism—whether respondents are below or above average in religiosity.

Okay. Here are the specifics.

1. In general, religiosity (measured, as I said, by aggregating items on church attendance, frequency of prayer, and perceived personal importance of God) is correlated negatively with science literacy.

But the effect is modest. The large overlap in the density distribution plots to the left makes it clear that the portions of the population “above” and “below average” in religiosity (“AARs” and “BARs,” let’s call them) both comprise individuals with a wide range of scores on the NSF science literacy battery.

Or at least they do when one leaves Evolution and Big Bang out of the tally, as the NSF itself decided to do in 2010, and as I have here. To make the science literacy scale more reliable and discerning, I’ve added items from the Indicators' “science process” battery, which tests knowledge relating to probability and the validity of experimental methods.

Consider, though, how AARs and BARs scoring in the top 50% of the science literacy test so measured respond to Evolution and Big Bang:

The difference in the percentages of the two moderately “science literate” groups who answer “true” to these questions is stunningly high. 

Now one can use even more intricate statistical tests—ones involving, say, Cronbach’s alpha, factor analysis, and structural equation modeling—to convincingly show that Evolution and Big Bang are not measuring the same latent proficiency in acquiring scientific knowledge as are the remaining NSF Indicator items. 

But nothing more intricate than this discrepancy in the performance of modestly science literate AARs and BARs is necessary to see that these two items aren’t a valid measure of science literacy in the former.
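(For those who want the recipe for that simple check, here is a minimal sketch on simulated data; the column names and the data-generating process are hypothetical, deliberately built to exhibit the pattern at issue.)

```python
# Minimal sketch: among equally science-literate respondents, compare the
# fraction answering "true" to Evolution for above- vs. below-average
# religiosity groups. Simulated data; names are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "sci_lit": rng.normal(size=n),      # scale score sans Evolution & Big Bang
    "religiosity": rng.normal(size=n),  # the aggregated religiosity scale
})
# Build in the pattern at issue: answers track religiosity, not literacy.
df["evolution"] = (rng.random(n) < np.where(df["religiosity"] > 0, 0.25, 0.85)).astype(int)

top_half = df[df["sci_lit"] >= df["sci_lit"].median()]  # equally "science literate"
pct_true = top_half.groupby(top_half["religiosity"] > 0)["evolution"].mean()
print(pct_true.rename(index={False: "BARs", True: "AARs"}))  # big gap => identity, not knowledge
```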

2. The NSF Indicators test of science literacy is far from perfect, but I think it’s reasonable to infer that people who score above average have acquired more understanding of basic science knowledge than those who score below average.

I doubt that a majority of BARs who score in the top 50% of the NSF Indicator battery (sans Evolution and Big Bang and avec the process items) know the basic elements of the theory of evolution, including the role that genetic variance, random mutation, and natural selection play in it. 

But I think more of them are likely to understand those things than BARs who score in the bottom 50%.

By the same token, there’s reason to believe that AARs who score in the top 50% on the NSF science literacy test are more likely to have acquired an elementary knowledge of evolutionary theory than those—BARs or AARs—who score in the bottom 50%.   

Nothing in how the above-average science literacy AARs answer the Evolution item furnishes any reason to doubt this. How they respond to that item, I’ve just pointed out, is not, for them at least, a measure of what they know about science.  And in any case, as has been established by researchers on multiple occasions, there’s zero correlation between whether one says one “believes in” evolution and whether one can give a passable account of the modern synthesis.

3. Now let’s consider what we can learn from the responses to the “popular opinion poll” item on beliefs in evolution.

That item asks respondents to indicate “which one of the following statements comes closest to your views on the origin and development of human beings—” 

  • Humans developed over millions of years from less advanced forms of life, but God guided this process
  • Human beings have developed over millions of years from less advanced forms of life, but God had no part in this process; or
  • God created human beings pretty much in their present form at one time within the last 10,000 years or so." 

Let’s call these responses “Theistic Evolution,” “Naturalistic Evolution,” and "Young Earth Creationism," respectively.

Theistic Evolution was the most popular response, but it was supported by only a plurality (38%). Young Earth Creationism was second and Naturalistic (or "Godless") Evolution third, but the proportions who selected each differed by only a slight amount (32% vs. 29%, respectively).

These numbers, by the way, differ a bit from what Gallup tends to report. The percentage selecting Theistic Evolution is consistent with Gallup's. But Godless Evolution runs closer to Young Earth Creationism here than it does in Gallup polls.

What to make of this? Well, I’ll write a blog soon about the validity of on-line public opinion samples. But suffice it to say that based on the predictive accuracy of surveys conducted by YouGov, the premier on-line survey firm that recruited the sample for this study, and surveys conducted by Gallup in the 2010 and 2012 elections, YouGov is probably getting closer to the “true” general population values.

What we are interested in, though, is how science literacy and religiosity influence selection of these responses.

Consider first the relationship between these responses & science literacy.

Whoa ... the Jesus fish symbol popped out of my regression!

Maybe not shocking, but note that support for Naturalistic Evolution peaks at only about 55% even among the most science literate. The relationship between support for that position and science literacy, moreover, is “U”-shaped—higher at both the low and high ends. This relationship was confirmed by a multinomial logistic regression with appropriate quadratic terms; the fitted values from that regression are what I’m graphing (these plots are very true to what one would see in the “raw” data).
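(For the statistically curious, here is a minimal sketch of that modeling step on simulated data. statsmodels' MNLogit stands in for whatever package actually fit the model; the data-generating process is invented purely for illustration, and the literacy-by-religiosity interaction anticipates the next step.)

```python
# Minimal sketch: multinomial logit of belief category on science literacy
# (with a quadratic term) and religiosity. Simulated, illustrative data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
sci_lit = rng.normal(size=n)
religiosity = rng.normal(size=n)

# Hypothetical utilities: 0 = Theistic, 1 = Naturalistic, 2 = Young Earth.
utils = np.column_stack([
    np.zeros(n),
    0.5 * sci_lit + 0.3 * sci_lit**2 - 1.0 * religiosity,
    -0.8 * sci_lit + 0.8 * religiosity,
])
probs = np.exp(utils) / np.exp(utils).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

X = sm.add_constant(pd.DataFrame({
    "sci_lit": sci_lit,
    "sci_lit_sq": sci_lit**2,            # quadratic term for the "U" shape
    "religiosity": religiosity,
    "lit_x_rel": sci_lit * religiosity,  # lets literacy's effect vary with religiosity
}))
fit = sm.MNLogit(y, X).fit(disp=0)
fitted = fit.predict(X)  # n x 3 fitted probabilities to plot against sci_lit
```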

Now add religiosity. The following plots contrast the probabilities that AARs and BARs will select one or another of the responses to the popular pollster item. They are derived from the same multinomial logistic regression, which confirmed that the impact of science literacy on the probability of selecting one response or another varies depending on level of religiosity.

It’s clear that the “U”-shaped relationship between science literacy and believing in Naturalistic Evolution is being driven by BARs.

In other words, BARs are more likely to believe in Naturalistic Evolution as they become either extremely science literate or extremely science illiterate!

Is this a surprise? Well, I wasn’t expecting this. My inspection of the data was pretty much exploratory, without strong hypotheses.

But I was reminded of a finding in what I regard as one of the very best studies of how high-quality instruction in evolutionary theory generates improvements in knowledge but not changes in belief.

In the study, Anton Lawson and collaborators found that high school students, particularly those scoring highest in critical reasoning skills, readily acquired knowledge of various aspects of evolution through instruction, but that acquisition of such knowledge did not produce a corresponding shift in belief among the students who began as nonbelievers.  

Nevertheless, the subgroup of such students who did back away from two particular beliefs hostile to naturalistic evolution (that the “living world is controlled by a force greater than humans” and that “all events in nature occur as part of a predetermined master plan”) consisted of the students who scored the lowest in critical reasoning skills. 

Speculating on why, Lawson et al. noted that “experience tells us that people change their beliefs for other than rational reasons. For example, hearing the opinion of an acknowledged authority figure could cause one to change a belief. Perhaps intuitive [students] are more likely than reflective students to change their beliefs for this reason.”

Lawson et al. don’t themselves explicitly suggest this, but a consistent conjecture might be that students who are higher in critical reasoning skills might be more inclined to push back on identity-threatening “beliefs” (even while taking on more knowledge) than those who are less reflective. That would be consistent with findings that motivated reasoning can be amplified by science literacy and cognitive reflection.

Someone should do a study to test that hypothesis!

4.  For AARs, in contrast, an increase in science literacy does not predict belief in Naturalistic Evolution. On the contrary, it seems to predict a slight decrease, although the effect is pretty much zero for all but those AARs whose scores are quite low.

So much for the idea that “disbelief” in evolution is a sign of low science literacy.  It isn’t.  “Disbelief” is just as consistent with being high in science literacy as low.

The only thing “disbelief” in Naturalistic Evolution reliably signifies is that one is religious.  This is consistent with the hypothesis that evolution “beliefs” are actually measures of cultural identity (as reflected in religiosity).

This conclusion is strongly corroborated by the relationship between science literacy and the increased probability of believing in Theistic Evolution among AARs. Offered the opportunity—as they aren’t in the NSF Science Indicators science knowledge battery—to select a position simultaneously consistent with “belief” in evolution and religious identity, the most science literate AARs grab hold of it!

5. Indeed, those same subjects—AARs who score high in science literacy—are less likely to espouse Young Earth Creationism than their less science literate counterparts.

What does this tell us? I suppose other interpretations are possible, but I’d say that AARs high in science literacy are in fact eager to affirm their “belief” in evolution, so long as they can be presented with a means of doing so that doesn’t denigrate their cultural identities.

Not surprisingly, BARs are also less likely to express support for Young Earth Creationism as they become more science literate.

Support for Young Earth Creationism is associated disproportionately with being simultaneously above average in religiosity and below average in science literacy.

* * * * *

Some concluding thoughts:

1. “Disbelief” in evolution doesn’t reflect a deficiency in science literacy or shortcomings in science education in our society.  

I think it is very reasonable to think members of our society are not as science literate as they should be, and also that our education system must do better in imparting scientific knowledge to citizens generally. 

But it’s wrong to think that the level of “disbelief” in evolution is evidence of those things.  It’s wrong to think that because that view is contrary to empirical evidence.

The evidence that many researchers have compiled, and that I’ve added to in a very modest way here, shows overwhelmingly that an individual's unwillingness to profess “belief” in evolution doesn't indicate science illiteracy or unfamiliarity with the rudiments of evolutionary theory.

It is a measure of her cultural identity. What saying “I don’t believe in evolution” means, culturally speaking, is that one belongs to a community whose members subscribe to a particular set of understandings of the best way to live.

2.  Those dedicated to the critical task of promoting scientific literacy, including public knowledge of the best scientific understanding of evolution, should not be focusing on what percentage of the population says they “believe” in evolution.

They shouldn’t be focusing on that because that information tells us nothing about how much scientific knowledge or even knowledge of evolution the public has.  Those who want to test how well society is doing in imparting knowledge of evolution should be measuring instead what fraction of the population can give a cogent account of genetic variance, random mutation, and natural selection. It’s pitifully small, among both those who say they “believe” in evolution and those who say they don’t.

But even more important, those who want to promote public acquisition of scientific knowledge should avoid making professions of “belief” in evolution their aim because doing so is much more likely to deter than promote acquisition of basic scientific knowledge.

People who have a religious identity—who include plenty of science literate people and people capable of becoming even more so—see profession of “belief” as denigrating their cultural identities.  Naturally, then, they will see the demand that they not only learn but publicly affirm their "belief” in evolution as an attack on their community by members of another community who harbor a shared understanding of the best life hostile to theirs.

They’ll resent that.  And with good reason. It's appropriate--absolutely essential, even--that a liberal democracy oblige those who furnish the public good of education to impart to people of all cultural identities the best available understanding of how the universe works, including the career of life on earth.  But citizens who make it their business to force others who have cultural views different from theirs to submit to purely symbolic rituals of identity-abnegation are engaged in a noxious, fundamentally illiberal form of conduct.

Such behavior, moreover, predictably breeds motivated resistance to acquiring knowledge of what science knows. Fear of the loss of status associated with "assenting" to facts symbolically linked to the identity of a rival cultural group is exactly what blocks citizens from converging on the best scientific evidence on issues like climate change, nuclear power, the HPV vaccine, and other culturally contested policies.

In their study of how effectively imparting knowledge of evolutionary theory does not produce “belief,” Anton Lawson & William Worsnop conclude:

Of course, every teacher who has addressed the issue of special creation and evolution in the classroom already knows that highly religious students are not likely to change their belief in special creation as a consequence of relative brief lessons on evolution. Our suggestion is that it is best not to try to do so, not directly at least. Rather, our experience and results suggest to us that a more prudent plan would be to utilize instruction time, much as we did, to explore the alternatives, their predicted consequences, and the evidence in a hypothetico-deductive way in an effort to provoke argumentation and the use of reflective thought. Thus, the primary aims of the lesson should not be to convince students of one belief or another, but, instead, to help students (a) gain a better understanding of how scientists compare alternative hypotheses, their predicated consequences, and the evidence to arrive at belief and (b) acquire skill in the use of this important reasoning pattern-a pattern that appears to be necessary for independent learning and critical thought.

This is a sensible prescription for those who (very appropriately!) want to promote the widest dissemination of basic science knowledge in the general public.

But it also happens to be a prescription consistent with the basic liberal injunction to respect the entitlement of individual citizens to freely use their own reason both to understand what is known by science and to decide for themselves what constitutes a virtuous life.

The convergence of the two is not any sort of accident.  It reflects a deep truth about the reciprocal affinity of science and political liberalism.

Wednesday
Jun192013

What does "disbelief" in evolution *mean*? What does "belief" in it *measure*? Evolution & science literacy part 1

The idea that popular “disbelief in evolution” indicates a deficiency in “science literacy” is one of the most oft-repeated but least defensible propositions in popular commentary on the status of science in U.S. society.

It’s true only if one makes the analytically vacuous move of defining science literacy to mean “belief in evolution.”

It’s false, however, if one is interested in understanding, as an empirical matter, either what members of the public know about what is known to science or what the social meaning of “belief” in evolution is for members of culturally diverse groups.

Ultimately, I want to offer up some original data that helps to make my meaning clear.

But let’s start with some science of science communication basics. I’d be tempted to say they are ones that bear repeating over and over and over if I didn’t recognize that the persistence of disregard for them among popular commentators can’t plausibly be explained by the failure of those who have made or who are familiar with these findings to point them out time and again.

I start with these well-established findings, then, just so it will be clear what I see as the modest increment of corroboration and refinement to be added with the new data I'll describe.

Getting clear on what’s already known is what I’ll do in this post, which is part 1 of a 2-part series on evolution, ordinary science intelligence, religion, and (ultimately) how all of these are intertwined with the central constitutional difficulty of the Liberal Republic of Science. Part 2 is where I’ll get to the original data.

First, “believing in evolution” is not the same as “understanding” or even having the most rudimentary knowledge of what science knows about the career of life on our planet. Believing and understanding are in fact wholly uncorrelated.

That is, those who say they “believe” in evolution are no more likely to be able to give a passable—as in high school biology passing grade—account of “natural selection,” “random mutation,” and “genetic variation” (the basic elements of the “modern synthesis” in evolutionary theory) than those who “disbelieve.” Indeed, few people can.

Those who “believe,” then, don’t “know” more science than “nonbelievers.” They merely accept more of what it is that science knows but that they themselves don’t understand (which, by the way, is a very sensible thing for them to do; I’ve discussed this before).

Second, being enabled to understand evolution doesn’t cause people to “believe” in it.

It’s possible—with the aid of techniques devised by excellent science educators—to teach a thoughtful person the basic elements of evolutionary theory! Everyone ought to be taught it, not only because understanding this process enlarges their knowledge of all manner of natural and social phenomena but because seeing how human beings came to understand this process furnishes an object lesson in the awe-inspiring power of human beings to acquire genuine knowledge by applying their reason to observation.

But acquiring an understanding of evolution—that is, a meaningful comprehension of how the ferment of genetic variance and random mutation when leavened with natural selection endows all manner of life forms with a vital quality of self-reforming resilience—doesn’t make someone who before that time said they “disbelieved” evolution now say they “believe” it.

Empirical studies—ones with high school and university students—have shown this multiple times. Believe it or not. But if not, you are the one closing your mind to insight generated by the application of human reason to observation.

Third, what people say they “believe” about evolution doesn’t reliably predict how much they know about science generally.

This is one of the lessons learned from use of the National Science Indicators.  

The Indicators, which comprise a wide-ranging longitudinal survey of public knowledge, attitudes, and practices, offer a monumentally useful font of knowledge for the study of science and society. Indeed, they are a monument to the insight and public spirit of the scientists (including the scientist administrators inside the NSF) who created and continue to administer it.

Integral to the Indicators is a measure of “science literacy” that has been standardly employed in the social sciences for many years. The Indicators include a “knowledge” battery—an inventory-like set of “facts” such as the decisive significance of the father’s genes in determining the sex of a child and the size of an electron relative to that of an atom.

The Indicators include two true-false items, which state “human beings, as we know them today developed from earlier species of animals,” and “the universe began with a huge explosion,” respectively. Test-takers who consistently get 90+% of the remaining questions on the NSF test correct are only slightly more than 50% likely to correctly answer these questions, which are known as “Evolution” and “Big Bang” respectively.

That tells you something, or does if you are applying reason to observation: it is that “Big Bang” and “Evolution” aren’t measuring the same thing as the remaining items. In fact, research suggests—not surprisingly—that they are measuring a latent or unobserved “religiosity” disposition that is distinct from the latent knowledge of basic science the remaining questions are measuring.

What people are doing, then, when they say they “believe” and “disbelieve” in evolution is expressing who they are. Evolution has a cultural meaning, positions on which signify membership in one or another competing group.

People reliably respond to “Evolution” and “Big Bang” in a manner that signifies their identities.  Moreover, many of the people for whom “false” correctly conveys their cultural identity know plenty of science.

Accordingly, many social scientists interested in reliably measuring how disposed members of the public are to come to know what’s known by science, particularly across place and time, have proposed dropping “Big Bang” and “Evolution" -- not from the survey regularly conducted by the NSF in compiling the Indicators, but from the scale one can form with the other items to measure what people know about what's known to science. 

This proposal has raised political hackles. How can one purport to measure science literacy and leave evolution and the big-bang theory of the origins of the universe out, they ask?  Someone who doesn’t know these things just is science illiterate!

Well, yes, if you simply define science literacy that way.  Moreover, if you do define it that way, you’ll be counting as “science literate” many people who harbor genuinely ignorant, embarrassing understandings of how evolution works.

Plus you’ll necessarily be dulling the precision of what is supposed to be an empirical measuring instrument for assessing what is known—since people who do know many, many things will “say” they “don’t believe” in evolution. They'll say that even if they -- unlike the vast majority of the public who say they "believe" in evolution--are able to give an admirably cogent account of the modern synthesis.

Indeed, you’ll be converting what is supposed to be a measure of one thing—how much scientific knowledge people have acquired--into a symbol of something else: their willingness to assent to the cultural meaning that is conveyed by saying “true” to Evolution and Big Bang -- something many people do for just that reason, without having any real comprehension of the science those items embody and without even doing very well on the remainder of the NSF Indicator battery.

Even then, the resulting “scale” won’t be a very reliable indicator of “identity,” since most of the remaining questions bear no particular cultural meaning for people whose identities are denigrated by answering “true” to Big Bang and Evolution, and thus don’t reliably single out people of opposing cultural styles.

But insisting that the measure that social scientists use to study “science literacy” include Big Bang and Evolution under these circumstances will still convey a meaning.

It is that the enterprise of science is on one side of a cultural conflict between citizens whose disagreement about the best way of life in fact has nothing to do with the authority of science’s way of knowing, which in fact they all accept.

A “science literacy” test that insists that people profess “belief” in propositions that citizens all understand to be expressions of cultural identity is really a pledge of allegiance, a loyalty oath to a partisan cultural orthodoxy.

Steadfastly insisting that the state teach its citizens what science genuinely knows (about evolution, the origins of the universe, and myriad other things), and even more critically how science comes to know what it does, is essential to enabling culturally diverse people to attain happiness by means of their own choosing.

But insisting that they pledge allegiance to a particular cultural orthodoxy doesn't advance any of those ends.  Indeed, it subverts the very constitution of the Liberal Republic of Science.

 

Part 2.

Thursday
Jun132013

Science literacy & cultural polarization: it doesn't happen *just* with global warming, but it also doesn't happen for *all* risks. Why?

In one CCP study, we found that cultural polarization over climate change is magnified by science literacy (numeracy, too). That is, as culturally diverse (but perfectly ordinary, and not particularly partisan) members of the public become more science literate, they don't converge on the dangers that global warming poses but rather grow even more divided.

Not what you'd expect if you thought that the source of the climate change controversy was a deficit in the public's ability to comprehend science.

But the culturally polarizing effect of science literacy isn't actually that unusual.  It's definitely not the case that all risk issues generate cultural polarization. But among those that do, division is often most intense among members of the public who are the most knowledgeable about science in general.

Actually, in the paper in which we reported the culturally polarizing effect of science literacy with respect to perceptions of climate change risks, we also reported data that showed the same phenomenon occurring with respect to perceptions of nuclear power risks.

Well, here are some more data that help to illustrate the relationship between science literacy and cultural polarization.  They come from a survey of a nationally representative sample of 2000 persons conducted in May and June of this year (that's right--even more fresh data! Mmmmmm mmmm!)




These figures illustrate how public perceptions of different risks vary in relation to science literacy. Risk perceptions were measured with the "industrial strength measure." Science literacy was assessed with the National Science Foundation's "Science Indicators," a battery of questions commonly used to measure general factual and conceptual knowledge about science. 

For each risk, I plotted (using a locally weighted regression smoother, a great device for conveying the profile of the raw data) the relationship between risk perception and science literacy for the sample as a whole (the dashed grey line) and the relationships between them for the cultural groups (whose members are identified based on their scores in relation to the means on the hierarchy-egalitarian and individualist-communitarian worldview scales) that are most polarized on the indicated risk.
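(Here is a minimal sketch of that smoothing step, using the lowess implementation in statsmodels; the arrays are simulated stand-ins for the actual survey variables.)

```python
# Minimal sketch: a locally weighted regression (lowess) smoother traced
# through simulated risk-perception data. Arrays are illustrative only.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
sci_lit = rng.normal(size=500)
# A 0-7 "industrial strength" risk item, sloping down with literacy plus noise:
risk_perception = 4 - 0.6 * sci_lit + rng.normal(scale=1.5, size=500)

smoothed = lowess(risk_perception, sci_lit, frac=0.6)  # frac sets the smoothing bandwidth
xs, ys = smoothed[:, 0], smoothed[:, 1]                # sorted x values and fitted curve to plot
```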

The upper-left panel essentially reproduces the pattern we observed and reported on in our Nature Climate Change study. Overall, science literacy has essentially no impact on climate-change risk perceptions. But among egalitarian communitarians and hierarch individualists--the cultural groups who tend to disagree most sharply on environmental and technological risks--science literacy has off-setting effects with respect to climate change and fracking: it makes egalitarian communitarians credit assertions of risk more, and hierarchical individualists less.

The same basic story applies to the bottom two panels. Those look at legalization of marijuana and legalization of prostitution, "social deviancy risks" of the sort that tend to divide hierarchical communitarians and egalitarian individualists.

Neither the level of concern nor the degree of cultural polarization is as intense as those associated with global warming and fracking. But the cultural disagreement does intensify with increasing science literacy (it seems to abate for legalization of prostitution among those highest in science literacy, although the appearance of convergence would have to be statistically interrogated before one could conclude that it is genuine).

What to make of this? Well, again, one interpretation --one supported by the study of cultural cognition generally--is that the source of cultural polarization over risk isn't plausibly attributed to a deficit in the public's knowledge or ability to comprehend science. 

Instead, it's caused by antagonistic cultural meanings that become attached to particular risks (and related facts), converting them into badges of membership in and loyalty to important affinity groups.

When that happens, the stake individuals have in maintaining their standing in their group will tend to dominate the stake they have in forming "accurate" understandings of the scientific evidence: mistakes on the latter won't increase their or anyone else's level of risk (ordinary individuals' opinions are not of sufficient consequence to increase or diminish the effects of climate change, etc); whereas being out of line with one's group can have huge, and hugely negative, consequences for people socially.

Ordinary individuals will thus attend to information about the risks in question (including, e.g., the position of "expert" scientists) in patterns that enable them to persist in holding beliefs congruent with their cultural identities.  Individuals who enjoy a higher than average capacity to understand such information won't be immune to this effect; on the contrary, they will use their higher levels of knowledge and analytic skills to ferret out identity-supportive bits of information and defend them from attack, and thus form perceptions of risk that are even more reliably aligned with those that are characteristic of their groups.

That was the argument we made about climate change and science comprehension in our Nature Climate Change study.  And I think it generalizes to other culturally contested risks.

But not all societal risks are contested. The number that are characterized by culturally antagonistic meanings--and that thus generate intense cleavages of the sort seen with climate change, nuclear power, gun control, the HPV vaccine, and (apparently now) fracking--is, as I've stressed before, quite small in relation to the number that aren't.

With respect to those issues, we shouldn't expect to see polarization generally. Nor should we expect to see it among those culturally diverse individuals who are highest in science literacy or in other qualities that reflect a higher capacity to comprehend quantitative information.

On the contrary, we should expect such individuals to be even more likely to be converging on the best scientific evidence.  They might be better able to understand such evidence themselves than people whose comprehension of science is more modest. 

But more realistically, I'd say, the reason to expect more convergence among the most science literate, most numerate, and most cognitively reflective citizens is that they are more reliably able to discern who knows what about what. 

The amount of decision-relevant science that it is valuable for citizens to make use of in their lives far exceeds the amount that they could hope to form a meaningful understanding of. Their ability to make use of such information, then, depends on the ability of people to recognize who knows what about what (even scientists need to be able to employ this form of perception and recognition in order to engage in the collaborative production of knowledge within their fields).

Ordinary individuals--ones without advanced degrees in science etc. -- are ordinarily able to recognize who knows what about what without difficulty, but one would expect that those who have a refined capacity to comprehend scientific information would likely do even better.

It's the degrading or disrupting effect that antagonistic meanings have on this recognition capacity--in citizens of ordinary and extraordinary science comprehension alike--that makes risks suffused with such meanings a source of persistent cultural dispute.

Okay, all of that is a matter of surmise and conjecture.  How about some data on the impact of science literacy on less polarizing issues?

I have to admit that I'm not as systematic as I should be -- as I think it is important for all who are studying the "science communication problem" to be -- in studying "ordinary," "boring," nonpolarizing risks.  

But consider this:

Here we see the impact of science literacy, generally and with respect to the cultural groups (this time egalitarian communitarians and hierarch individualists) who are most "divided," on GM foods and childhood vaccination.

In fact, the division is exceedingly modest.  To characterize the levels of disagreement seen here as reflecting "cultural polarization" would be extravagant.  As I've emphasized before, I see little evidence that these are culturally polarizing issues in the U.S., at least for the time being -- as opposed to casual assertions by commentators who should be more careful not to mistake for polarization the agitation of subsegments of the population who are disposed to dramatic, noisy gestures but who are actually very small and quite remote from the attention of the ordinary, nonpolitical member of the public.

Moreover, with respect to both issues, science literacy tends -- in general and among the modestly divided cultural groups -- to reduce concern about risk (again, a little "blip" like the one at the extreme science-literacy end of "egalitarian communitarians" in the fracking graph is almost certainly just noise, statistically speaking; if we could find the one or two responsible survey respondents, they might in fact be unrepresentatively noisy on this issue).

That's not "smoking gun" evidence that science literacy tends to improve the public's use of decision-relevant science on societal risks for nonpolarizing issues.

For that, it would be useful to have more evidence of public opinion, on risks that provoke even less division and on which the evidence is very very clear (it is on vaccines; I am inclined, too, to believe that the evidence on GM foods suggests they pose exceedingly little risk and in fact offset myriad others, from ones associated with malnutrition to crop failure induced by climate-- but I feel I know less here than I do about vaccines and am less confident).

But the "picture" of how science literacy influences public opinion vaccines and GM foods-- two risk issues that aren't genuinely culturally polarizing -- is strikingly different from the one we see when we look at issues like climate change, or nuclear power, or fracking, where the toxic fog of antagonistic meanings clearly does impede ordinary citizens' ability to see who knows what about what.

Science comprehension -- knowledge of important scientific information but even more important the habits of mind that make it possible to know things in the way science knows them -- is intrinsically valuable. Even if this capacity in citizens didn't make them better consumers of decision-relevant science, a good society would dedicate itself to propagating it as widely as possible in its citizens because in fact the ability to think is a primary human good.

But who could possibly doubt that science comprehension -- the greatest amount of it, dispersed as widely as possible among the populace -- would make it more likely that the value of decision-relevant science would be realized by ordinary people in their lives as individuals and as citizens of a democracy?  I certainly wouldn't question that!

The polarizing effect of science literacy on culturally contested issues like climate change is not evidence that popular science comprehension lacks value.

On the contrary, it is merely additional evidence of how damaging a polluted science-communication environment is for the welfare of the diverse citizenry of the Liberal Republic of Science.

Tuesday
Jun112013

Coin toss reveals that 56% (+/- 3%, 0.95 LC) of quarters support NSA's "metadata" monitoring policy! Or why it is absurd to assign significance to survey findings that "x% of American public" thinks y about policy z

Pew Research Center, which in my mind is the best outfit that regularly performs US public opinion surveys (the GSS & NES are the best longitudinal data sets for scholarly research; that's a different matter), issued a super topical report finding that a "majority" -- 56% -- of the U.S. general public deems it "acceptable" (41% "unacceptable") for the "NSA [to be] getting secret court orders to track calls of millions of Americans to investigate terrorism."
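(An aside on the headline numbers: the "+/- 3%"-style margin of error such reports carry is just the usual simple-random-sampling formula for a proportion. A minimal sketch, with a hypothetical sample size of 1,000 -- Pew's report states the actual n:)

```python
# Margin of error for a sample proportion (normal approximation).
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(0.56, 1000), 3))  # ~0.031, i.e., roughly +/- 3 points
```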

Polls like this -- ones that purport to characterize what the public "thinks" about one or another hotly debated national policy issue -- are done all the time.  

It's my impression -- from observing how the surveys are covered in the media and blogosphere-- that people who closely follow public affairs regard these polls as filled with meaning (people who don't closely follow public affairs are unlikely to notice the polls or express views about them).  These highly engaged people infer that such surveys indicate how people all around them are reacting to significant and controversial policy issues. They think that the public sentiment that such surveys purport to measure is itself likely to be of consequence in shaping the positions that political actors in a democracy take on such policies.

Those understandings of what such polls mean strike me as naive.

The vast majority of the people being polled (assuming they are indeed representative of the US population; in Pew's case, I'm sure they are, but that clearly isn't so for a variety of other polling operations, particularly ones that use unstratified samples recruited in haphazard ways; consider studies based on Mechanical Turk workers, e.g.) have never heard of the policy in question. Never given it a moment's thought.  Their answers are pretty much random -- or at best a noisy indicator of partisan affiliation, if they are able to grasp what the partisan significance of the issue is (most people aren't very partisan and can't reliably grasp the partisan significance of issues that aren't high-profile, perennial ones, like gun control or climate change).

There's a vast literature on this in political science. That literature consistently shows that the vast majority of the U.S. public has precious little knowledge of even the most basic political matters. (Pew -- which usually doesn't do tabloid-style "issue du jour" polling but rather really interesting studies of what the public knows about what -- regularly issues surveys that measure public knowledge of politics too.)

To illustrate, here's something from the survey I featured in yesterday's post.  The survey was performed on a nationally representative on-line sample, assembled by YouGov with recruitment and stratification methods that have been validated in a variety of ways and generate results that Nate Silver gives 2 (+/- 0.07)  thumbs up to.

In the survey, I measured the "political knowledge" of the subjects, using a battery of questions that political scientists typically use to assess how civically engaged & aware people are.

One of the items asks:

How long is the term of office for a United States Senator? Is it

(a) two years

(b) four years

(c) five years or

(d) six years?

 Here are the results:

Got that? Only about 50% of the U.S. population says "6 yrs" is the term of a U.S. Senator (a result very much in keeping with what surveys asking this question generally report).

How should we feel about half the population not knowing the answer to this question?

Well, before you answer, realize that less than 50% actually know the answer.

If the survey respondents here had been blindly guessing, 25% would have said 6 yrs.  So we can be confident the proportion who picked 6 yrs because they knew that was the right answer was less than 50% (how much less? I'm sure there's a mathematically tractable way to form a reasonable estimate -- anyone want to tell us what it is and what figure applying it yields here?).
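Here, in fact, is one standard way to form that estimate: the classic "correction for guessing," which assumes nonknowers pick uniformly at random among the k options (a simplification, obviously, since real guessing isn't always uniform).

```python
# Correction for guessing: observed accuracy mixes "knowers" (always right)
# with "guessers" (right 1/k of the time); solve for the knowing fraction:
#   p_correct = p_know + (1 - p_know)/k  =>  p_know = (p_correct - 1/k) / (1 - 1/k)
def fraction_knowing(p_correct: float, k: int) -> float:
    chance = 1 / k
    return (p_correct - chance) / (1 - chance)

print(fraction_knowing(0.50, 4))  # 0.333... -> only about a third actually know
```

On that (simplified) assumption, only about a third of respondents actually know the answer.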

And now just answer this question: Why on earth would anyone think that even a tiny fraction of a sample less than half of whose members know something as basic as how long the term of a U.S. Senator is (and only 1/3 of whom can name their congressional Representative, and only 1/4 of whom can name both of their Senators...) has ever heard of the "NSA's phone tracking" policy before being asked about it by the pollster? 

Or to put it another way: when advised that "x% of the American public believes y about policy z," why should we think we are learning anything more informative than what a pollster discovered from the opinion-survey equivalent of tossing thousands and thousands of coins in the air and carefully recording which sides they landed on?

Monday
Jun102013

What are fearless white hierarchical individualist males afraid of? Lots of stuff!

I haven't posted any data recently. And I haven't explored/exploded the "white male effect" (WME) in risk perception in a while either.  So let's pack some new data around WME & blow her to smithereens!

Actually, the "white male effect" is one of the most important phenomena -- one of the coolest findings ever -- in the study of public risk perceptions.

WME refers to the tendency of white males to express less concern with (seemingly) all manner of risk than do minorities and women. The finding was first observed by Flynn, Slovic & Mertz (1994) and thereafter systematically charted by Finucane, Slovic, Mertz, Flynn, & Satterfield (2000).

Lots of scholars have looked at it since, trying to figure out what explains it.  Does it reflect some sort of "hard wired" or "genetic" disposition on the part of women to be more concerned about the welfare of others? (Obvious question: if so, why are minority males more concerned?) Are men evolutionarily programmed to be more "risk seeking"? (Same obvious question.) Are white males less concerned because they are politically less vulnerable themselves than minorities and women? Or maybe white males are just "getting it right" -- because they are more educated, less vulnerable to cognitive biases?

None of the above is probably the best answer. 

What makes those explanations weak is that there really isn't a "white male effect."  Rather there's a white male hierarch individualist effect.

In a study in which I collaborated with Slovic, Braman, Gastil & Mertz (2007), we used the cultural cognition worldview scales as a magnifier to inspect more closely cultural influences observed in Finucane et al. (2000).

What we found, in effect, was that white hierarchical and individualistic males are so extremely skeptical of risks involving, say, the environment or (another thing we looked at) guns that they create the appearance of a  sample-wide "white male" effect.  That effect "disappears" once the extreme skepticism of these individuals (less than 1/6 of the population) is taken into account.  There isn't any WME among individuals who are egalitarian and communitarian, hierarchical communitarian or (in the case of environmental risks) egalitarian and individualistic in their outlooks.

This finding fit the hypothesis that "identity protective cognition" was driving WME.  Identity protective cognition is a form of motivated reasoning.  It describes the tendency of people to fit their perceptions of risk (and related facts) to ones that reflect and reinforce their connection to important affinity groups, membership in which confers psychic, emotional, and material benefits.  The study of cultural cognition reflects the premise that the latent group affinities measured with the "cultural worldview scales" we employ in our studies are the ones motivating risk perceptions in conflicts that polarize the U.S. public.

The sorts of things white hierarchical individualistic males are "unafraid of" are activities essential to the cultural roles they tend to occupy.  Among people who subscribe to that outlook, men attain status by occupying positions of authority in commerce and industry.  Gun possession plays an important role for men in such groups too--enabling hierarchical roles like father, protector, and provider and symbolizing individualistic (male) virtues like honor and courage and self-reliance.

Because the assertion that such activities are "dangerous" would justify restriction of them by the state -- and invite resentment and stigmatization of those individuals conspicuously identified with them -- hierarchical and individualistic white males have an especially powerful psychological incentive to resist such claims.

That was our conjecture-- one founded generally on Mary Douglas's and Aaron Wildavsky's "cultural theory of risk" -- and the evidence was more consistent with that than with other explanations, we suggested.  Other researchers have corroborated this hypothesis with related but distinct methods (that's a good thing; being able to verify a hypothesis with multiple methods furnishes assurance that the effect is really "there" and not an artifact of a particular way of trying to test for it).

But here's another thing-- or some more evidence, really.  If identity-protective cognition is at work, there's no reason to believe that white hierarchical individualist males will be uniformly more "risk dismissive" than other people.  

They'll be that way only with regard to private activities the regulation of which poses a threat to activities essential to their cultural status.  Where regulation itself poses such a threat, they should worry about the risks that such regulation poses.  Moreover, if we can find private activities that threaten their cultural identities, their stake in securing regulation of them should motivate them to be risk sensitive in regard to those activities!

And we see exactly that! I'll show you in brand new data, collected in April and May of this year.

But first let's use these fresh data (mmmm mmmm--don't you love the aroma of freshly regressed data?!) to observe the "classic" white male effect.

This figure illustrates the "effect" with regard to climate change:



Using the "industrial strength risk perception measure," we can see that white males are a lot less worried about climate change than "everyone else."

But consider this figure:

Click on me! Or I'll turn you into a white male hierarch individualist!

This graphic, which uses a Monte Carlo simulation to illustrate the results of a multivariate regression analysis, shows that the "white male effect" is being driven by the extreme climate change skepticism of white hierarchical individualistic males (who are, again, about 1/6 of the population).  There's no meaningful gender or race variance in the rest of the subjects in this nationally representative sample.
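(For readers who want to know what's under the hood: here is a minimal sketch of the general simulation recipe, assuming a fitted single-equation statsmodels result `fit` -- e.g., from OLS or logit -- and a covariate profile `x_profile` of interest; both names are hypothetical. Whether the original graphic used exactly this recipe is an assumption on my part, but this is the standard way to Monte Carlo a regression's predicted values.)

```python
# Minimal sketch: simulate a regression's predicted values by drawing
# coefficient vectors from their estimated sampling distribution.
import numpy as np

def simulate_predictions(fit, x_profile, n_sims=10_000, seed=4):
    """fit: fitted statsmodels result; x_profile: one row of covariate values."""
    rng = np.random.default_rng(seed)
    betas = rng.multivariate_normal(np.asarray(fit.params, dtype=float),
                                    np.asarray(fit.cov_params(), dtype=float),
                                    size=n_sims)
    preds = betas @ np.asarray(x_profile, dtype=float)  # simulated linear predictors
    # (for a logit or probit, push `preds` through the inverse link first)
    lo, hi = np.percentile(preds, [2.5, 97.5])
    return preds.mean(), (lo, hi)  # point estimate and 95% simulation interval
```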

Now consider a larger collection of risks:


Holy smokes!

These are the mean scores for white male hierarchical individualists and "everyone else" on a range of risks, the perceptions of which are all measured with the "industrial strength" measure.

What do we see?  Lots of cool things!

For one, we see that those "fearless" white hierarchical individualistic males aren't so brave after all.  Sure climate change doesn't scare them, but the potential impact of restrictions on handguns on the "health, safety, and prosperity" of members of our society sends chills up their spine.

Environmental and government regulations are, of course, scary to them too. Those can wreck the economy. Ask any hierarchical individualistic white male for evidence & he'll have no trouble supplying it -- just look at the financial collapse of 2008.

And let's hope that Obama -- who in the eyes of a hierarchical white male individualist likely can't be counted on to do much of anything good -- will hold firm on marijuana criminalization.  Most people don't think so, but the white male hierarchical individualist knows that the dangers to society from decriminalization would be devastating. 

And what do you know: guns certainly aren't dangerous ("people kill people" etc); but privately owned drones-- yow! Terrifying! (Mystery -- who is disgusted, and why, by drones -- half-solved.)

Hey there are some other cool things here too, don't you think?  Look at childhood vaccines. No one -- neither white hierarchical individualistic males nor anyone else -- is concerned.  A surprise only to those who believe what they read in the papers, where the ravings of a small sect regularly transmute into a "growing crisis of public confidence" in vaccines. (To anticipate comments: Yes, the small sect is an unreasoning, noxious health menace and should be opposed; but no, that doesn't mean that it's a sensible risk-communication strategy to mislead the public about the facts, which show no slippage in the last decade in childhood vaccination rates from their historic levels of well over 90%, and no meaningful increase in the "exemption" rate, which has remained < 1%.)

And here's something I wasn't expecting at all: Look at genetically modified foods.  No cultural dissensus--that's not new. But the apparent consensus that GM foods are risky--more, certainly, than global warming, and more, too, than anything except terrorism--that's a change relative to what I've observed in various surveys like this that I've done over the yrs.

Is that evidence that the effort to protect the science communication environment from being polluted on this issue is failing? Could be; although I still think that the most important thing is to avoid cultural polarization, since that's the form of pollution, I'm convinced, most toxic to the reasoning faculty that ordinary members of the public-- of all cultural outlooks -- use to discern what's known to science.

Okay-- that was fun, wasn't it?

And don't forget about the wildly popular Cultural Cognition Site game show "WSMD?, JA!"  Been a long time since we played that!

References

Douglas, M. & Wildavsky, A.B. Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers. (University of California Press, Berkeley; 1982).

 Finucane, M., Slovic, P., Mertz, C.K., Flynn, J. & Satterfield, T.A. Gender, Race, and Perceived Risk: The "White Male" Effect. Health, Risk, & Soc'y 3, 159-172 (2000).

Flynn, J., Slovic, P. & Mertz, C.K. Gender, Race, and Perception of Environmental Health Risk. Risk Analysis 14, 1101-1108 (1994).

Kahan, D.M., Braman, D., Gastil, J., Slovic, P. & Mertz, C.K. Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception. Journal of Empirical Legal Studies 4, 465-505 (2007).

McCright, A.M. & Dunlap, R.E. Bringing ideology in: the conservative white male effect on worry about environmental problems in the USA. J Risk Res, doi:   (2012).

McCright, A.M. & Dunlap, R.E. Cool dudes: The denial of climate change among conservative white males in the United States. Global Environmental Change 21, 1163-1172 (2011).

Nelson, Julie.  Are Women Really More Risk-Averse than Men?, INET Research Note (Sept. 2012).

Nelson, Julie.  Is Dismissing the Precautionary Principle the Manly Thing to Do? Gender and the Economics of Climate Change, INET Research Note (Sept. 2012)

Friday
Jun072013

Five theses on science communication: the public and decision-relevant science, part 2

This is the second part of a two-part series that recaps a talk I gave at a meeting of the National Academy of Science's really cool Public Interfaces of the Life Sciences Initiative.

The subject of the talk (slides here) was the public's understanding of what I called "decision relevant science" (DRS)--meaning science that's relevant to the decisions that ordinary members of the public make in the course of their everyday lives as consumers, as parents, as citizens, and the like.

Part 1 recounted a portion of the talk that I invited the audience to imagine came from a reality tv show called "Public comprehension of science--believe it or not!," a program, I said, dedicated to exploring oddities surrounding what the public knows about what's known to science.  The concluding portion of the talk, which I'll reconstruct now, presented five serious points --or points that I at least intend to be serious and be taken seriously--about DRS, each of which in fact could be supported by one of the three "strange but true" stories featured in the just-concluded episode of "Public comprehension of science--believe it or not!"

I. Individuals must accept as known more DRS than they can ever possibly understand

In the first story featured in the show, we learned that individuals belonging to that half of the US population that purports to "believe" in evolution are no more likely to be able to give a cogent account of the "modern synthesis" (natural selection, genetic variance, and random mutation) than those belonging to the half that asserts "disbelief."  In fact, very small proportions of either group can give such an account.

Thus, most of the people who quite properly accept evolution as "scientific fact" (including, I'm confident, the vast majority who view those who disbelieve in it as pitifully ignorant) believe in something they don't understand.

That's actually not a problem, though.  Indeed, it's a necessity!

The number of things known to science that it makes sense for a practical person to accept as true (that a GPS system, exquisitely calibrated in line with Einstein's theory of special relativity, will reliably guide him to where he wants to go, for example) far exceeds what such an individual could ever hope to comprehend in any meaningful way on his own. Life is too short.

Indeed, it will be a good deal shorter if, before accepting that it makes sense not to smoke, such a person insists on verifying for himself that smoking causes cancer -- or if, before taking antibiotics, he insists on verifying that they do in fact kill disease-causing bacteria but do not -- as 50% of the U.S. population thinks, "believe it or not!" -- kill viruses.

II. Individuals acquire the insights of DRS by reliably recognizing who has it.

Yet it's okay, really, for a practical, intelligent person not to acquire the knowledge that antibiotics kill only bacteria and not viruses. He doesn't have to have an MD to get the benefits of what's known to medical science.  He only has to know that if he gets sick, the person he should consult and whose advice he should follow is the doctor.  She's the one who knows what science knows there.

That's how, in general, individuals get the benefit of DRS--not by understanding it themselves but by reliably recognizing who knows what about what because they know it in the way that science counts as knowing.  

Why not go to a faith healer or a shaman when one has a sore throat -- or a cancerous lesion or persistent hacking cough? Actually, some very tiny fraction of the population does. But that underscores only that there really are people out there whose "knowledge" on matters of consequence to ordinary people's lives is not of a kind that science would recognize -- and that precious few people (in a modern liberal market society) treat them as reliable sources of knowledge.

Ordinary people reliably make use of all manner of DRS -- medical science is only one of many kinds -- not because they are experts on all the matters to which DRS speaks but because they are themselves experts at discerning who knows what's known to science.

III.  Public conflict over DRS is a recognition problem, not a comprehension problem.

Yet ordinary members of the public do disagree--often quite spectacularly--about certain elements of DRS. These conflicts are not a consequence of defects in public comprehension of science, however. They are a product of the failure of ordinary members of the public to converge in the exercise of their normal and normally reliable expert ability to recognize who knows what about what.

Believe it or not, one can work out this conclusion logically on the basis of information related in the "Public Comprehension of Science--Believe it or Not!" show.  

Members of the public, we learned, are (1) divided on climate science and (2) don't understand it (indeed, the ones who "believe" in it, like the ones who believe in evolution, generally don't have a meaningful understanding of what they believe).

But (2) doesn't cause (1).  If it did, we'd expect members of the public to be divided on zillions of additional forms of DRS on which they in fact are not.  Like the efficacy of antibiotics, which half the population believes (mistakenly) kill viruses.  

Or pasteurized milk.  No genuine cultural conflict over that, at least in the US.  And the reason isn't that people have a better grasp of biology than they do of climate science. Rather it's that there, as with the health benefits of antibiotics, they are reaching the same conclusion when they exercise their rational capacity to recognize who knows what science knows on this matter.  

Indeed, those of you who are leaping out of your seats with excitement to point out the freaky outlier enclaves in which there is a dispute about pasteurization of milk in the US, save yourselves the effort! What makes the spectacle of such conflicts newsworthy is precisely that the advocates of the health benefits of "raw milk" are people whom the media knows the vast run of ordinary people (the news media consumers) will regard as fascinatingly weird.

Because people acquire the insights of DRS by reliably recognizing who knows what science knows, conflicts over DRS must be ones in which they disagree about what those who know what science knows know.

This conclusion has been empirically verified time and again.  

On matters like the risks of climate change, the safety of nuclear power waste disposal, the effects of gun control on crime, and the efficacy and side effects of the HPV vaccine, no one (or no one of consequence, if we are trying to understand public conflict as opposed to circus sideshows) is saying "screw the scientists--who cares what they think!"

Rather, everyone is arguing about what "expert scientists" really believe. Using their normal and normally reliable rational powers of recognition, those on both sides are concluding that the view that their side accepts is the one consistent with "scientific consensus."

What distinguishes the small number of issues on which we see cultural polarization over DRS from the vast number on which we don't has nothing to do with how much science the public comprehends. Rather, it has everything to do with the peculiar tendency of the former to evade the common capacity enjoyed by culturally diverse citizens to recognize who knows what is known to science.

IV. The recognition problem reflects a polluted science communication environment.

A feature that these peculiar, recognition-defying issues share is their entanglement in antagonistic cultural meanings. 

For the most part, ordinary people exercise their capacity to recognize who knows what about what by consulting other people "like them."  They are better able to "read" people who share their particular outlooks on life; they enjoy interacting with them more than interacting with people who subscribe to significantly different understandings of the best way to live, and are less likely to get into squabbles with them as they exchange information. "Cultural communities" -- networks of people connected by intense emotional and like affinities -- are the natural environment, then, for the exercise of ordinary citizens' rational recognition capacity.

Ordinarily, too, these communities, while plural and diverse, point their respective members in the same direction.  Any such community that consistently misled its members about DRS wouldn't last long, given how critical DRS is to the flourishing -- indeed, the simple survival -- of those members.

But every now and again, for reasons that are not a complete mystery but that are still far from adequately understood, some fact -- like whether the earth is heating up -- comes to be understood as a kind of marker of cultural identity.  

The position one holds on a fact like that will then be experienced by people -- and seen by others (the two are related, of course) -- as a badge of membership in, and loyalty to, one or another cultural group.

At that point, reasonable people become unreasonably resistant to changing their minds--and for reasons that, in a sad and tragic sense, are perfectly rational.  

The stake they have in maintaining group-convergent beliefs will usually be much bigger than any they might have in being "right." Making a "mistake" on the science of climate change, e.g., doesn't affect the risk that any ordinary member of the public -- or anyone or anything she cares about -- faces: she just doesn't matter enough as a consumer, a voter, a public deliberator, etc. to make a difference.  But if she forms a view that is out of line with the one held by those who share her cultural allegiances, then she is likely to suffer tremendous costs--psychic, emotional, and material--given the function that positions on climate change perform in identifying to members of such groups who belongs and can be trusted.

These antagonistic meanings, then, can be viewed as a form of pollution in the science communication environment.  They enfeeble the normally reliable faculties of recognition that ordinary members of the public use to discern DRS.

People overwhelmingly accept that doctors and public health officials are the authorities to turn to for access to the health benefits of what's known to science, and ordinarily have little difficulty discerning what those experts believe and are counseling them to do.  But when facts relating to medical treatments become suffused with culturally antagonistic meanings, ordinary members of the public are not able to figure out what such experts actually know.

The US public isn't divided over the risks and benefits of mandatory vaccination of children for Hepatitis B, a sexually transmitted disease that causes a deadly form of cancer.  Consistent with the recommendation of the CDC and pediatricians, well over 90% of children get the HBV vaccination every year.

Americans are culturally divided, however, over whether children should get the HPV vaccine, which likewise confers immunity to a sexually transmitted disease (the human papillomavirus) that causes a deadly form of cancer. For reasons having to do with the ill-advised process by which it was introduced into the US, the HPV vaccine became suffused with antagonistic cultural meanings--ones relating to gender norms, sexuality, religion, and parental sovereignty.

Parents who want to follow the advice of public health experts can't discern what their position is on the HPV vaccine, even though it is exactly the same as it is on the HBV vaccine.  Experimental studies have confirmed that exposure to the antagonistic meanings surrounding the former makes them unable to form confident judgments about what experts believe about the risks and benefits of the HPV vaccine, even though the CDC and pediatricians support it to the same extent as they do the HBV vaccine and for the same reasons.

The antagonistic cultural meanings that suffuse issues like climate change and the HPV vaccine confront ordinary people with an extraordinary conflict between knowing what's known to science and being who they are. This toxic environment poses a singular threat to their capacity to make use of DRS to live happy and healthy lives. 

V. Protecting the science communication environment from contamination is a critical aim of the science of science communication.

Repelling that threat demands the development of a systematic societal capacity to protect the science communication environment from the pollution of antagonistic cultural meanings.

Technologies for abating the dangers human beings face are not born with antagonistic cultural meanings.  They acquire them through historical contingencies of myriad forms. Strategic behavior plays a role; but sheer accident and misadventure also contribute.

Understanding the dynamics that govern this pathology is a central aim of the science of science communication.  We can learn how to anticipate and avoid them in connection with emerging forms of practical science, such as nanotechnology and synthetic biology. And we can perfect techniques for removing antagonistic meanings in the remaining instances in which intelligent, self-conscious protective action fails to prevent their release into the science communication environment.

The capacity to reliably recognize what is collectively known is not some form of substitute for attainment of scientific knowledge.  It is in fact a condition of such attainment, within the practice of science and outside of it.

In discerning DRS, the public is in fact exercising the most elemental form of human rationality.

Securing the political and social conditions in which that faculty can reliably function is the most important aim of the science of science communication. 

Tuesday
Jun042013

"Public comprehension of science--believe it or not!": the public and decision-relevant science, part 1 

Gave talk yesterday at a meeting of the Public Interfaces of the Life Sciences Initiative of the National Academy of Sciences.  The aim of the Initiative is to identify various avenues—in education, in political life, and in civil society—for enlarging the role that the life sciences play in everyday life.

The Initiative is typical of the leadership role the NAS has fittingly assumed in integrating the practice of science with the scientific study of how ordinary citizens come to know what is known by science—a commitment on the Academy’s part that was highlighted in its Science of Science Communication Sackler colloquium in the Spring of 2012.

My talk was on how the public thinks about decision-relevant science. This is part 1 of 2. But slides for whole thing here.

As is well-known to readers of this blog, I believe that doing and communicating science are very different things, even when the sort of science being done is the science of science communication.  Indeed, I believe the “science communication problem”—the persistent failure of the availability of valid science to quiet public controversy over risks and other policy-relevant facts to which that science speaks in a compelling way—is a consequence of our society's failure to devise practices and construct institutions that recognize fully the significance of the communicating-doing distinction.

To effectively communicate this point, I thought I would demonstrate what strikes me—as someone who only does the science of science communication—as a clever way to communicate what I know to the public.

I told my audience that I would present the first part of my remarks in the style of a “reality tv” program or the like entitled, “Public comprehension of science—believe it or not!,” a show dedicated to sharing with viewers instances of the myriad “ ‘strange but true’ characteristics of the public’s knowledge of what science knows.”

This week’s episode (I told them) would feature three stories:

1.  Evolution: “believing,” “disbelieving” & understanding

About half of the general public in the U.S. does not “believe” that humans “evolved” from other animal species. They “believe” instead that humans were created, as is, by God.

This is not surprising news to regular viewers of this program—or likely to anyone else. We are reminded of this fact at least once a year by Gallup, which has been polling Americans about their “belief” in evolution—and reporting more or less the same result—for many many years.

The “strange but true” thing is this: the half of the U.S. population that does “believe” in evolution is no more likely than the half that doesn’t to be able to pass a high school biology test on the rudiments of how evolution works.

There is, researchers have found again and again, no correlation between whether someone says they “believe” in evolution and their understanding of the concepts of “natural selection,” “genetic variance,” and “random mutation”—the basic elements of the dominant, “modern synthesis” position in the science of evolution.

In fact, distressingly few of either the believers or disbelievers have an accurate comprehension of these dynamics.

And there’s another curious thing about “belief” & “disbelief” in evolution.

It’s definitely possible to teach people the basic elements of the modern synthesis, which are remarkably and elegantly simple. The evidence that supports them is reasonably straightforward too.

But imparting such understanding also has zero effect on the likelihood that those who then demonstrate basic comprehension of evolution say they “believe” in it! 

Researchers have demonstrated this multiple times, too, with both high school and college students.

Strange but true!

2.  Climate change risk perceptions: “fast” & “slow”

This week’s second story involves public comprehension of climate science.

The U.S. public doesn’t get it.

This was the conclusion of a very impressive 1992 study, which found that those members of the public who believed climate change was occurring tended to attribute it to holes in the ozone layer and other irrelevant phenomena.

When researchers re-did the study in 2009, the public was still woefully ignorant of elementary climate science. They found, of course, that a great many members of the public didn’t accept that global temperatures were increasing as a result of human CO2 emissions.

But even among the segment of the public who said they did accept this, the researchers found myriad, remarkable misunderstandings, including the belief that aerosol spray cans were one source of the problem and that cleaning up toxic waste sites would help to ameliorate it.

And here’s another thing.

The public tends to over-rely on cognitive heuristics in forming perceptions of risk. This is the theme, of course, of Daniel Kahneman’s Nobel Prize-winning work, and his excellent book Thinking, Fast and Slow.

Various commentators who draw on Kahneman’s work (but interestingly not Kahneman himself, to my knowledge) assert that “bounded rationality” of the sort documented in this work explains why members of the general public don’t universally share climate scientists’ concern about the dangers that climate change poses to human wellbeing.

But social science evidence has established that those members of the public who are the most science literate, and who score highest in measures of the disposition to use reflective modes of reasoning (the “slow” kind, in Kahneman’s typology) are in fact the most culturally polarized on climate change risks!

As members of the public become more science literate, more numerate, and the like, they don’t converge on what climate scientists know.  They just become more reliable “indicators” of what people who hold particular cultural values believe.

Believe it or not . . . .

3.  Antibiotics: consensus, scientific & public

The last story for this week concerns antibiotics.

There is really no meaningful public controversy—cultural or otherwise—over whether someone who is not feeling well should seek medical treatment, and should take antibiotics if his or her physician prescribes them. 

But 50% of the U.S. public believes that antibiotics kill viruses and not just bacteria.

This is a consistent finding in studies that administer the NSF’s “Science Indicators,” the standard “science literacy test” used to measure what members of the public know about basic science—not just in the U.S. but globally.

Now in fact, the question is a “true-false” one, and so one might conclude that members of the U.S. public are doing no better than chance in their responses here.
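
To see the “chance” reasoning concretely, here is a minimal sketch -- the respondent counts are hypothetical, purely for illustration -- of the standard way to test whether responses to a true-false item differ from coin-flipping:

```python
from scipy.stats import binomtest

# Hypothetical illustration: suppose 1,000 respondents answer the
# true-false antibiotics item and exactly 500 answer correctly.
n_respondents = 1000
n_correct = 500

# Two-sided exact binomial test against the 50% guessing rate.
result = binomtest(n_correct, n_respondents, p=0.5)
print(f"p-value: {result.pvalue:.3f}")  # ~1.0: indistinguishable from guessing
```

A 50% correct-response rate on a two-option item is exactly what blind guessing would produce, which is why the raw percentage by itself can’t distinguish knowledge from chance.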

But interestingly, U.S. respondents score consistently higher than members of the public from other countries, including Japan, Russia, South Korea, and the EU nations.  So really, we “know more” science than they do here.

Indeed, members of the public in the US tend to score higher on lots of items on the NSF science literacy test.  It really is tempting to say that the US is more science literate than the rest of the world!

Except that members of the rest of the world do so much better than we do on the NSF indicator item that asks whether humans evolved from other animals . . . .

But you know what that actually signifies? That the NSF item on “evolution” isn’t measuring the same thing as the rest of the test.  Those who consistently get 90+% of the other questions right are only slightly more likely than chance (50%) to answer the evolution question correctly.

Actually, that shouldn’t surprise you at this point: it follows, almost logically, from the first story in this show, which related that there is really no relationship between saying one “believes” evolution and being able to form an accurate scientific understanding of evolutionary theory.

Social scientists have demonstrated that the “evolution” question is actually not measuring the same “science comprehension” quality in people who take the NSF science literacy test as the other items.  It is measuring their religiosity.
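
The kind of item analysis behind that conclusion can be illustrated with a short simulation -- all loadings and sample sizes below are made up, not real NSF data. The diagnostic is the corrected item-total correlation: an item that measures the same latent disposition as the rest of the scale should correlate with the total score formed from the remaining items.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two hypothetical latent traits driving responses:
comprehension = rng.normal(size=n)
religiosity = rng.normal(size=n)

def logistic(x):
    return 1 / (1 + np.exp(-x))

# Nine items answered correctly more often by high-comprehension people ...
items = np.column_stack(
    [rng.binomial(1, logistic(comprehension)) for _ in range(9)]
)
# ... and an "evolution" item driven by (low) religiosity instead.
evolution = rng.binomial(1, logistic(-religiosity))

# Corrected item-total correlations: each item vs. the sum of the others.
r_ordinary = np.corrcoef(items[:, 0], items[:, 1:].sum(axis=1))[0, 1]
r_evolution = np.corrcoef(evolution, items.sum(axis=1))[0, 1]
print(f"ordinary item:  r = {r_ordinary:.2f}")   # substantially positive
print(f"evolution item: r = {r_evolution:.2f}")  # near zero
```

An item whose item-total correlation hovers near zero is adding noise, not signal, to the scale -- which is the psychometric sense in which the evolution item “isn’t measuring the same thing.”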

Yet proposals to exclude the evolution question from measures of “science literacy”  in studies that correlate science literacy with other attitudes tend to provoke significant controversy.  Critics say the item should be included even though it indisputably reduces the precision of the science literacy score as a measure of a latent science comprehension aptitude or disposition.

Sad but true. . . .

Next time: Five theses on public understanding and decision-relevant science, each of which can be illustrated using the three stories from this week’s episode of “Public Comprehension of Science—Believe it or Not!”

Not to give anything away, but if you think that what I’ve told you so far means (or even means that I think) the public is irrational, you are very wrong.

Wrong about what it means, and wrong about what public rationality and its relationship to decision-relevant science consist in. 

Part 2.

Thursday
May302013

Polarization on policy-relevant science is not the norm (the "silent denominator" problem)

Ever hear of the Formaldehyde Emissions from Composite Wood Products Act of 2010?

Didn't think so. 

As the Environmental Protection Agency explains, the Act (signed into law by President Obama on July 7, 2010, after being passed, obviously, by both Houses of Congress)

establishes limits for formaldehyde emissions from composite wood products: hardwood plywood, medium-density fiberboard, and particleboard. The national emission standards in the Act mirror standards previously established by the California Air Resources Board for products sold, offered for sale, supplied, used or manufactured for sale in California.

The legislation directs the EPA to promulgate implementing regulations relating to "labeling," "chain of custody requirements," "ultra low-emitting formaldehyde resins," "exceptions ... for products ... containing de minimis amounts of composite wood," etc.  The agency just issued proposed rules for notice & comment yesterday!

Why am I telling you about this?  Well, first of all, because I know you've never heard of this regulatory scheme (if you have, you are a freak and are proud of it, so the point I'm going to make still applies).

Because you haven't, the issue of formaldehyde regulation is absent from your mental inventory of risks managed through the application of scientific knowledge.

Because this law -- along with billions and billions (or at least 10^3's) of others informed by science -- is missing from your risk regulation inventory, there's a serious risk that you are overestimating the frequency with which risk issues provoke cultural polarization.

I'm sure some segment of the population somewhere is really freaked out by formaldehyde and another drinks a glass of it for breakfast every day just to prove a point. But these citizens are real outliers; whatever group-based conflict there might be about formaldehyde is nothing like the ones over climate change, nuclear power, HPV, guns, etc.

Very very very few risk and other policy issues that turn on science provoke meaningful cultural conflict. The ratio of polarizing to nonpolarizing issues of that sort is minuscule.

That doesn't mean that those issues get regulated in an optimal manner.  But it means that one of the largest obstacles to rational engagement with science in policymaking is absent -- and that's an undeniably good thing for enlightened self-government.

The science-informed policy issues that don't provoke controversy are, of course, boring.  That's why most people don't know about them.

But if you do notice and give some thought to them, a couple of interesting and important things will occur to you.

First, insofar as the number of science-informed policy issues that actually provoke cultural polarization is very small relative to the number that could, there must be something, and something strange, going on with the ones that do end up generating that sort of division.

It's critical to figure out how to fix a broken debate like the one over climate change.

But we should also be figuring out why this sort of weird pathology happens and how we can avoid it.

That's one of the objectives of the science of science communication. Indeed, it's probably the most important contribution this science can make to the welfare of democratic societies.

Second, if you notice all these boring, nonpolarized forms of science-informed risk regulation, you'll realize that the thing that makes some issues become polarized can't be lack of public knowledge about the science surrounding them.

It's true that members of the public don't know much of the science on climate change, nuclear power, the HPV vaccine, etc. But the public doesn't know anything more about the science relating to the vast range of issues that fail to generate polarization.

Members of the public wouldn't score higher on a "formaldehyde science literacy" test than a climate science literacy test.

Formaldehyde scientists aren't better "science communicators" than climate scientists. 

That doesn't mean, either, that members of the public are necessarily uninformed.

Obviously, members of the public couldn't possibly be expected to know and understand all the science that is relevant to protecting their health and wellbeing--whether that science informs regulations that protect them from exposure to toxic substances or medical procedures that protect them from diseases. 

But just as a reflective individual doesn't have to have an MD to participate in an informed and meaningful way in his or her receipt of high-quality medical care, so a  reflective citizen doesn't have to have a degree in toxicology or biology to know whether his or her government is making sensible decisions about how to protect the public generally from exposure to environmental toxins.  

In both cases, such a person only has to be able to make an informed judgment that the professionals he or she is relying on to use scientific knowledge know what they are doing and are using what they know to benefit him or her and others whose interests those agents are supposed to be promoting.

Reflective citizens do that all the time.  And one of the aims of science communication is to create and protect the conditions in which democratic citizens can reliably exercise this rational recognition capacity.

Those conditions are missing for climate change and other issues that culturally polarize the public.  In connection with those issues, citizens' rational recognition faculty is being impaired by toxins -- not ones emitted from "composite wood products" but ones being transmitted, either deliberately or by misadventure, by partisan discourse.

One goal of the science of science communication, then, is to protect the quality of the science communication environment from contamination by antagonistic cultural meanings that convert boring, mundane issues of fact that admit of scientific inquiry into divisive symbols of tribal loyalty.

To acquire and use the knowledge necessary to do that, researchers must avoid fixating only on pathological cases like climate change and ignoring the "silent denominator" (or silent members of the denominator) comprising all the science-informed policy issues that don't generate cultural polarization.

We can't expect to be able to accurately prevent and, failing that, diagnose and treat science-communication pathologies unless we start with an informed and psychologically realistic account of what citizens know -- and how they come to know it -- in a healthy body politic.

Hey--did you hear about the Chemical Safety Improvement Act that is garnering bipartisan support in the Senate?!  

I didn't think so.

Wednesday
May292013

The impact of "science consensus" surveys -- a graphic presentation

I am really really tired of this topic & am guessing everyone else is too. And for reasons stated in last couple of posts, I think a "market consensus" measure of belief in global warming would be a much more helpful way to measure and communicate the weight & practical importance of scientific evidence on climate change than any number of social science surveys of scientists or of scientific papers (I think we are up to 7 now).

But since I had occasion to construct this graphic to help a group of professional science communicators assess whether the failure to communicate scientific consensus can plausibly be viewed as the source of persistent cultural polarization over climate change in the US, I thought I'd post it.  I've included some "stills," but watch it in slide show mode if you want to get the nature of the empirical proof it embodies.

And here are the answers to the predictable questions:

1. Does that mean "scientific consensus" is irrelevant?

No.

People of all cultural outlooks support policies they believe are consistent with scientific consensus.

But they have to figure out what scientific consensus is, which means they have to assess any evidence that is presented to them on that.

In the current climate of polarization, members of opposing cultural groups predictably credit and discredit such evidence in patterns that reinforce their belief that the scientific consensus is in fact consistent with the position that predominates in their cultural group.

Until the antagonistic cultural meanings that motivate this selective crediting and discrediting of evidence are dispelled, just flooding the information market with more and more studies of "scientific consensus" won't do any good.

Indeed, it will only amplify the signal of cultural contestation that sustains polarization. 

Meanings first, then facts.

2. Does this mean we should ignore people who are misinforming the public?

No.

But it means that just "correcting" misinformation won't work unless you convey affirming meanings.  

Indeed, in a state of polarized meanings, rapid-response "truth squads" also amplify polarization because they reliably convey the meaning "this is what your side believes -- and we think you are stupid!"

Meanings first, then facts!

3. Does this mean we should just give up?

No.

The only thing anyone should give up is a style of communicating "facts" or anything else that amplifies the message that positions on climate are part of an "us-them" cultural struggle.   

The reason the US and many other liberal democracies are polarized on climate change is not that people are science illiterate or over-rely on heuristic-driven reasoning processes. It isn't that they haven't been told that human CO2 emissions increase global temperatures.  It isn't that they are being exposed to biased news reports or misled by misinformation campaigns. And it certainly isn't that no one has advised them yet about the numerous studies finding "97% of scientists ..." agree that human activity is causing climate change.

The reason is that we inhabit a science communication environment polluted with toxic partisan meanings on climate change.

Conveying to people -- a large segment of the population in the US & in other countries too-- that accepting evidence on climate change means accepting that members of their cultural community are stupid or corrupt is itself a form of science-communication pollution.  

If you don't think that many ways of communicating "facts" (including the extent of scientific consensus on climate change) convey that meaning, then you just aren't paying attention.

If you think there's no way to communicate facts that avoids conveying this meaning, and in fact affirms the identity of culturally diverse people, you aren't thinking hard enough.

Tuesday
May282013

Now, getting back to disgust: we've done guns & drones; what about *vaccines*?

In a temporary triumph over entropy, I happened upon this really interesting paper -- actually, it's a book chapter -- by philosopher Mark Navin.

Navin uses an interpretive, conjectural style of analysis, mining the expression of anti-vaccine themes in popular discourse.  

I think he is likely overestimating the extent of public concern about vaccines. As Seth Mnookin has chronicled, there is definitely an "anti-vaccine" subculture, and it is definitely a menace--particularly when adherents of it end up concentrated in local communities. But they are a tiny, tiny minority of the population. Childhood vaccination rates have been 90-95% (depending on the vaccine), & exemption from vaccination under 1%, for many many years without any meaningful changes.

But I don't think this feature of the paper is particularly significant or casts doubt on Navin's extraction of the dominant moral/emotional themes that pervade anti-vaccine discourse.  Disgust--toward puncturing of the body with needles and the introduction of foreign agents into the blood; toward the aspiration to substitute fabricated and self-consciously managed processes for the ones that "nature" has created for governing human health (including nurturing and protection by mothers)--unmistakably animates the sentiments of the vaccine opponents, historical and contemporary, whom Navin surveys.

There are two cool links between Navin's account & the themes explored in my previous posts.  One is the degree to which the evaluative orientation in these disgust sensibilities cannot be reduced in a satisfactory way to a "conservative" ideology or "moral" outlook.

Navin cites some popular works that suggest that anti-vaccine sentiment is correlated with a "left wing" or "liberal" political view. I've never seen any good evidence of this & the idea that something as peculiar -- as boutiquey -- as being anti-vaccine correlates w/ any widespread cultural style strikes me as implausible. But it is clear enough from Navin's account that the distinctive melange of evaluative themes that inform "disgust" with vaccines are not the sorts of things we'd expect to come out of the mouth of a typical political conservative (or typical anything, really).

This feature of the analysis is in tension with the now-popular claim in moral psychology-- associated most conspicuously with Jonathan Haidt and to a lesser degree with Martha Nussbaum -- that "disgust" is a peculiarly or at least disproportionately "conservative" moral sentiment as opposed to a "liberal" one  (frankly, I think it is odd to classify people in these ways, given how manifestly non-ideological the average member of the public is!). That was a point I was stressing in my account of the role of disgust in aversion to guns (and maybe drones, too!).

The second interesting element of Navin's account is the relationship between disgust and perceptions of harm.  Navin notes that in fact those disgusted by vaccines inevitably do put primary emphasis on the argument that vaccines are inimical to human health.  They rely on "evidence" to make out their claim. But almost certainly what makes them see harm in vaccines -- what guides them selectively to credit and discredit evidence that vaccines poison humans and weaken rather than bolster immunity -- is their disgust with the cultural meaning of vaccines.

This point, too, I think is in tension with the contemporary moral psychology view that sees "liberals" as concerned with "harm" as opposed to "purity," "sanctity" etc.  

The alternative position -- the one I argued for in my previous posts -- is that the moral sensibilities of "liberals" are guided by disgust every bit as much as those of "conservatives," who are every bit as focused, consciously speaking, on "harm" as liberals are.  Both see harm in what disgusts them -- and then seek regulation of such behavior or such activities as a form of harm prevention.  What distinguishes "liberals" and "conservatives" is only what they find disgusting, a matter that reflects their adherence to opposing cultural norms.

Although the people Navin is describing aren't really either "liberals" or "conservatives" -- and in fact don't subscribe to cultural norms that are very widespread at all in contemporary American society -- his account supports the claim that disgust is in fact a universal moral sentiment, and one that universally informs perceptions of harm.

In this respect, he is aligned with William Miller and Mary Douglas, both of whom he draws on.

Cool paper -- or book chapter!  Indeed, I'm eager to find & read the rest of the manuscript.

Sunday
May262013

Money talks, & without the bias of cultural cognition: so why not listen?

Logic of prediction markets explained by professional science communicators

Great ongoing conversation following last post, on how market behavior furnishes alternatives to social science surveys of scientist opinion or scientific literature on the weight & practical importance of science relating to climate change.  Urge others to join in, & those participating to continue.

Basically the point is this: 

1. A reflective person could understandably be uncertain how to assess the weight of scientific evidence on climate change and its practical impact (indeed, anyone who professes not to understand this proves only that he or she is not reflective).

2. Such a person can't reasonably be expected to see a social scientist's opinion survey of natural scientists or literature survey of peer-reviewed articles as settling the matter. In constructing the sample for such a survey, the social scientist has to make a judgment about which scientists or which scientific papers to include in the sample. Evaluating the adequacy of the sample-inclusion criteria used for that purpose will confront a reasonable person with issues as open to dispute as the ones that he or she would have had to resolve to assess the weight and practical significance of scientific evidence on climate change. Indeed, many of the issues will be exactly the same.

3. However, a reasonable person would see an index of securities (and like instruments) whose value depends on global warming actually occurring as helpful evidence in such circumstances. Market actors are economically, not ideologically, motivated. Moreover, cognitive biases are likely to cancel out, leaving only the signal associated with informed assessments, by multiple rational and self-interested actors, of the weight and practical importance of the best available evidence on climate. Indeed, such a person could treat movement in the value of such instruments in relation to the publication of scientific papers or the issuance of IPCC reports, etc., as a measure of the soundness of those scientific assessments.
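
To make point 3 concrete, here is a minimal numerical sketch -- every figure in it is invented for illustration -- of how such a person might fold the price movement of a warming-linked index into her prior beliefs via a simple Bayesian update:

```python
# All numbers are hypothetical, chosen only to show the mechanics.

prior = 0.50  # prior probability that significant warming will occur

# Suppose a warming-linked index jumps after an IPCC report, and the
# observer judges such a jump 3x more likely in a world where warming
# is real than in one where it is not (likelihood ratio = 3).
likelihood_ratio = 3.0

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"posterior probability: {posterior:.2f}")  # 0.75
```

The market's usefulness on this account is precisely that the likelihood ratio it supplies is generated by actors betting money rather than signaling cultural identity.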

Here's another thing:

If reasonable people see that other reasonable people, including ones whose priors are different from theirs, are also willing to treat such an index as a relevant source of evidence that gives them reason to adjust their priors in one way or another (& who don't make the science-illiterate mistake of thinking that "evidence" "proves" things as opposed to supplying reason for treating a hypothesis as more or less likely to be true than one otherwise would have estimated), they'll be able to observe evidence of how many people are willing to proceed in this open-minded way.

That evidence not only allows them to adjust their priors about how many people are like that; it also supplies them, as emotional and moral reciprocators, w/ reason to contribute to the common good of being a person of exactly that sort, modeling for the rest of humanity how sensible people w/ different perceptions about a matter subject to empirical investigation should proceed.

Maybe this would catch on?

So let's listen to the money people and let them lead us into a love-filled, harmonious world.

BTW, if such an index already exists, I wouldn't be surprised. I'd be surprised if it didn't.  So anyone who knows where to find it, please speak up.  

The index, btw, has to consist in securities (and the like) that reflect economic opportunities created by global warming.

It cannot include economic opportunities created by government policies to promote carbon-reduction.  That market will reflect expectations about political forces, not natural ones (a matter that might be interesting but that isn't probative of beliefs in whether climate change will occur--only in what sorts of things will occur in democratic politics, which is governed by its own peculiar laws).

Please join the discussion -- in the comment thread for the "97% of insurance companies -- & hedge funds-- agree!" post.

Friday
May242013

More market consensus on climate change: 97% of insurance companies agree (& hedge funds too!)

This is by no means the only example of "market consensus" on climate change.  

 
At the same time that members of the insurance industry are taking action to mitigate their losses (by promoting adaptation; the "mitigate"/"adaptation" distinction is one of the many infelicities of climate-change speak), other commercial actors are eagerly leaping at the chance to profit from new economic opportunities, including ironically exploitation of oil reserves that can be accessed more readily as polar ice caps melt.

Why isn't this activity exploited more aggressively for communication by those trying to promote public engagement with climate change? Those who doubt the scientific consensus--either because they think it is being calculated incorrectly by social scientists who use one or another method to measure it or because they think climate scientists are biased by ideology, group think, or research-funding blandishments--presumably ought to find the opinion of market actors, who are putting their money where their mouth is (actually, they don't talk much; they are too busy investing), more probative?

The answer, I conjecture, tells us something about the motivations--mainly unconscious, of the cultural cognition sort--of those on both sides of the debate.

Too many climate-change advocates have a hard time seeing/using evidence of this sort because it involves mining insight (as it were; new mining opportunities are also being created by melting permafrost) from the rationality of market behavior, not to mention recognizing that climate change does in fact involve a balance of positive and negative effects, even if on balance it is negative.

At the same time, too many climate skeptics are unwilling to acknowledge evidence of any sort--even the truth-corroborating price signal of self-interested market behavior!--that lends credence to the scientific underpinnings of those who are making the case for effective collective action to avoid the myriad welfare-threatening upshots of a warming earth. So this evidence doesn't register on them either.
Might this be it?

If so, I suppose we should look on the bright side: the two sides are agreeing on something, even if it is simply to ignore one and the same piece of evidence on account of it not fitting their respective worldviews.

Wednesday
May222013

On the science communication value of communicating "scientific consensus": an exchange

So either (1) I am a genius in communication after all (P = 0.03), having provoked John Cook and Scott Johnson to offer thoughtful reflections by strategically feigning a haughty outburst (I acknowledge that I expressed my frustration in a manner that I am not proud of). Or (2) Cook & Johnson are sufficiently motivated by virtuous commitment to intellectual exchange to create one notwithstanding my bad manners (P = 0.97).  

I don’t propose we conduct any sort of experiment to test these competing hypotheses but instead just avail ourselves of our good fortune.

To enable them to have an expression of my position that admits of and is worthy of reasoned response, I’ve reduced the source of my exasperation/frustration with the Cook et al. study to 4 points.  John and Scott’s replies (reflecting their points of view as a scholar of science communication and a science journalist, respectively), follow. 

What should follow that, I hope, are additional reflections and insights from others in the “comments” thread.

Kahan:

1. Scholarly knowledge. The Cook et al. study, which in my view is an elegantly designed and executed empirical assessment, doesn’t meaningfully enlarge knowledge of the state of scientific opinion on climate change. The authors find that 97% of the papers published in peer-reviewed journals between 1991 and 2011 “endorsed” the “scientific consensus” view that human activity is a source of global warming. They report further that a comparable percentage of scientists who authored such papers took that position....

continue reading

Cook:

Many thanks to Dan Kahan for the opportunity to discuss this important (and fascinating) issue of communicating the scientific consensus. I fully concur with Dan’s assertion that we need to be evidence-based in how we approach science communication. Indeed, my PhD research is focused on the very issue of attitude polarization and the psychology of consensus. The Cultural Cognition project, particularly the paper Cultural Cognition of Scientific Consensus, has influenced my experiment design. I’m in the process of analysing data that I hope will guide us towards effective climate communication....

continue reading

Johnson:

Let me preface this by laying out my biases. I’m thinking about more than just this study/story, though I did cover it. (So there’s that.) I like to cover new studies, and I’d rather not hear that the hard work I put in to that end is pointless, so I’m reacting to Dan’s opinion as it relates to media coverage of studies like this. As an educator with a science background, I also have deficit model motivations—even as I understand that buckets aren’t lining up to be filled and that many are equipped with strainers and sometimes check valves. I am still, in essence, a pourer of what I judge to be useful knowledge. If I didn’t think that was the case, I’m not sure why I’d be trying to communicate (unless it somehow made for lucrative reality television, I guess)....

Continue reading

Tuesday
May212013

Cultural resistance to the science of science communication

I’m in Norway. Just stepped off the plane in fact.

Am going to be giving an address at a conference sponsored by the Center for International Climate and Environmental Research in Oslo. The conference is for professional science communicators (mainly ones associated with universities), and the topic is how to promote effective public dissemination of and engagement with the IPCC's 5th Assessment Report, which will be released officially in October.

Obviously, I will stress that it all comes down to making sure the public gets the message that  the IPCC report reflects “scientific consensus.”

Actually, I will try to communicate something that is very hard to make clear.

When I have the opportunity (and privilege) to address climate scientists and professional science communicators, I often feel that I’m deflating them a bit by advising them that I don’t believe that what scientists say—independently of what they do—is of particular consequence in the formation of public opinion. The average American can’t name a Supreme Court Justice. Say “James Hansen” and he or she is more likely to select “creator of the Muppets” than “climate scientist” on a multiple choice quiz.  Anyone who thinks things could or should be otherwise, moreover, doesn’t have a clue what it is like to be a normal, average, busy person.

There are some genuinely inspired citizen scientist communicators in our society. But to expect them to bear the burden of fixing the science communication problem betrays a naïve—and pernicious—model of how science is communicated.

What’s known to science becomes known to ordinary people—ones to whom what science knows can in fact be quite vital—through a dense network of cultural intermediaries. Moreover, in pluralistic liberal democracies (which are in fact the only types of society in which science can flourish), there will necessarily be a plurality of such networks operating to inform a diverse array of groups whose members share distinctive cultural commitments.

These networks by and large all do a great job. Any that didn’t—any that consistently misled its members about what’s known to science—wouldn’t last long, given the indispensable contribution scientific knowledge makes to human welfare.

The spectacle of cultural conflict over what’s known to science is a pathology—both in the sense of being inimical to human well-being and in the sense of being rare. The number of health- and policy-relevant scientific insights on which there is conflict akin to that over climate science is minuscule relative to the vast number on which there isn’t.

Something has to happen—something unusual—to invest a particular belief about some otherwise mundane issue of fact with cultural meanings that express one’s membership in and loyalty to a particular group.

But once that happens, the value that an ordinary member of the public gets from persisting in a belief that signifies his or her group commitments will likely far outweigh any personal cost from being mistaken. Clearly this is so for climate change: nothing an ordinary person believes about the science of climate change will have any impact on the climate—or any impact on policies to offset any adverse impact human activity might be having on it—because he or she just doesn’t matter enough (as consumer, as voter, as “public deliberator”) to have any impact; but if he or she takes the “wrong” position relative to the one that signifies loyalty to his or her cultural group, the amount of suffering that person has to endure can be immense.

The pathology of cultural conflict over a societal risk like climate change can’t be effectively treated, then, by radiating the patient with a bombardment of “facts.”

It can be treated only with the creation of pluralistic meanings. What needs to be communicated is that the facts on climate change, whatever they might be, are perfectly consistent with the cultural commitments of all the diverse groups that inhabit a pluralistic liberal democracy.  No one has to choose between believing them (or believing anything whatsoever about them) and being who one is as a person with a particular cultural identity.

As I said, communicating this point about science communication is difficult.  Not so much because the ideas or the concepts—or the evidence that shows they are more than a just-so story—are all that hard to explain.

The problem has to do with a kind of cultural resistance to the message that communicating science is about protecting the conditions in which the natural, spontaneous social certification of truth can be expected to happen.

The culture that resists this message, moreover, is not that of “hierarchical individualists” or “egalitarian communitarians.”

It’s the culture of the Liberal Republic of Science, of which we are all citizens.

Nullius in verba.  It’s so absurd! Yet so compelling. So much who we are.

