Tuesday
Feb092016

"They already got the memo" part 2: More data on the *public consensus* on what "climate scientists think" about human-caused global warming

So as I said, CCP/Annenberg PPC has just conducted a humongous study on climate science literacy.

Yesterday I shared some data on the extent to which ordinary members of the public are politically polarized both on human-caused global warming and on the nature of scientific consensus relating to the same.

I said I was surprised b/c although there was plenty of polarization on both questions, there was less of it over whether “expert climate scientists” agree that human behavior is causing the earth’s temperature to rise.

In other studies CCP and others have done, those two questions—are humans causing global warming? do scientists believe that?—generate answers that are more or less interchangeable with one another. 

Indeed, the answers tend to be so highly correlated that it’s absurd to treat them as measuring separate things at all.  Rather, they behave—as all manner of facts relating to a putative societal risk tend to do—as indicators of a latent or unobserved affective stance: basically a generic pro- or con-attitude toward the assertion that humans are causing climate change.

In addition, the polarization diminished as subjects’ “Ordinary Science Intelligence” assessment scores increased—because as conservative respondents became more proficient in their capacity to make sense of science by this measure, they became more likely to acknowledge that “expert climate scientists” agree that human activity is heating the planet.

That’s super interesting.  Usually when a societal risk becomes entangled in antagonistic cultural meanings, reasoning proficiency (measured with various types of critical reasoning assessments) magnifies polarization relating to any empirical issues relating to it. 

I see this as evidence that perceptions of “scientific consensus” are starting to become detached from the general identity-defining affective orientation that people with different cultural commitments have on climate change. Or to put it differently, beliefs about scientific consensus are now a less reliable indicator of that identity.

But let’s not get carried away: there was still a huge amount of polarization on whether there is scientific consensus on human-caused climate change.

Okay.

Today I want to share with this site’s loyal 14 billion regular subscribers and whoever else is tuning in what ordinary members of the public think climate scientists have concluded on more specific questions relating to human-caused climate change.

The study (featured in Climate-Science Communication and the Measurement Problem) was in fact a follow-up to an earlier CCP-APPC one that produced the “Ordinary Climate Science Intelligence” assessment, or OCSI_1.0.

The goal of OCSI_1.0 was to disentangle the measurement of “who people are”—the responses that evince the affective stance toward climate change characteristic of their cultural group—from “what they know” about climate science.

That basic mission was successfully accomplished. Nevertheless, the assessment instrument, OCSI_1.0, was itself only so-so. There wasn’t any particular reason to see the kind of climate-science comprehension it measured as all that relevant to ordinary people’s lives.  In addition, the instrument’s measurement precision was concentrated at the high-score end of the distribution.

The current study is part of the effort to develop OCSI_2.0, which will have more interesting items and also the power to discern differences across a larger portion of the range of knowledge levels within the general population.

Well, here is how 600 subjects (U.S. adults drawn from a nationally representative panel) responded to some of the OCSI_2.0 candidate items.

 

For me, these are the key points:

First, there’s barely any partisan disagreement over what climate scientists believe about the specific causes and consequences of human-caused climate change.

Sure, there’s some daylight between the response of the left-leaning and right-leaning respondents. But the differences are trivial compared to the ones in these same respondents’ beliefs about both the existence of climate change and the nature of scientific consensus.

There is “bipartisan” public consensus in perceptions of what climate scientists “know,” with minor differences only in the intensity with which respondents of opposing outlooks hold those particular impressions.

Second, ordinary members of the public, regardless of what they "believe" about human-caused climate change, know pitifully little about the basic causes and consequences of global warming.

Yes, a substantial majority of respondents, of diverse political views, know that climate scientists understand CO2 emissions to be warming the planet, and that climate scientists expect rising temperatures to result in flooding in many regions.

But they also mistakenly believe that, according to climate scientists, the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will “increase the risk of leukemia” and “skin cancer in human beings,” and “reduce photosynthesis by plants.”

They think, incorrectly, that climate scientists have determined that “a warmer climate over the next few decades will increase water evaporation, which will lead to an overall decrease of global sea levels.”

“Republican” and “Democrat” alike, ordinary members of the public also mistakenly attribute to “climate scientists” the proposition that “human-caused global warming has increased the number of tornadoes in recent decades,” a claim that Bill Nye “the science guy” believes but that actual climate scientists don’t and in fact regularly criticize advocates for leaping to assert every time a tornado kills dozens of people in one of the plains states.

Third, the overwhelming majority of ordinary citizens, regardless of their political persuasions, already recognize and agree that climate scientists have concluded that global warming is putting human beings in grave danger.

The candidate OCSI_2.0 items (only a portion of which are featured here) form two scales.

When one counts up the number of correct responses, OCSI_2.0 measures how much people genuinely know about the basic causes and consequences of human-caused global warming.

Alternatively, when one counts up the number of responses, correct or incorrect, that evince a perception of the risks that human-caused climate change poses, OCSI_2.0 measures how dreaded climate change is as a societal risk.

No matter what they “believe” about human-caused climate change, very few people do well on the first, knowledge-based scale.

And no matter what they “believe” about human-caused climate change, the vast majority of them score extremely high on the second, dreadedness scale.
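For concreteness, here’s a toy sketch of the two scoring schemes. The items and answer keys below are hypothetical stand-ins (not actual OCSI_2.0 items), just to show how one set of responses yields two different scores.

```python
# Toy sketch: scoring one respondent's answers two ways.
# Items and keys are hypothetical stand-ins, not actual OCSI_2.0 items.
responses = {                        # one respondent's true/false answers
    "co2_warms_planet": True,
    "warming_causes_flooding": True,
    "co2_causes_leukemia": True,         # wrong, but risk-evincing
    "co2_reduces_photosynthesis": True,  # wrong, but risk-evincing
}

correct_key = {                      # what climate scientists have concluded
    "co2_warms_planet": True,
    "warming_causes_flooding": True,
    "co2_causes_leukemia": False,
    "co2_reduces_photosynthesis": False,
}

risk_evincing_key = {                # answers attributing danger to warming
    "co2_warms_planet": True,
    "warming_causes_flooding": True,
    "co2_causes_leukemia": True,
    "co2_reduces_photosynthesis": True,
}

knowledge = sum(responses[i] == correct_key[i] for i in responses)
dreadedness = sum(responses[i] == risk_evincing_key[i] for i in responses)
print(knowledge, dreadedness)  # 2 4: low knowledge, high dread
```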

None of this should come as a surprise. This is exactly the state of affairs revealed by OCSI_1.0.

Now in fact, one might think that it’s perfectly fine that ordinary citizens score higher on the “climate change dreadedness” scale than they do on the “climate change science comprehension” one.  Ordinary citizens only need to know the essential gist of what climate scientists are telling them--that global warming is putting them in danger; it’s those whom ordinary citizens charge with crafting effective solutions who have to get all the details straight.

The problem though is that democratic political discourse over climate change (in most but not all places) doesn’t measure either what ordinary people know or what they feel about climate change.

It measures what the item on “belief in” climate change does: who they are, whose side they are on, in an ugly, pointless, cultural status competition being orchestrated by professional conflict entrepreneurs.

The “science communication problem” for climate change is how to steer the national discussion away from the myriad actors-- all of them--whose style of discourse creates these antagonistic social meanings.

“97% consensus” social marketing campaigns (studies with only partially and misleadingly reported results notwithstanding)  aren’t telling ordinary Americans on either side of the “climate change debate” anything they haven't already heard & indeed accepted: that climate scientists believe human-caused global warming is putting them in a position of extreme peril.

All the "social marketing" of "scientific consensus" does is augment the toxic idioms of contempt that are poisoning our science communication environment. 

It's precisely because of the  assaultive, culturally partisan resonances that this "message" conveys that the question "is there scientific consensus on global warming?," like the question "are humans causing global warming?," measures who people are and not what they know about climate change risks.

More on that “tomorrow.”

 

 

Monday
Feb082016

As their science comprehension increases, do members of the public (a) become more likely to recognize scientific consensus exists on human-caused climate change; (b) become more politically polarized on whether human-caused climate change is happening; or (c) both?!

So CCP and the Annenberg Public Policy Center just conducted a humongous and humongously cool study on climate science literacy. There’s shitloads of cool stuff in the data!

The study is a follow-up to an earlier CCP/APPC study, which investigated whether it is possible to disentangle what people know about climate science from who they are.

“Beliefs” about human-caused global warming are an expression of the latter, and are in fact wholly unconnected to the former.  People who say they “don’t believe” in human-caused climate change are as likely (which is to say, extremely likely) to know that human-generated CO2 warms the earth’s atmosphere as are those who say they do “believe in” human-caused climate change.

They are also both as likely--which is to say again, extremely likely--to harbor comically absurd misunderstandings of climate science: e.g., that human-generated CO2 emissions stifle photosynthesis in plants, and that human-caused global warming is expected to cause epidemics of skin cancer.

In other words, no matter what they say they “believe” about climate change, most Americans don’t really know anything about the rudiments of climate science.  They just know -- pretty much every last one of them--that climate scientists believe we are screwed.

The small fraction of those who do know a lot—who can consistently identify what the best available evidence suggests about the causes and consequences of human-caused climate change—are also the most polarized in their professed “beliefs” about climate change.

Interesting.

The central goal of this study was to see what “belief in scientific consensus” measures—to see how it relates to both knowledge of climate science and cultural identity.

I’ll get to what we learned about that “tomorrow.”

But today I want to show everybody something else that surprised the bejeebers out of me.

Usually when I & my collaborators do a study, we try to pit two plausible but mutually inconsistent hypotheses against each other. I might expect one to be more likely than the other, but I don’t expect anyone including myself to be really “surprised” by the study outcome, no matter what it is. 

Many more things are plausible than are true, and in my view, extricating the latter from the sea of the former—lest we drown in “just so” stories—is the primary mission of empirical studies.

But still, now and then I get whapped in the face by something I really didn’t see coming!

This finding is like that.

But to set it up, here's a related finding that's  interesting but not totally shocking.

It’s that the association between identity and perceptions of scientific consensus on climate change, while plenty strong, is not as strong as the association between identity and “beliefs” in human-caused climate change.

This means that “left-leaning” individuals—the ones predisposed to believe in human-caused climate change—are more likely to believe in human-caused climate change than to believe there is scientific consensus, while the right-leaning ones—the ones who are predisposed to be skeptical—are more likely to believe that there is scientific consensus that humans are causing climate change than to actually “believe in” it themselves.

Interesting, but still not mind-blowing.

Here’s the truly shocking part:

Got that?

First, as science comprehension goes up, people become more polarized on climate change.

Still not surprising; that’s old, old, old,  old news.

But second, as science comprehension goes up, so does the perception that there is scientific consensus on climate change—no matter what people’s political outlooks are!

Accordingly, as relatively “right-leaning” individuals become progressively more proficient in making sense of scientific information (a facility reflected in their scores on the Ordinary Science Intelligence assessment, which puts a heavy emphasis on critical reasoning skills), they become simultaneously more likely to believe there is “scientific consensus” on human-caused climate change but less likely to “believe” in it themselves! 

Whoa!!! What gives??

I dunno.

One thing that is clear from these data is that it’s ridiculous to claim that “unfamiliarity” with scientific consensus on climate change “causes” non-acceptance of human-caused global warming.

But that shouldn’t surprise anyone. The idea that public conflict over climate change persists because, even after years and years of “consensus messaging” (including a $300 million social-marketing campaign by Al Gore’s “Alliance for Climate Protection”), ordinary Americans still just “haven’t heard” that an overwhelming majority of climate scientists believe in AGW is patently absurd.

(Are you under the impression that there are studies showing that telling someone who doesn't believe in climate change that “97% of scientists accept AGW” will cause him or her to change positions?  No study has ever found that, at least with a US general public sample.  All that the studies in question show--once the mystifying cloud of meaningless path models & 0-100 “certainty level” measures has been dispelled--is that immediately after being told that “97% of climate scientists believe in human-caused climate change,” study subjects will compliantly spit back a higher estimate of the percentage of climate scientists who accept AGW.  You wouldn't know it from reading the published papers, but the experiments actually didn’t find that the “message” changed the proportion of subjects who said they “believe in” human-caused climate change....)

These new data, though, show that acceptance of “scientific consensus” in fact has a weaker relationship to beliefs in climate change in right-leaning members of the public than it does in left-leaning ones. 

That I just didn’t see coming.

I can come up w/ various “explanations,” but really, I don’t know what to make of this! 

Actually, in any good study the ratio of “weird new puzzles created” to “existing puzzles (provisionally) solved” is always about 5:1. 

That’s great, because it would be really boring to run out of things to puzzle over.

And it should go without saying that learning the truth and conveying it (all of it) accurately are the only way to enable free, reasoning people to use science to improve their lives.

Friday
Feb052016

Is the controversy over climate change a "science communication problem?" Jon Miller's views & mine too (postcard from NAS session on "science literacy & public attitudes toward science")

Gave a presentation yesterday before the National Academy of Sciences committee that is examining the relationship between "public science literacy" & "public attitudes toward science."  It's really great that NAS is looking at these questions & they've assembled a real '27 Yankees-quality lineup of experts to do it.

Really cool thing was that Jon Miller spoke before me & gave a masterful account of the rationale and historical development of the "civic science literacy" philosophy that has animated the NSF Indicators battery.

There was zero disagreement among the presenters--me & Miller, plus Philip Kitcher, who advanced an inspiring Deweyan conception of science literacy--that the public controversy over climate science is not grounded in a deficit in public science comprehension.

It's true that the public doesn't know very much (to put it mildly) about the rudiments of climate science. But that's true of those on both sides, and true too in all the myriad areas of important, decision-relevant science in which there is no controversy and in which the vast majority of ordinary citizens nevertheless recognize and make effective use of the best available evidence.

Strikingly, Miller stated "the climate change controversy is not a 'science communication' problem; it's a political problem."

I think I agree but would put matters differently.  

Miller was arguing that the enduring conflict is not a result of the failure of scientists or anyone else to communicate the underlying information clearly but a result of the success of political actors in attaching identity-defining meanings to competing positions, thereby creating social & psychological dynamics that predictably motivate ordinary citizens to fit their beliefs to those that predominate within their political groups.

That's the right explanation, I'd say, but for me this state of affairs is still a science communication problem.  Indeed, the entanglement of facts that admit of scientific inquiry & antagonistic social meanings --ones that turn positions on them into badges of group membership & identity-- is the "science communication problem" for liberal democratic societies.  Those meanings, I've argued, are a form of "science communication environment pollution," the effective avoidance and remediation of which is one of the central objects of the "science of science communication."

I think the only thing at stake in this "disagreement" is how broadly to conceive of "science communication." Miller, understandably, was using the term to describe a discrete form of transaction in which a speaker imparts information about science to a message recipient; I have in mind the less familiar notion of "science communication" as the sum total of processes, many of which involve the tacit, orienting influence of social norms, that serve to align individual decisionmaking with the best available evidence, the volume of which exceeds the capacity of ordinary individuals even to articulate, much less deeply comprehend.

But that doesn't mean it exceeds their capacity to use that evidence, and to use it in a rational way, by effectively exercising appropriately calibrated faculties of recognition that help them to discern who knows what about what.  It's that capacity that is disrupted, degraded, rendered unreliable, by the science-communication-environment pollution of antagonistic social meanings.

I doubt Miller would disagree with this.  But I wish we'd had even more time so that I could have put the matter to him this way to see what he'd say! Kitcher too, since in fact the relationship of public science comprehension to democracy is the focus of much of his writing.

Maybe I can entice one or the other or both into doing a guest blog, although in fact the 14 billion member audience for this blog might be slightly smaller than the ones they are used to addressing on a daily basis. 

 


 

 

Thursday
Jan282016

CCP/Annenberg PPC Science of Science Communication Lab, Session 2: Measuring relative curiosity

During my stay here at APPC, we'll be having weekly "science of science communication lab" meetings to discuss our ongoing research projects.  I've decided to post a highlight or two from each meeting.

We just had the 2nd, which means I'm one behind.  I'll post the "session 1 highlight" "tomorrow."

One of the major projects for the spring is "Study 2" in the CCP/APPC Evidence-based Science Filmmaking Initiative.  For this session, we hosted two of our key science filmmaker collaborators, Katie Carpenter & Laura Helft, who helped us reflect on the design of the study.

One thing that came up during the session was the distribution of “science curiosity” in the general population.

The development of a reliable and valid measure of science curiosity—the “Science Curiosity Scale” (SCS_1.0)—was one of the principal objectives of Study 1.  As discussed previously, SCS worked great, not only displaying very healthy psychometric properties but also predicting with an admirable degree of accuracy engagement with a clip from Your Inner Fish, ESFI collaborator Tangled Bank Studios’ award-winning film on evolution.

Indeed, one of the coolest findings was that individuals who were comparably high in science curiosity (as measured by SCS) were comparably engaged by the clip (as measured by view time, request for the full documentary, and other indicators) irrespective of whether they said they “believed in” evolution.

Evolution disbelievers who were high in science curiosity also reported finding the clip to be an accurate and convincing account of the origins of human color vision.

But it’s natural to wonder: how likely is someone who disbelieves in evolution to be high in science curiosity?

The report addresses the distribution of science curiosity among various population subgroups.  The information is presented in a graphic that displays the mean SCS scores for opposing subgroups (men and women, whites and nonwhites, etc).

Scores on SCS (computed using Item Response Theory) are standardized. That is, the scale has a mean of 0, and units are measured in standard deviations.

The graphic, then, shows that in no case was any subgroup’s mean SCS score higher or lower than 1/4 of a standard deviation from the sample mean on the scale. The Report suggested that this was a reason to treat the differences as so small as to lack any practical importance.

Indeed, the graphic display was consciously selected to help communicate that.  Had the Report merely characterized the scores of subgroups as “significantly different” from one another, it would have risked provoking the Pavlovian form of inferential illiteracy that consists in treating “statistically significant” as in itself supporting a meaningful inference about how the world works, a reaction that is very, very hard to deter no matter how hard one tries.

By representing the scores of the opposing groups in relation to the scale's standard-deviation units on the y-axis, it was hoped that reflective readers would discern that the differences among the groups were indeed far too small to get worked up over—that all the groups, including the one whose members were above average in science comprehension (as measured by the Ordinary Science Intelligence assessment), had science curiosity scores that differed only trivially from the population mean (“less than 1/4 of a standard deviation--SEE???”).

But as came up at the session, this graphic is pretty lame.

Even most reflective people don’t have good intuitions about the practical import of differences expressed in fractions of a standard deviation.  Aside from being able to see that there's not even a trace of difference between whites & nonwhites, readers can still see that there are differences in science curiosity levels & still wonder exactly what they mean in practical terms.

So what might work better?

Why—likelihood ratios, of course! Indeed, when Katy Barnhart from APPC spontaneously (and adamantly) insisted that this would be a superior way to graph this data, I was really jazzed!

I’ve written several posts in the last yr or so on how useful likelihood ratios are for characterizing the practical or inferential weight of data.  In the previous posts, I stressed that LRs, unlike “p-values,” convey information on how much more consistent the observed data is with one rather than another competing study hypothesis.

Here LRs can aid practical comprehension by telling us the relative probabilities of observing members of opposing groups at any particular level of SCS.

In the graphics below, the distribution of science curiosity within opposing groups is represented by probability density distributions derived from the means and standard deviations of the groups’ SCS scores. 

As discussed in previous posts, study hypotheses can be represented this way: because any study is subject to measurement error, a study hypothesis can be converted into a probability density distribution of "predicted study outcomes" in which the “mean” is the predicted result and the standard deviation is the standard error associated with the measurement precision of the study instrument.

If one does this, one can determine the “weight of the evidence” that a study furnishes for one hypothesis relative to another by comparing how likely the observed study result was under each of the probability-density distributions of “predicted outcomes” associated with the competing hypotheses.

This value—which is simply the ratio of the “heights” of the points at which the observed value falls on the opposing curves—is the logical equivalent of the Bayesian likelihood ratio, or the factor in proportion to which one should update one’s existing assessment of the probability of some hypothesis or proposition.

Here, we can do the same thing.  We know the mean and standard deviations for the SCS scores of opposing groups.  Accordingly, we can determine the relative likelihoods of members of opposing groups attaining any particular SCS score. 
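Here’s a minimal sketch of that computation. The group means & SDs are illustrative assumptions (picked so the ratio lands near the 1.7x figure reported next), not the study’s actual estimates; SCS is standardized, so the population has mean 0 and SD 1.

```python
# Likelihood ratio as the ratio of density "heights" at a given score.
# Group means/SDs are assumed for illustration, not study estimates.
from scipy.stats import norm

above_avg_osi = norm(loc=0.21, scale=1.0)   # assumed group distribution
below_avg_osi = norm(loc=-0.21, scale=1.0)  # assumed group distribution

score = norm.ppf(0.90)  # SCS score at the population's 90th percentile

# How much more likely is that score if the person is above average
# in science comprehension than if below average? Just the ratio of
# the two pdf heights at that score:
lr = above_avg_osi.pdf(score) / below_avg_osi.pdf(score)
print(f"{lr:.1f}x")  # ~1.7x under these assumed parameters
```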

An SCS score that places a person at the 90th percentile is about 1.7x more likely if someone is “above average” in science comprehension (measured by the OSI assessment) than if someone is below average. 

There is a 1.4x greater chance that a person will score at the 90th percentile if that person is male rather than female, and a 1.5x greater chance that the person will do so if he or she has political outlooks to the "left" of center rather than the "right" on a scale that aggregates responses to a 5-point liberal-conservative ideology item and a 7-point party-identification item.

There is a comparable relative probability (1.3x) that a person will score in the 90th percentile of SCS if he or she is below average rather than above average in religiosity (as measured by a composite scale that combines response to items on frequency of prayer, frequency of church attendance, and importance of religion in one’s life).

A 90th-percentile score is about 2x as likely to be achieved by an “evolution believer” than by an “evolution nonbeliever.” 

Accordingly, if we started with two large, equally sized groups of believers and nonbelievers and it just so turned out that there were 100 total from the two groups who had SCS scores in the 90th percentile for the general population, then we’d expect 66 to be evolution believers and 33 to be nonbelievers (1 would be a Pakistani Dr).

When I put things this way, it should be clear that knowing how much more likely any particular SCS score is for members of one group than members of another doesn’t tell us either how likely any group's members are to attain that score or how likely a person with a particular score is to belong to any particular group!

You can figure that out, though, with Bayes’s Theorem. 

If I picked out a person at random from the general population, I'd put the odds at about 11:9 that he or she "believes in" evolution, since about 45% of the population answers "false" when responding to the survey item "Human beings, as we know them, evolved from another species of animal," the evolution-belief item we used.

If you told me the person was in the 90th percentile of SCS, I'd then revise upward my estimate by a factor of 2, putting the odds that he or she believes in evolution at 22:9, or about 70%.

Or if I picked someone out at random from the population, I’d expect the odds to be 9:1 against that person scoring in the 90th percentile or higher. If I learned the individual was above average in science comprehension, I’d adjust my estimate of the odds upwards to 9:1.7 (about 16%); similarly, if I learned the individual was below average in science comprehension, I’d adjust my estimate downwards to 15.3:1 (about 6%).

Actually, I’d do something slightly more complicated than this if I wanted to figure out whether the person was in the 90th percentile or above.  In that case, I’d in fact start by calculating not the relative probability of members of the two groups scoring in the 90th percentile but the relative probability of them scoring in the top 10% on SCS, and use that as my likelihood ratio, or the factor by which I update my prior of 9:1. But you get the idea -- give it a try!
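If you do want to give it a try, here’s a sketch that reproduces the updating above; the group distributions in the last step are, again, illustrative assumptions rather than study estimates.

```python
# Bayesian updating: posterior odds = prior odds x likelihood ratio.
from scipy.stats import norm

def posterior(prior_odds: float, lr: float) -> float:
    """Posterior probability from prior odds and a likelihood ratio."""
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

# Belief in evolution: prior odds 11:9; a 90th-percentile SCS score is
# about 2x as likely for a believer -> posterior odds 22:9, ~70%.
print(f"{posterior(11 / 9, 2.0):.0%}")   # 71%

# Top-10% SCS given above-average science comprehension: prior odds
# 1:9, update factor 1.7 -> odds 1.7:9, ~16%.
print(f"{posterior(1 / 9, 1.7):.0%}")    # 16%

# The refinement suggested above: update instead with the relative
# probability of each group landing anywhere in the top 10% (tail
# areas). Group means/SDs below are assumed for illustration.
above = norm(loc=0.21, scale=1.0)
below = norm(loc=-0.21, scale=1.0)
cutoff = norm.ppf(0.90)                  # population 90th-percentile score
lr_tail = above.sf(cutoff) / below.sf(cutoff)
print(f"{posterior(1 / 9, lr_tail):.0%}")  # a somewhat different answer
```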

So, then, what to say?

I think this way of presenting the data does indeed give more guidance to a reflective person to gauge the relative frequency of science curious individuals across different groups than does simply reporting the mean SCS scores of the group members along with some measure of the precision of the estimated means—whether a “p-value” or a standard error or a 0.95 CI.

It also equips a reflective person to draw his or her own inferences as to the practical import of such information.

I myself still think the differences in the science curiosity of members of the indicated groups, including those who do and don’t believe in evolution, are not particularly large and definitely not practically meaningful.

But actually, after looking at the data, I do feel that there's a bigger disparity in science curiosity than there should be among citizens who do & don't "believe in" evolution.  A bigger one than there should be among men & women too.  Those differences, even though small, make me anxious that there's something in the environment--the science communication environment--that might well be stifling development of science curiosity across groups.

No one is obliged to experience the wonder and awe of what human beings have been able to learn through science!

But everyone in the Liberal Republic of Science has an equal chance to form and satisfy such a disposition in the free exercise of his or her reason.

Obliterating every obstacle that stands in the way of culturally diverse individuals achieving that good is the ultimate aim of the project of which ESFI is a part.

Tuesday
Jan262016

Is the HPV vaccine still politically "hot"? You tell me....

Some more data from the latest CCP/Annenberg Public Policy Center "science of science communication" study.

I was curious, among other things, about what the current state of political divisions might be on the risk of the HPV vaccine.

At one point—back in 2006-10, I’d say—the efficacy and safety of the vaccine was indeed culturally contested.

The public was polarized; and state legislatures across the nation ended up rejecting the addition of the vaccine to the schedule of mandatory vaccinations for school enrollment, the first (and only) time that has happened (on that scale) for a vaccine that the CDC had added to the schedule of recommended universal childhood immunizations.

I’ve discussed the background at length, including the decisive contribution that foreseeable, avoidable miscues in the introduction of the vaccine made to this sad state of affairs.

I was wondering, though, if things had cooled off.

There is still low HPV vaccine uptake. But it’s unclear what the cause is.

Maybe the issue is still a polarizing one.

But even without continuing controversy one would expect rates to be lower insofar as the vaccine still isn’t mandatory outside of DC, Virginia and (recently) Rhode Island.

In addition, there’s reason to believe that pediatricians are gun-shy about recommending the vaccine b/c of their recollection of getting burned when the vaccine was introduced.  Their reticence might have outlived the public ambivalence, and now be the source of lower-than-optimal coverage.

So I plotted perceptions of various risks, measured with the Industrial Strength Risk Perception Measure, in relation to right-left political outlooks.

I put the biggies—global warming and fracking (plus terrorism, since I mentioned that yesterday and the issue generated some discussion)—in for comparison.

Also, childhood vaccinations, which, as I've discussed in the past, do not generate a particularly meaningful degree of polarization.

So what to make of this?

Obviously HPV is much less polarizing than the “biggies.”

But the degree of division on HPV doesn’t strike me, at least, as trivial.

Political division on the risks posed by other childhood vaccines is less intense, and still trivial or pretty close to it, particularly insofar as risk is perceived as “low” pretty much across the spectrum.  In truth, though, it strikes me as a tad bigger than what I’ve observed in the past (that’s worrisome. . . .).

But that’s all I have to say for now!

What do others think?

Here, btw, are the wordings for the ISRPM items: 

TERROR. Terrorist attacks inside the United States

FRACKING. “Fracking” (extraction of natural gas by hydraulic fracturing)

VACRISK. Vaccination of children against childhood diseases (such as mumps, measles and rubella)

HPV. Vaccinating adolescents against HPV (the human papillomavirus)

GWARMING. Global warming

 

Sunday
Jan242016

Weekend update: OMG-- we are now as politically polarized over cell phone radiation as over GM food risks!!!

Some "Industrial Strength Risk Perception Measure" readings from CCP/Annenberg Public Policy Center study administered this month: 


Interesting but not particularly surprising that polarization over the risk associated with unlawful entry of immigrants rivals that on global warming, which has abated recently about as much as the pumping of CO2 into the atmosphere.

Interesting but not surprising to learn (re-learn, actually) that it's nonsense to say Americans are "more afraid of terrorism than climate change b/c the former is more dramatic, emotionally charged" etc. That trope, associated with the "take-heuristics-and-biases-add-water-and-stir" formula of "instant decision science," reflects a false premise: those predisposed to worry about climate change do in fact see the risk it poses as bigger than that posed by domestic terrorism.

And completely boring at this point to learn for the 10^7th time that there is no political division over GM food risk in the general public, despite the constant din in the media and even some academic commentary to this effect.

Consider this histogram:

The flatness of the distribution is the signature of the sheer noise associated with responses to GM food survey questions, the administration of which, as discussed seven billion times in this blog (once for every two regular blog subscribers!), is an instance of classic "non-opinion" polling.

Ordinary Americans--the ones who don't spend all day reading and debating politics (99% of them)-- just don't give GM food any thought.  They don't know what GM technology is, that it has been a staple of US agricultural production for decades, and that it is in 80% of the foodstuffs they buy at the market.  

They don't know that the House of Reps passed a bipartisan bill to preempt state-labelling laws, which special-interest groups keep unsuccessfully sponsoring in costly state referenda campaigns, and that the Senate will almost surely follow suit, presenting a bill that University of Chicago Democrat Barack Obama will happily sign w/o more than 1% of the U.S. population noticing (a lot of commentators don't even seem to realize how close this non-issue is to completely disappearing).

Why the professional conflict entrepreneurs have failed in their effort to generate in the U.S. the sort of public division over GM foods that has existed for a long time in Europe is really an interesting puzzle.  It's much more interesting to try to figure out hypotheses for that & test them than to engage in a make-believe debate about why the public is "so worried" about them!

But neither that interesting question nor the boring, faux "public fear of GM foods" question was the focus of the CCP/APPC study.

Some other really cool things were.

Stay tuned!

Sunday
Jan172016

Status report on temporary CCP Lab relocation

We (my chief co-analyst & I) have arrived & resumed operations.

A short photojournal of our relocation process:

1. Travelling (in custom-designed unit to avoid annoying paparazzi)
 

2. Wrestling w/ research problem in new work space
 

3. Taking a short break ....
 

 

Friday
Jan152016

"I'm going to Jackson, I'm gonna mess around... " Well, Philly, actually

As of today, and until the end of the academic yr, I will be at the Annenberg Public Policy Center at the University of Pennsylvania, as a resident scholar in their amazing & inspiring Science of Science Communication project.

Promise to write often!

Thursday
Jan142016

"Evidence-based Science Filmmaking Initiative," Rep. No. 1: Overview & Conclusions

In the last couple of posts (one on evolution believers' & nonbelievers' engagement with an evolution-science documentary, and another on measuring "science curiosity") I've summarized some of the findings from Study No. 1 of the Annenberg/CCP ESFI--"Evidence-based Science Filmmaking Initiative."

Those findings are described in more detail in a study Report, which also spells out the motivation for the study and its relation to ESFI overall. 

Indeed, the Report is an unusual document--or at least an unusual sort of document to share. 

It isn't styled as announcing to the world the "corroboration" or "refutation" of some specified set of hypotheses.  It is in fact an internal report prepared for consumption of the investigators in an ongoing research project, one that is at a very preliminary stage!

Why release something like that?  Well, in part because even at this point in the investigation, we do think there are things to report that will be of interest to other scholars and reflective people generally, many of whom can be counted on to supply us w/ feedback that will itself make what we do next even more useful.

But in addition, one of the aims of the project, beyond generating evidence relevant to questions of interest to professional science filmmakers, is to model the process of using evidence-based methods to answer those very questions.

As explained in the ESFI "main page," the project is itself meant to supply evidence relevant to the hypothesis that the methods distinctive of the science of science communication can make a positive contribution to the craft of science filmmaking by furnishing those engaged in it with the information relevant to the exercise of their professional judgment. 

Of course, those engaged in ESFI, including its professional science communication members, believe (with varying levels of confidence!) that the science of science communication can make such a contribution; but of course, too, others, including other professional science filmmakers, are likely to disagree with this conjecture.

I wouldn't say "no point arguing about it" just b/c reasonable, and informed, people can disagree.

But I would say that these are exactly the conditions in which the argument will proceed in a more satisfactory way with additional information of the sort that can be generated by science's signature methods of disciplined observation, reliable measurement, and valid inference.

Hence ESFI: Let's do it--and see what a collaboration between professional science filmmakers and allied communicators, on the one hand, and "scientists of science communication," on the other, produces.  Then, on the basis of that evidence, those who are involved in science filmmaking can use their own reason to judge for themselves what that evidence signifies, and update accordingly their assessments of the utility of integrating the science of science communication into the craft of science filmmaking (not to mention related forms of science communication, like science journalism).

Precisely b/c the Report is an internal research document that takes stock of early findings in a multi-stage project, it furnishes a glimpse of the project in action.  It thus gives those who might consider using such methods a chance to form a more concrete picture of what these practices look like, and a chance to use their own experience-informed imaginations to assess what they might do if they could add evidence-based methods to their professional tool kits.

But of course this is only the start-- only the first Report, both of results and of the experience of doing evidence-based filmmaking.

A. Overview and summary conclusions

This report summarizes the preliminary conclusions of Study No. 1 in the Annenberg/CCP “Evidence-based Science Filmmaking Initiative.” The goal of the initiative is to promote the integration of the emerging science of science communication into the craft of science filmmaking. Study No. 1 involved an exploratory investigation of viewer engagement with an excerpt from Your Inner Fish, a documentary on human evolution.

The study had two objectives.

One was to gather evidence relevant to an issue of debate among science filmmakers: what explains the perceived demographic homogeneity of the audience for high-quality documentaries featured on NOVA, Nature, and similar PBS shows? Is the answer the distribution of tastes for learning about scientific discovery in the general population, or instead some feature of those shows collateral to their science content that makes them uncongenial to individuals who subscribe to certain cultural styles?

The other study objective was to model how evidence-based methods could be used by science filmmakers. Hard questions—ones for which the number of plausible answers exceeds the number of correct ones—are endemic to the activity of producing science films. By testing competing conjectures on an issue of consequence to their craft, Study No. 1 illustrates how documentary producers might use empirical methods to enlarge the stock of information pertinent to the exercise of their professional judgment in answering such questions.

Principal conclusions of Study No. 1 include:

1. By combining appropriately subtle self-report items with behavioral and performance-based ones, it is possible to construct a valid scale for measuring individuals’ general motivation to consume information about scientific discovery for personal satisfaction. Desirable properties of the “Science Curiosity Scale” (SCS) include its high degree of measurement precision, its appropriate relationship with science comprehension and other pertinent covariates, and (most importantly) its power to predict meaningful differences in objective manifestations of science curiosity.

2. By similar means, one can construct a satisfactory scale for measuring viewer engagement with material such as that featured in the YIF clip. Such a scale was again formed by combining self-report and objective measures, including duration of viewing time and requested access to the remainder of the documentary. Designated the “Engagement Index” (EI), the scale had the expected relationships with education and general science comprehension. The strongest predictor of EI was the study subjects’ SCS scores.

3. Engagement with the clip did not vary to a meaningful degree among subjects who had comparable SCS scores but opposing “beliefs” about human evolution. Evolution “believers” and “nonbelievers” with high SCS scores formed comparably positive reactions to the YIF clip. The show didn’t “convert” the latter. But like “believers” with high SCS scores, high-scoring “nonbelievers” were very likely to accept the validity of the science featured in the clip. This finding is consistent with research suggesting that professions of “disbelief” in evolution are an indicator of cultural identity that poses no barrier to engagement with scientific information on evolution, so long as that information avoids mistaking the exaction of professions of “belief” for the communication of knowledge.

4. Engagement with the show did vary across culturally identifiable groups. One cultural group, whose members are in fact distinguished in part by their pro-technology attitudes, appeared to display less engagement with the clip than was predicted by their SCS scores. This finding furnishes at least some support for the conjecture that some fraction of the potential audience for science documentary programming is discouraged from viewing it by uncongenial cultural meanings collateral to the science content of such programming.

5. But additional, more fine-grained analysis of the data is necessary. In particular, the science-communication-professional members of the research team must formulate concrete, alternative hypotheses about the identity of culturally identifiable groups who might well be responding negatively to collateral cultural meanings in the clip. Those hypotheses can in turn be used by the science-of-science-communication team members to develop more fine-tuned cultural profiles that can be used to probe such conjectures.

6. Depending on the results of these additional analyses, next steps would include experimental testing that seeks to modify collateral meanings or cues in a manner that eliminates any disparity in engagement among individuals of diverse cultural identities who share a high level of curiosity about science.

 

 

Wednesday
Jan132016

"SCS_1.0": Measuring science curiosity

 Yesterday, I discussed how evolution "believers" and "nonbelievers" reacted to a cool evolution-science  documentary. The data I described came from Study No. 1 of the Annenberg Public Policy Center/CCP "Evidence-based Science Filmmaking Initiative" (ESFI).

That data suggested that "belief" in evolution wasn't nearly as important to engagement with the documentary (Your Inner Fish, an award-winning film produced by ESFI collaborator Tangled Bank Studios) as was science curiosity.

Today I'll say a bit more about how we measured science curiosity.

Developing a valid and reliable science curiosity scale was one of the principal aims of Study No. 1.  As conceptualized here, science curiosity is not simply a transient state (Loewenstein 1994) but instead a general disposition, variable in intensity across persons, that reflects the motivation to seek out and consume scientific information for personal pleasure.

Obviously, a measure of this disposition would furnish science journalists, science filmmakers, and related science-communication professionals with a useful tool for perfecting the appeal of their work to those individuals who value it the most. But it could also make myriad other contributions to the advancement of knowledge. 

A valid science curiosity measure could be used to improve science education, for example, by facilitating investigation of the forms of pedagogy most likely to promote its development and harness it to promote learning (Blalock, Lichtenstein, Owen & Pruski 2008). Those who study the science of science communication (Fischhoff & Scheufele 2013; Kahan 2015) could also use a science curiosity measure to deepen their understanding of how public interest in science shapes the responsiveness of democratically accountable institutions to policy-relevant evidence.

Indeed, the benefits of measuring science curiosity are so numerous and so substantial that it would be natural to assume researchers must have created such a measure long ago.  But the simple truth is that they have not. 

“Science interest” measures abound. But every serious attempt to assess their performance has concluded that they are psychometrically weak and, more important, not genuinely predictive of what they are supposed to be assessing—namely, the disposition to seek out and consume scientific information for personal satisfaction (Blalock et al. 2008; Osborne, Simon & Collins 2003).

ESFI’s “Science Curiosity Scale 1.0” (SCS_1.0) is an initial step toward filling this gap in the study of science communication.  The items it comprises, and the process used to select (and combine) them, self-consciously address the defects in existing scales.

One of these is the excessive reliance on self-report measures. Existing scales relentlessly interrogate the respondents on the single topic of their own attraction to or aversion toward information on scientific discovery: “I am curious about the world in which we live,” “I find it boring to hear about new ideas,” “I get bored when watching science programs on TV,” etc.  Items like these are well-known to elicit responses that exaggerate respondents’ possession of desirable traits or attributes.

To counteract this dynamic, SCS_1.0 disguises its objectives by presenting itself as a general “marketing” survey.

Individual self-report items relating specifically to science were thus embedded in discrete blocks or modules, each consisting of ten or more items relating to an array of “topics” that “some people are interested in, and some people are not.” Items were presented in random order, each on a separate screen.

There was thus no reason for subjects to suspect that their motivation to learn about science was of particular interest or any opportunity for them to adjust the responses across items in a manner that overstated their interest in it.  A similar strategy was used to gather information on behavior reflecting such an interest, including visits to science museums, attendance at public science lectures, and the reading of books on scientific discovery.

SCS_1.0 also featured an objective performance measure. 

Well into the survey, subjects were advised that we were interested in their reactions to a news story “of interest” to them.  In order to assure that the story was one that in fact matched their interests, they were furnished with discrete news story sets, the shared subject matter of which was identified by a header and reinforced by the individual story headlines and graphics. One set consisted of science stories; the others consisted of stories on popular entertainment, on sports, and on financial news.

Subjects, we anticipated, were likely to find the prospect of reading a story and answering questions about it burdensome.  Accordingly, the selection of the science set rather than one of the others would be a valid indicator of genuine science interest.  Responses to this task were then used to validate the self-reported interest items, helping to furnish assurance of the genuineness of the latter.

When combined, the items displayed the requisite psychometric properties of a valid and reliable scale.  Their unidimensional covariance structure warranted the inference that they were measuring the same latent disposition.  Formed with item response theory, the composite scale weighted responses in relation to the level of that disposition manifested by responses to the particular items. The result was an index—SCS_1.0—that reflected a high degree of measurement precision along the entire population distribution of that trait (Embretson & Reise 2000).
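For readers curious what that unidimensionality check looks like in practice, here’s a minimal sketch on simulated data (not the actual SCS items or responses): when a single latent trait drives all the items, the first eigenvalue of the item correlation matrix dominates the rest.

```python
# Minimal sketch of a unidimensionality check on simulated item data.
# The data here are simulated; they are not the actual SCS responses.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_items = 1000, 10

theta = rng.normal(size=(n_subjects, 1))             # latent disposition
loadings = rng.uniform(0.4, 0.8, size=(1, n_items))  # item discriminations
items = theta @ loadings + rng.normal(size=(n_subjects, n_items))

# A dominant first eigenvalue of the item correlation matrix, with the
# rest clustered near or below 1, is the usual signature that the items
# are measuring a single latent disposition.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print(np.round(eigvals, 2))
```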

Finally, and most importantly, SCS_1.0 was behaviorally validated.

As detailed in ESFI Study Report No. 1, subjects were instructed to watch a 10-minute clip from the science documentary Your Inner Fish.  SCS_1.0 strongly predicted engagement with the clip as reflected not only in self-reported interest but also in objective measures such as duration of viewing time and subjects’ election (or not) to be furnished free access to the documentary as a whole.

SCS_1.0 is by no means understood to be an ideal science curiosity measure.  Additional testing is necessary, both to assure the robustness of the scale and to refine its powers to discern the motivation to seek out and consume science information for pleasure.

Moreover, SCS_1.0 was self-consciously designed to assess this disposition in adult members of the public; variants would be appropriate for specialized populations including elementary or secondary school students.

But what SCS_1.0 does do, we believe, is initiate a process that there's every reason to think will generate measures of genuine value to researchers interested in assessing science curiosity in the general public and in specialized subpopulations.  The researchers associated with CCP’s ESFI and other evidence-based science communication initiatives are eager to participate in that process.  But they are also eager to stimulate others to participate in it, either by building on and extending SCS_1.0 or by developing alternatives that genuinely predict behavior that manifests the motivation to seek out and consume scientific information.

Existing “science interest” measures just don’t do that.  SCS_1.0 shows that it is possible to do much better.

References

Besley, J.C. The state of public opinion research on attitudes and understanding of science and technology. Bulletin of Science, Technology & Society, 0270467613496723 (2013).

Blalock, C.L., Lichtenstein, M.J., Owen, S., Pruski, L., Marshall, C. & Toepperwein, M. In Pursuit of Validity: A comprehensive review of science attitude instruments 1935–2005. International Journal of Science Education 30, 961-977 (2008).

Embretson, S.E. & Reise, S.P. Item Response Theory for Psychologists (L. Erlbaum Associates, Mahwah, N.J., 2000).

Fischhoff, B. & Scheufele, D.A. The science of science communication. Proceedings of the National Academy of Sciences 110, 14031-14032 (2013).

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm, 14, 1-12 (2015).

Loewenstein, G. The psychology of curiosity: A review and reinterpretation. Psychological Bulletin 116, 75 (1994).

National Science Foundation. Science and Engineering Indicators, 2010 (National Science Foundation, Arlington, Va., 2014).

Osborne, J., Simon, S. & Collins, S. Attitudes towards science: A review of the literature and its implications. International Journal of Science Education 25, 1049-1079 (2003).

Reio, T.G., Jr., Petrosko, J.M., Wiswell, A.K. & Thongsukmag, J. The measurement and conceptualization of curiosity. The Journal of Genetic Psychology 167, 117-135 (2006).

Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006).

 

 

 

Monday
Jan112016

The (non)relationship between "believing in" evolution and being engaged by evolutionary science

Are Americans who “disbelieve in” human evolution as likely as those who “believe in” it to be interested in a science documentary on our species’ natural history? Would they accept the evidence in such a documentary as valid and convincing?

“No” and “no” would seem to be the obvious answers.  It’s not as if those who reject human evolution just haven’t been shown the proof yet. However skillfully presented, then, another exposition of evolutionary science, one might think, would be more likely to antagonize them than to pique their interest.

But Study 1 in CCP’s Evidence-based Science Filmmaking Initiative suggests that things aren’t that simple.

The study involved a nationally representative sample of 2500 U.S. adults.  In line with national survey findings that haven't changed for decades (Newport 2014), about 40% of the subjects selected “false” in response to the survey item “Human beings evolved from an earlier species of animal.”

Study subjects were instructed to view as much or as little as they chose of a 10-minute science documentary segment.  The segment was excerpted from Your Inner Fish, an award-winning documentary on evolution that was produced by ESFI collaborator Tangled Bank Studios and that was broadcast on PBS in 2014. The excerpt in question examined the origins of color vision in humans.

The study also measured subjects’ science curiosity and science comprehension. Both of these dispositions were positively correlated with subjects’ acceptance of evolution, but the strength of the relationships was quite modest. Among those who “believed” in evolution and among those who did not, there were ample numbers of study subjects high in science comprehension and science curiosity, and ample numbers who were high in neither.

Unsurprisingly, those subjects who ranked highest in science curiosity were substantially more engaged by the segment.  The more curious subjects were, the more likely they were to watch all or a substantial portion of it; to report finding it interesting; and to supply the information necessary to receive free access to the remainder of the documentary (responses aggregated to form an "Engagement Index").
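
For readers who like to see such things concretely, here is a minimal sketch of one way to aggregate measures like these into a composite index. The variable names and the z-score-averaging scheme are mine, purely for illustration; the Report describes how the study's actual Engagement Index was formed.

```python
import pandas as pd

# Hypothetical raw engagement measures for four subjects (illustration only):
df = pd.DataFrame({
    "view_secs": [600, 120, 480, 30],   # seconds of the 10-minute clip watched
    "interest": [4, 2, 5, 1],           # self-reported interest (1-5)
    "free_access": [1, 0, 1, 0],        # 1 = supplied info for free documentary access
})

# One simple aggregation scheme: standardize each measure, then average.
z = (df - df.mean()) / df.std(ddof=0)
df["engagement_index"] = z.mean(axis=1)
print(df)
```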

The intensity of the relationship between curiosity and engagement was no less pronounced, moreover, in subjects who said they did not “believe in” evolution than it was among those who said they did.  Low-curiosity  evolution “disbelievers” were in fact slightly less engaged than low-curiosity “believers.”  But neither of those low-curiosity subgroups was nearly as engaged by the clip as were evolution “nonbelievers” who scored high on the science curiosity scale. 

This is evidence, then, that yes, an evolution “nonbeliever” can enjoy an evolution-science documentary—one that uses experiments on monkeys, no less, to support inferences about the impact of random mutation, natural selection, and genetic variance on modern humans’ perception of color. 

How much an evolution “nonbeliever” will enjoy this documentary depends, the study suggests, on exactly the same thing that an evolution “believer’s” level of enjoyment does: how motivated he or she is to seek out and consume information on science for personal satisfaction--or in a word, how curious that person is about science.

Can an evolution “nonbeliever” find the evidence presented in such a documentary both valid and convincing?

The answer to this question is also "yes"—particularly if he or she is generally curious about science.

A low-curiosity evolution “nonbeliever” was about as likely to disagree as he was to agree that the clip was “convincing,” and that it “supplied strong evidence of how humans acquired color vision.”  But the probability a high-curiosity “nonbeliever” would agree with these characterizations of the validity of the information in the segment was well over 75%.
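
For the statistically curious: predicted probabilities like these typically come out of a logistic regression with a curiosity-by-belief interaction. Here is a minimal sketch with simulated stand-in data; the coefficients are contrived just to mirror the reported pattern, and the actual model and estimates are in the Report.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data (contrived; real data are in ESFI Study No. 1):
rng = np.random.default_rng(42)
n = 2500
df = pd.DataFrame({
    "scs": rng.normal(size=n),               # science curiosity (standardized)
    "believer": rng.integers(0, 2, size=n),  # 1 = professes "belief" in evolution
})
p = 1 / (1 + np.exp(-(0.7 + 0.7 * df["scs"] + 0.1 * df["believer"])))
df["agree"] = (rng.random(n) < p).astype(int)

m = smf.logit("agree ~ scs * believer", data=df).fit(disp=0)
# Predicted probability of agreeing for high- (+1 SD) vs. low- (-1 SD)
# curiosity evolution "nonbelievers":
print(m.predict(pd.DataFrame({"scs": [1.0, -1.0], "believer": [0, 0]})))
```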

Note, though, that the curious “nonbelievers” who indicated that they found the evidence “strong” and “convincing” did not “change their minds” on human evolution. 

Is that surprising? It won’t be to anyone familiar with empirical study of the relationship between professions of “belief” in evolution and comprehension of science.

That research consistently finds no correlation between how people respond to “true-false” human-evolution survey items and their ability to give a cogent account of natural selection, genetic variance, and random mutation (Shtulman 2006; Demastes, Settlage & Good 1995; Bishop & Anderson 1990). 

Researchers also find that students who say they don’t believe in evolution can learn these important insights just as readily as those who say they do believe in it—as long as the teacher doesn’t make the mistake of conveying that the point of the instruction is to extract a profession of “belief” from the former, a style of pedagogy that needlessly pits students’ interest in learning against their interest in being faithful to their cultural identities (Lawson & Worsnop 1992).

What people say they “believe” about human evolution doesn’t indicate what people know; it expresses who they are, culturally speaking (Long 2011). 

Professing rejection of evolution coheres with a cultural style that features religiosity (Roos 2012). It is precisely because the answer “false” signifies their defining commitments that individuals with this identity balk when educators make the mistake (itself a sign of inattention to empirical research) of conflating transmission of knowledge with extracting professions of “belief” in it. 

When put in the position of having to choose between being who they are and expressing what they know, free, reasoning people understandably opt for the former (Hameed 2015). Indeed, they can be expected to dedicate all of their reasoning proficiency to doing so: the higher the science literacy score of someone who subscribes to a religious cultural identity, the more likely he or she is to respond negatively to the “true-false” survey item “human beings evolved from an earlier species of animal” (Kahan 2015).

Our study captured this form of identity-protective cognition, too.

Again, science curiosity was positively correlated with levels of engagement and with levels of perceived validity for both evolution believers and evolution nonbelievers.  But this was not the case for science comprehension: as subjects’ scores on the Ordinary Science Intelligence assessment test (Kahan in press; Kahan, Peters et al. 2012) increased, evolution believers became more engaged and more convinced by the clip, while evolution disbelievers became less so.

This result was driven by the negative reactions of evolution nonbelievers who were simultaneously high in science comprehension and low in science curiosity. These study subjects were by far the least engaged by the clip and the least likely to view the evidence it presented as valid.

Nonbelievers who scored high on both the science curiosity and science comprehension scales, in contrast, were highly engaged by the documentary segment and highly likely to deem it a strong and convincing account of the origins of human color vision.

People use their reason for multiple ends. One of these is to form the dispositions and attitudes that enable them to reliably experience and express their commitment to a shared way of life.  Another of these is to attain goals—from personal health to professional success—that can be effectively achieved only with what science knows (Kahan 2015).

People who are curious about science have a goal that those who aren’t curious don’t: to satisfy their appetite to understand the insights generated by use of science’s signature methods of observation, measurement, and inference. ESFI Study 1 shows that such a person can satisfy that goal by enjoying a skillfully made science documentary about evolution even if she has an identity that is itself enabled by professing “disbelief” in it. 

In this respect, the results of the study are in line with those that show that individuals who hold a religious identity associated with disbelief in evolution can still learn what science knows about the natural history of human beings and, if they choose, even use that knowledge to engage in activities, such as the practice of medicine or scientific research, that are uniquely enabled by such knowledge (Lawson & Worsnop 1992; Everhart & Hameed 2013).

People who are low in science curiosity can be expected to engage information on science for one purpose only: to be the sorts of persons, culturally speaking, enabled by their respective states of “belief” or “disbelief.”  Making use of information for that end is another one of the things people can do even better if they possess the sort of reasoning proficiency associated with high science comprehension.  Accordingly, individuals who scored high in science comprehension but low in science curiosity (the two dispositions are only weakly correlated) predictably formed attitudes—of “engagement” and “acceptance”—that accurately manifested their cultural identities.

What to make of all this?

Well, for one thing, it is very much worth acknowledging that this interpretation of the data from ESFI Study No. 1, like all interpretations of any data, is provisional.  Additional studies, additional evidence might well furnish grounds for revising this understanding.

But it’s also very worth pointing out that the engagement enjoyed by science-curious evolution “nonbelievers,” as well as the experience of edification reflected in their response to the study's “accuracy” items, belies the simple—indeed simplistic—picture of how those who profess any particular “position” on evolution feel about science. 

In particular, it is wrong to infer that those who profess nonacceptance necessarily lack either the desire to know or the capacity to experience awe and wonder at the knowledge human beings have acquired through science, including the astonishing insights into their own natural history.

Because science curiosity does not discriminate on the basis of cultural identity, it would be a mistake for anyone who is genuinely committed to communicating science in a culturally pluralistic society to adopt a style of discourse that forces curious, reflective people to choose between satisfying their appetite to know what’s known to science and being the sort of person that they are.

References

Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Cultural Cognition Project, Evidence-based Science Filmmaking Initiative Study No. 1 (2015).

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evolution: Educ. & Outreach 6, 1-8 (2013).

Hameed, S. Making sense of Islamic creationism in Europe. Public Understanding of Science 24, 388-399 (2015).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. "Ordinary Science Intelligence": A Science Comprehension Measure for the Study of Risk and Science Communication. J. Risk Res. (in press).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Lawson, A.E. & Worsnop, W.A. Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching 29, 143-166 (1992).

Long, D.E. Evolution and religion in American education: an ethnography (Springer, Dordrecht, 2011).

Newport, F. In U.S. 42% Believe in Creationist View of Human Origins. Gallup (June 14, 2014), http://www.gallup.com/poll/170822/believe-creationist-view-human-origins.aspx.

Roos, J.M. Measuring science or religion? A measurement analysis of the National Science Foundation sponsored science literacy scale 2006–2010. Public Understanding of Science  (2012).

Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006).

Sunday
Jan102016

It's here: Annenberg/CCP "Evidence-based Science Filmmaking Initiative"

As the 14 billion readers of this blog can attest, when I say I'm going to do something "tomorrow" or "Monday" or "soon" or "June 31"--I'm not kidding around: I mean "tomorrow" or "Monday" or "soon" or "June 31" or whatever the heck I said.

So just as promised "yesterday" [not counting the weekend] & foreshadowed "not so long ago" [the conjugate of "soon"] ...

Here is CCP's new Evidence-based Science Filmmaking Initiative!  (aka "Science of science filmmaking" initiative--title soon to be put to a vote on this site)!

I'm not going to say a lot at this point.  For one thing, there's plenty of material emanating from the "project page," so you can just poke around yourself all day on your own.

Also there's the Initiative's first "Report."  It describes the results of a big preliminary study aimed at investigating the "Missing Audience Hypothesis" (a conjecture that was in fact featured in an earlier blog post and that provoked a pretty interesting discussion).

The study had all kinds of cool things in it, including a "Science Curiosity Scale" (SCS), which was self-consciously designed to remedy (or at least start to remedy) the defects in existing measures. As discussed previously in this blog (as I've mentioned innumerable times, I am loath to repeat myself in my posts, but I'll make an exception here), existing "science curiosity" measures are dominated by ill-formulated self-report items that exhibit lousy psychometric performance and that have never been shown to predict behavior evincing an interest in science.

Our "SCS" index includes some self-report measures (discretely bundled in with numerous other types of items of the sort that one might expect to see if one were participating in consumer-marketing survey), but it combines them them with performance and behavioral ones.

To validate SCS, we--the ESFI science filmmaking professionals and "science of science communication" researchers who collaborated on this study-- assessed its power to predict the level of subject engagement (also behaviorally measured) with a segment of a cool science documentary, Your Inner Fish, produced by ESFI collaborator Tangled Bank Studios.

The study also found some other really really cool things, including how engagement interacted with "belief in" evolution and science comprehension.  

But I'll spare you the details.

Why? Because they are summarized in the "project pages," and spelled out in even greater detail in the Report, which of course, you can download!

I'll also say more on various of these matters in subsequent posts, which will supplement the analyses and interpretations in the project pages and Report.  

In case you haven't noticed, I'm loath ever to repeat myself in this blog.  So I will hold back for now.

And say more "tomorrow."

But by all means, feel free to offer your own views on any of the materials that appear in the Report or the sections of the site dedicated to ESFI, whose members consist of both accomplished science communication professionals and empirical researchers, all eager to explore the integration of the science of science communication into the craft of science filmmaking.

Saturday
Jan092016

Weekend update: the anti- "fact inventory conception of science literacy" movement is gaining ground on Tea Party & Trascism [Trump+Fascism]; to eclipse them, the only thing it needs is a catchier name!

A friend pointed me toward this really interesting article:

The nerve of the piece is a critique of the "fact inventory" conception of science comprehension that informs the NSF's Science Indicators' battery:

The bigger issue, however, is whether we ought to call someone who gets those questions right “scientifically literate.” Scientific literacy has little to do with memorizing information and a lot to do with a rational approach to problems....

[T]he interpretation of data requires critical thinking.... Our schools don’t train people to be vigilant about avoiding errors such as confounding correlation and causation, however, nor do they do a good job of rooting out confirmation bias or teaching the basics of statistics and probabilities. All of this leads to the propagation of a lot of nonsense in the press and internet, and it leaves people vulnerable to the flood of “facts.”

It’s not possible for everyone—or anyone—to be sufficiently well trained in science to analyze data from multiple fields and come up with sound, independent interpretations. I spent decades in medical research, but I will never understand particle physics, and I’ve forgotten almost everything I ever learned about inorganic chemistry. It is possible, however, to learn enough about the powers and limitations of the scientific method to intelligently determine which claims made by scientists are likely to be true and which deserve skepticism. . . . Most importantly, if we want future generations to be truly scientifically literate, we should teach our children that science is not a collection of immutable facts but a method for temporarily setting aside some of our ubiquitous human frailties, our biases and irrationality, our longing to confirm our most comforting beliefs, our mental laziness. Facts can be used in the way a drunk uses a lamppost, for support. Science illuminates the universe.

Wow.

For sure I couldn't have said this better.  Anyone can confirm this for him- or herself by reviewing the various posts I've written criticizing the "fact inventory" conception of science literacy and defending an "ordinary science intelligence" alternative that features the types of critical reasoning proficiencies essential to recognizing and making use of valid scientific evidence.

Maybe I'm jumping the gun, but I hope this thoughtful and reflective article is a harbinger of more of the same, and the beginning of a wider discussion of this problem.

If I have any quibble with Teller's argument, though, it is over what the nature of the problem actually is.

Teller starts with the premise that the U.S. public has a poor comprehension of science and attributes this to the "fact inventory" conception of science literacy.

She might be right-- but I'm not sure.

I'm not sure, that is, that the American public's science comprehension is as poor as she assumes it is. The reason I'm not sure is that I don't think we've been assessing the general public's science comprehension with a valid measure of that capacity -- one that features critical reasoning proficiencies rather than a "fact inventory"!

Developing a public science comprehension measure focused on the reasoning proficiencies that Teller convincingly emphasizes has been one focus of CCP research over the last few years.  The progress made so far in that effort is reflected in the current version, "2.0," of the "Ordinary Science Intelligence" assessment test (Kahan in press).

As discussed in previous posts, OSI_2.0 doesn't try to certify respondents' acquisition of any set of canonical "factual" beliefs. 

Instead, it uses quantitative and critical reasoning items that are intended to assess a latent or unobserved disposition suited for recognizing and making appropriate use of valid empirical evidence in one's "ordinary," everyday life as a consumer, a participant in today's economy, and as a democratic citizen.

Since at least 1910 (my memory is hazy for events earlier than that), when Dewey published his famous "Science as Subject-Matter and as Method," the idea that science pedagogy should be focused on cultivating the distinctive reasoning proficiencies associated with making valid inferences from reliable observations has exerted a powerful force on the imaginations and motivations of a good number of educators and scholars (today I think of Jon Baron (1993, 2008) as the foremost champion of this view).

One thing they've learned is that imparting this sort of capacity is easier said than done!

But in any event, they are right -- as is Teller -- that this kind of thinking disposition is the proper object of science education.

The much more pedestrian point I find myself making now & again is that we really don't have a good general public measure of this capacity -- and so aren't even in a good position to figure out how well or poorly we are doing in equipping citizens with it.

Necessarily, too, without such a good measure, we won't be as smart as we ought to be about what contribution defects in science comprehension are making, if any, to public controversies over climate change, nuclear power, the HPV vaccine, and other issues that turn on decision-relevant science.

Teller cites the 2012 CCP study that found that higher science literacy is associated with greater polarization, not less, on climate change risks (nuclear power ones too).

I think that study helps to show that this sort of conflict is not plausibly attributed to defects in science comprehension. Precisely b/c I and my collaborators agree with Teller that a "fact inventory" conception of "science literacy" is defective, we used a science comprehension measure-- OSI_1.0-- that combined certain NSF Indicator "basic fact" items with a Numeracy battery, which has been shown to be highly effective in measuring the capacity of ordinary members of the public & others to reason well with quantitative information. 

People who scored high on that critical reasoning measure still polarized on climate change.

And the same is true of people who score the highest even on the reasoning-proficiency-centered OSI_2.0.


Most people, sadly, don't know very much about the science of climate change.

But the few who actually can reliably identify its causes and consequences (as measured by version 1.0 of the "Ordinary Climate Science Intelligence" test, an assessment based on "climate science literacy" items drawn from NASA, NOAA, and the IPCC) are also the most politically polarized on the question of whether human activity is the principal cause of climate change -- or indeed on whether climate change is happening at all (Kahan 2015a).

That evidence has led me to conclude that the conflict over climate change (not to mention numerous other disputed issues of science) isn't about what people know.  It is about who they are: the "beliefs" people form on these issues are ones suited to forming affective orientations that effectively signal their membership in & loyalty to groups embroiled in a nasty form of cultural status competition....

That problem isn't being caused by any deficiency  in science education in this country.

On the contrary, that problem is preventing our democracy from getting the benefit of whatever scientific knowledge & reasoning capacity we have managed to impart in our citizens.

If we want enlightened democracy, we better figure out how to extricate science from these sorts of ugly, illiberal, reason-eviscerating forms of cultural conflict (Kahan 2015b).

Of course, these are provisional conclusions, informed by what I regard as the best available evidence.

But the best evidence available definitely isn't as good as it should be for exactly the reason that Teller describes so articulately: we don't possess as good a measure of public science comprehension as we ought to have.

This is how I put it at the end of “Ordinary Science Intelligence”: A Science Comprehension Measure for Study of Risk and Science Communication, with Notes on Evolution and Climate Change:

The scale development exercise that generated OSI_2.0 is offered as an admittedly modest contribution to an objective of grand dimensions. How ordinary citizens come to know what is collectively known by science is simultaneously a mystery that excites deep scholarly curiosity and a practical problem that motivates urgent attention by those charged with assuring democratic societies make effective use of the collective knowledge at their disposal. An appropriately discerning and focused instrument for measuring individual differences in the cognitive capacities essential to recognizing what is known to science is essential to progress in these convergent inquiries.

The claim made on behalf of OSI_2.0 is not that it fully satisfies this need. It is presented instead to show the large degree of progress that can be made toward creating such an instrument, and the likely advances in insight that can be realized in the interim, if scholars studying risk perception and science communication make adapting and refining admittedly imperfect existing measures, rather than passively employing them as they are, a routine component of their ongoing explorations.

Not as articulate as Teller-- but the best I can do! 

And hey-- if my best motivates others who can do a better job still, then I figure I'm doing my part.

References 

Baron, J. Why Teach Thinking?‐An Essay. Applied Psychology 42, 191-214 (1993).

Baron, J. Thinking and deciding (Cambridge University Press, New York, 2008).

Dewey, J. Science as Subject-matter and as Method. Science 31, 121-127 (1910).

Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Study of Risk and Science Communication, with Notes on Evolution and Climate Change. J. Risk Res. (in press).

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm. 14, 1-12 (2015b).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015a).

Friday
Jan082016

Prepare yourself ... CCP's Evidence-based Science Filmmaking Initiative

I told you "more soon"; well soon is going to be like Monday ...

 

Thursday
Jan072016

Still time to get your "entry" in for MAPKIA #939! But hurry!

In recognition of the impact that the Macao internet outage has had on the posting of entries in the ongoing "MAPKIA!" contest, we are extending the time for posting entries.

Besides, literally 10^3s (figuratively speaking) of entries have been delivered offline by emails, fedex deliveries, telegraphs, mental telepathy & other alternative channels during the outage (thank goodness they found the squirrel who was gnawing on the internet tubes and relocated him to one of the nation's 10^3s (figuratively speaking) "wildlife" preserves). It's going to take me a while to process all of them!

So just go to the "comments" section for the post & make your own predictions (supported by a "cogent" theory)--right now, while there is still time to compete for the fame & notoriety--not to mention cool prizes!--that winning a MAPKIA confers!

Wednesday
Jan062016

Join the SBST Team: Neither nudge nor shove will stop us from improving your life (whether you are aware of it or not) in 2016 & beyond!

Wow, I got a cool email announcement about a "one year fellowship" position in the White House Social and Behavioral Sciences Team (SBST). 

The "Team's" mission is to use behavioral economics--primarily of the "nudge" variety--to steer people into making decisons that mesh better with one or another government program aimed at improving a variety of social and economic outcomes, from the proportion of peple obtaining higher education to the proportion of small businesses that keep afloat; from living a more healthy life to availing oneself of myriad govt benefits etc.

Interesting stuff.

But what struck me is the casual assumption that SBST is going to happily outlive the Obama Administration.

Obama is a classic "University of Chicago Democrat"--someone who substitutes for the old-style passion of New Deal liberalism a cool confidence in technocratic management strategies, many of which tweak but don't fundamentally question "private orderings" as a means of promoting collective wellbeing (distributional justice, an aim of old-style New Deal Democrat liberalism just as fundamental as collective well-being, has shrunk in importance to near invisibility in the U of C Democratic program).

This is Cass Sunstein's liberalism, not John Kenneth Galbraith's, much less Ted Kennedy's!

But the vision of U of C Democrats is if anything even more obnoxious to the "Chicago School"  neo-liberals and the dyed-in-the-wool social conservatives that cohabit, albeit often uneasily, in the Republican Party.

U of C Democrats say, "hey, we are not only going to take back some share of the profits you've made by exploiting public goods ('you didn't build that!') but we're going to do so with 'strategies' that bypass your reason, so you don't really notice & fail, as a result of 'bounded rationality', to contribute your fair share."

It's hard to think of a program more likely to make the descendants of Hayek & Ayn Rand (what a weird marriage! & what a weird brood of offspring!) see red(s)!

That's one of things that makes the "Fellowship" so damn interesting!

"One year, beginnign in October 2016," you say...

The basis for the "SBST" is an Obama Executive Order that directs all executive agencies to "identify policies, programs, and operations where applying behavioral science insights may yield substantial improvements in public welfare, program outcomes, and program cost effectiveness" and " develop strategies for applying behavioral science insights to programs and, where possible, rigorously test and evaluate the impact of these insights."

To implement this directive, the Nudge Order directs the SBST (also created by the Obama White House) to issue "agencies ... advice and policy guidance to help them execute policy objectives."

This "Nudge Order" (let's call it that; snappier than "Executive Order--Using Behavioral Science Insights to Better Serve the American People") seems to be patterned on the Reagan Executive Order that mandated all executive agencies (only a fraction, actually of the agencies that have been authorized by Congress to engage in significant regulatory activity) submit their proposed regulations to the Office of Management & Um... are those "scrubbing bubbles"?...Budge for "cost benefit analysis."

Decried at the time by traditional New Deal liberal Democrats, that Reagan order is one U of C Democrats have actually grown to like a lot & have even proposed extending!

But I have a feeling that the next President, if he or she is a Republican, isn't going to reciprocate the love when it comes to Obama's "Nudge Order."

Pretty clear, I think, that neither a President Trump nor a President Cruz--both of whom seem to look to a very different source for their "strategies" for "managing" public opinion-- would have much use for the Nudge Order or the apparatus that carries it out.

But I doubt that a President Fiorina, a President Rubio, a President Bush, a President Christie, a President Carson, or a President Paul would either. (I'm sure I'm forgetting somebody-- but who has the memory capacity to keep track of all of them?)

I don't know what a President H.R. Clinton would think--but I would note that President W.J. Clinton was the first & remains the model U of C Democrat President.

I know for sure what President Sanders would do w/ the Nudge Order and SBST--and well before Oct. 2017.

So, this is a cool position -- not only b/c the normal job description is interesting but b/c it's certain to be interesting to be "on hand" to witness the Nudge Order "in transition."

Oh, but I've decided not to apply.  I like what I'm doing just fine!

Tuesday
Jan052016

MAPKIA episode #939: What does the Pew "Malthusian Worldview" item predict?!

Winner's prize: Vintage Cultural Cognition Project Lab Jersey! (Subject to availability)

HEY EVERYONE--guess what!

It's time for the first "MAPKIA!" ["Make a prediction, know it all!"] episode of 2016!

Yup--this wildly popular feature of the CCP Blog—the #1 most popular game show in Macao for two years running—has been renewed for another season!

Score!

It’s of course inconceivable that anyone doesn’t know the rules, and I don’t mean to insult anyone’s intelligence, but legal niceties do require me to post them before every contest. So here they are:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data. Then, you, the players, will make predictions and explain the basis for them. The answer will be posted "tomorrow." The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or this or some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation. (Cogency will be judged, of course, by a panel of experts.)

Actually, though, the rules are being significantly modified for this particular episode!  The question I’m going to pose has to be answered with data from Pew’s big hit “Public vs. the ‘Scientists’ ” Report from last yr.

As you likely all realize, I’ve been going on & on since last yr about the fun that can be had poking around in the “public” portion of Pew’s report.

In previous posts, I showed that the data in Pew’s study (for the public rspts; the data for the AAAS members who formed the “scientist” sample hasn’t been released, at least not yet. . .) corroborates the usual story about politically disputed risks: namely, that as science literacy goes up, cultural polarization (measured by one or another proxy for cultural identity) intensifies in magnitude.

Well, the study also has some interesting “science attitude” items, one of which is this:

I’m going to call this the Pew “Malthusian worldview” item.

“What do you think,” the question effectively asks,

are we in fact just like all the other stupid animals who keep multiplying in number and engorging themselves on all their foodstuffs and other necessary resources until they crash, calamitously, over the top of the Malthusian curve in some massive die off? Or are human beings special precisely because their reason allows them to keep shifting the curve through technological innovation?

Consider climate change to be history’s “biggest ‘I told you so’ ” confirmation of what “Marx wrote about capitalism’s ‘irreparable rift’ with ‘the natural laws of life itself’ ” and what “indigenous peoples" have been "warning[] about the dangers of disrespecting ‘Mother Earth’ [since] long before that”? 

Then answer “2” is for (or just is) you!

Alternatively, when you hear someone talking like that, do you want to let out a primal WME “hell noooo!”?  Are you thinking,

Right! These are the same fools who told us that we couldn’t have a city more populous than 200,000 people or we’d be choking to death on our own excrement! Well, thanks to the advent of modern sanitation systems, reinforced with related advances in public health, we can safely inhabit cities orders of magnitude larger and more dense than the ones whose residents regularly succumbed to devastating outbreaks of cholera in the 19th century.

Sure, we'll face some new challenges but we’ll just blast our shit into outer space & everything will be fine-- just you watch & see!

 Hey—did you hear about those cool mirror-coated nanotechnology flying saucer drone things that automatically levitate up to just the right altitude to reflect the sunlight necessary to neutralize climate change & keep temperatures here on earth a comfortable 72 degrees everywhere yr ‘round?

This changes ... nothing!

That's answer number "1" talking!

So the question is, should we expect the Pew item to tap into those two opposing mindsets?

Specifically,

How powerfully (if at all) will responses to the Pew Malthusian Worldview item predict beliefs and attitudes toward technological and environmental risks like climate change, fracking, nuclear power, and GM foods?  Will it be a stronger predictor than political partisanship? Will responses interact with—or essentially amplify—the explanatory power of political ideology and party identification? 

What will the relationship be between the Malthusian Worldview item and science literacy? Will responses be correlated with it—and if so in which direction? Will higher science literacy magnify the correlation between responses to the Malthusian Worldview item and opposing perceptions of environmental and technological risks--just as higher science comprehension magnifies cultural polarization on climate change, nuclear power, fracking, and the like?
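
And for anyone who wants a head start on a testing strategy, here is a minimal sketch of the kind of model one might fit. Everything here is hypothetical--the variable names are stand-ins for the actual Pew dataset items, and the file name is a placeholder:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-ins for the actual Pew items:
#   gw_risk  -- perceived climate change risk
#   malthus  -- 1 = gave the "Malthusian" response to the Pew item
#   conserv  -- ideology/party identification (higher = more conservative)
#   scilit   -- science literacy score
pew = pd.read_csv("pew_public_2015.csv")  # placeholder file name

# Does the Malthusian item predict risk perception over & above partisanship,
# and does science literacy magnify either relationship?
m = smf.ols("gw_risk ~ malthus * scilit + conserv * scilit", data=pew).fit()
print(m.summary())
```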

Perhaps my framing of the question implies an answer.  But if you think I have one, then obviously mine could be wrong!

“Make a prediction, know it all”—and explain cogently the reasoning for it and how one might test your conjecture with Pew dataset items, which have been featured in previous posts and are set forth in their entirety at the Pew site.

Here’s your chance not only to win a great prize but also to demonstrate to all the schoolchildren in Macao and to billions of other curious and reflective people everywhere that you, unlike everybody else, really know what the hell you are talking about when it comes to making sense of public perceptions of risk.

Just post your prediction, & take a stab at specifying a testing strategy, in a comment below.  I'll do the analyses & we'll see what you got!

It's that friggin' simple!

Ready ... set ... MAPKIA!

Sunday
Jan032016

"Enough already" vs. "can I have some more, pls": science comprehension & cultural polarization

The motivation for this post is to respond to commentators--@Joshua & @HalMorris--who wonder, reasonably, whether there’s really much point in continuing to examine the relationship between cultural cognition & like mechanisms, on the one hand, and one or another element of science comprehension (cognitive reflection, numeracy, “knowledge of basic facts,” etc.), on the other.

They acknowledge that evidence that cultural polarization grows in step with proficiency in critical reasoning is useful for, say, discrediting positions like the “knowledge deficit” theory (the view that public conflicts over policy-relevant science are a consequence of public unfamiliarity with the relevant evidence) and the “asymmetry thesis” (the position that attributes such conflicts to forms of dogmatic thinking distinctive of “right wing” ideology).

But haven’t all those who are amenable to being persuaded by evidence on these points gotten the message by now, they ask?

I agree that the persistence of the “knowledge deficit” view and to a lesser extent the “asymmetry thesis” (which I do think is weakly supported but not nearly so unworthy of being entertained as “knowledge deficit” arguments) likely don’t justify sustained efforts at this point to probe the relationship between cultural cognition and critical reasoning.

But I disagree that those are the only reasons for continuing with—indeed, intensifying—such research.

On the contrary, I think focusing on science comprehension is critical to understanding cultural cognition; to forming an accurate moral assessment of it; and to identifying appropriate responses for managing its potential to interfere with free and reasoning citizens’ attainment of their ends, both individual and collective (Kahan 2015a, 2015b).

I should work out more systematically how to convey the basis of this conviction.

But for now, consider these “two conceptions” of cultural cognition and rationality. Maybe doing so will foreshadow the more complete account—or better still, provoke you into helping me to work this issue out in a way that satisfies us both.

1. Cultural cognition as bounded rationality. Persistent public conflict over societal risks (e.g., climate change, nuclear waste disposal, private gun possession, HPV immunization of schoolgirls, etc.) is frequently attributed to overreliance on heuristic, “System 1” as opposed to conscious, effortful “System 2” information processing (e.g., Weber 2006; Sunstein 2005). But in fact, the dynamics that make up the standard “bounded rationality” menagerie—from the “availability effect” to “base rate neglect,” from the “affect heuristic” to the “conjunction fallacy”—apply to people of all manner of political predispositions, and thus don’t on their own cogently explain the most salient feature of public conflicts over societal risks: that people are not simply “confused” about the facts on these issues but systematically divided on them on political grounds.

One account of cultural cognition views it as the dynamic that transforms the mechanisms of “bounded rationality” into fonts of political polarization (Kahan, Slovic, Braman & Gastil 2006; Kahan 2012). Cultural predispositions thus determine the valence of the sensibilities that govern information processing in the manner contemplated by the “affect heuristic” (Peters, Burraston & Mertz 2004; Slovic & Peters 1998). The same goes for the “availability effect”: the stake individuals have in forming “beliefs” that express and reinforce their connection to cultural groups determines what sorts of risk-relevant facts they notice, what significance they attach to them, and how readily they recall them (Kahan, Jenkins-Smith & Braman 2011). The motivation to form identity-congruent beliefs drives biased search and biased assimilation of information (Kahan, Braman, Cohen, Gastil & Slovic 2010)—not only on existing contested issues but on novel ones (Kahan, Braman, Slovic, Gastil & Cohen 2009).

2. Cultural cognition as expressive rationality. Recent scholarship on cultural cognition, however, seems to complicate if not in fact contradict this account!

By treating politically motivated reasoning—of which “cultural cognition” is one operationalization (Kahan in pressb)—as in effect a “moderator” of other more familiar cognitive biases, the “bounded rationality” conception of it implies that cultural cognition is a consequence of over-reliance on heuristic information processing (e.g., Taber & Lodge 2013; Sunstein 2006). If this understanding is correct, then we should expect cultural cognition to be mitigated by proficiency in the sorts of reasoning dispositions essential to conscious, effortful “System 2” information processing.

But in fact, a growing body of evidence suggests that System 2 reasoning dispositions magnify rather than reduce cultural cognition! Experiments show that individuals high in cognitive reflection and numeracy use their distinctive proficiencies to discern what the significance of crediting complex information is for positions associated with their cultural or political identities (Kahan 2013; Kahan, Peters, Dawson & Slovic 2013).

As a result, they more consistently credit information that is in fact identity-affirming and discount information that is identity-threatening. If this is how individuals reason outside of lab conditions, then we should expect to see that individuals highest in the capacities and dispositions necessary to make sense of quantitative information should be the most politically polarized on facts that have become invested with identity-defining significance. And we do see that—on climate change, nuclear power, gun control, and other issues (Kahan 2015; Kahan, Peters, et al., 2012).

This work supports an alternative “expressive” conception of cultural cognition. On this account, cultural cognition is not a consequence of “bounded rationality.” It is a rational form of engaging information, one suited to forming the affective dispositions that reliably express a person’s group allegiances (cf. Lessig 1995; Akerlof & Kranton 2000).

“Expressing group allegiances” is not just one thing ordinary people do with information on societally contested risks. It is pretty much the only thing they do. The personal “beliefs” ordinary people form on issues like climate change or gun control or nuclear power etc. don’t otherwise have any impact on them. Ordinary individuals just don’t matter enough, as individuals, for anything they do based on their view of the facts on these issues to affect the level of risk they are exposed to or the policies that get adopted to abate them (Kahan 2013, in press). In contrast, it is in fact critical to ordinary people’s well-being—psychic, emotional, and material—to evince attitudes that convey their commitment to their identity-defining groups in the myriad everyday settings in which they can be confident those around them will be assessing their character in this way (Kahan in pressb).

* * * * *

At one point I thought the first conception of cultural cognition was right. Indeed, it didn’t even occur to me, early on, that the second conception existed!

But now I believe the second view is almost certainly right. And that no account that fails to recognize that cultural cognition is integral to individual rationality can possibly make sense of it or manage successfully the influences that create the conflict between expressive rationality and collective rationality that give rise to cultural polarization over policy-relevant facts.

If that’s right, then in fact the continued focus on the interaction of cultural cognition and critical reasoning proficiencies will remain essential.

So is it right? Maybe not; but the only way to figure that out also is to keep probing this interaction.

References

Akerlof, G.A. & Kranton, R.E. Economics and Identity. Quarterly Journal of Economics 115, 715-753 (2000).

Kahan, D.M. What is the “science of science communication”? J. Sci. Comm. 14, 1-12 (2015b).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015a).

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (ed. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

Kahan, D.M. The expressive rationality of inaccurate perceptions of fact. Brain & Behav. Sci. (in press_a).

Kahan, D.M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences (in press_b).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Slovic, P., Braman, D. & Gastil, J. Fear of Democracy: A Cultural Evaluation of Sunstein on Risk. Harvard Law Review 119, 1071-1109 (2006).

Lessig, L. The Regulation of Social Meaning. U. Chi. L. Rev. 62, 943-1045 (1995).

Lodge, M. & Taber, C.S. The rationalizing voter (Cambridge University Press, New York, 2013).

Peters, E.M., Burraston, B. & Mertz, C.K. An Emotion-Based Model of Risk Perception and Stigma Susceptibility: Cognitive Appraisals of Emotion, Affective Reactivity, Worldviews, and Risk Perceptions in the Generation of Technological Stigma. Risk Analysis 24, 1349-1367 (2004).

Slovic, P. & Peters, E. The importance of worldviews in risk perception. Risk Decision and Policy 3, 165-170 (1998).

Sunstein, C.R. Misfearing: A reply. Harvard Law Review 119, 1110-1125 (2006).

Weber, E. Experience-Based and Description-Based Perceptions of Long-Term Risk: Why Global Warming does not Scare us (Yet). Climatic Change 77, 103-120 (2006).

Saturday
Jan022016

"Don't jump"--weekend reading: Do judges, loan officers, and baseball umpires suffer from the "gambler's fallacy"?

I know how desperately bored the 14 billion regular subscribers to this blog can get on weekends, and the resulting toll this can exact on the mental health of many times that number of people due to the contagious nature of affective funks. So one of my NY's resolutions is to try to supply subscribers with things to read that can distract them from the frustration of being momentarily shielded from the relentless onslaught of real-world obligation they happily confront during the workweek.

So how about this:

We were all so entertained last year by Miller & Sanjurjo’s “Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers,” which taught us something profound about the peculiar vulnerability to error that super smart people can acquire as a result of teaching themselves to avoid common errors associated with interpreting random events.

So I thought, hey, maybe it would be fun for us to take a look at other efforts that try to "expose" non-randomness of events that smart people might be inclined to think are random.

Here's one:

Actually, I'm not sure this is really a paper about the randomness-detection blindspots of people who are really good at detecting probability blindspots in ordinary folks.

It's more in the nature of how expert judgment can be subverted by a run-of-the-mill (of-the-mine?) cognitive bias involving randomness--here the "gambler's fallacy": the expectation that independent random events will behave interdependently in a manner consistent with their relative frequency; or more plainly, that an outcome like "heads" in the flipping of a coin can become "due" as a string of alternative outcomes in independent events--"tails" in previous tosses--increases in length.

CMS present data suggesting that immigration judges, loan officers, and baseball umpires all display this pattern.  That is, all of these professional decisionmakers become more likely than one would expect by chance to make a particular determination--grant an asylum petition; disapprove a loan application; call a "strike"--after a series of previous opposing determinations ("deny," "approve," "ball," etc.).

If you liked puzzling over the M&S paper, I predict you'll like puzzling through this one.

In figuring out the null, CMS get that it is a mistake, actually, to model the outcomes in question as reflecting a binomial distribution if one is sampling from a finite sequence of past events.  Binary outcomes that occur independently across an indefinite series of trials (i.e., outcomes generated by a Bernoulli process) are not independent when one samples from a finite sequence of past trials.

In other words, CMS avoid the error that M&S showed the authors of the "hot hand fallacy" studies made.

But figuring out how to do the analysis in a way that avoids this mistake is damn tricky.

If one samples from a finite sequence of events generated by a Bernoulli process, what should the null be for determining whether the probability of a particular outcome following a string of opposing outcomes was "higher" than what could have been expected to occur by chance?

One could figure that out mathematically....  But it's a hell of a lot easier to do it by simulation.
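
Here's a minimal sketch of what I mean (mine, purely for illustration--the sequence length and streak size are arbitrary choices, and CMS's actual simulations are more involved). It estimates the null by brute force: generate lots of finite fair-coin sequences, and within each compute the proportion of "heads" among flips immediately preceded by a streak of k "heads":

```python
import numpy as np

rng = np.random.default_rng(0)

def prop_heads_after_heads_streak(seq, k):
    """Within one finite 0/1 sequence, the proportion of heads (1) among
    flips immediately preceded by a streak of k heads."""
    hits = opps = streak = 0
    for x in seq:
        if streak >= k:          # current flip follows a streak of k heads
            opps += 1
            hits += x
        streak = streak + 1 if x == 1 else 0
    return hits / opps if opps else np.nan

# Average the within-sequence proportion across many finite sequences.
# Even though the coin is fair, the expected proportion comes out below
# 0.5 -- the finite-sample selection effect at the heart of the M&S paper.
sims = [prop_heads_after_heads_streak(rng.integers(0, 2, 100), 3)
        for _ in range(20000)]
print(np.nanmean(sims))
```

That simulated number--noticeably under 0.5--is the right benchmark for asking whether a decisionmaker is over-alternating; the naive 0.5 is not.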

Another tricky thing here is whether the types of events decisionmakers are evaluating here--the merit of immigration petitions, the creditworthiness of loan applicants, and the location of baseball pitches--really are i.i.d. ("independent and identically distributed").

Actually, no one could plausibly think "balls" and "strikes" in baseball are.

A pitcher's decision to throw a "strike" (or attempt to throw one) will be influenced by myriad factors, including the pitch count--i.e., the running tally of "balls" and "strikes" for the current batter, a figure that determines how likely the batter is to "walk" (be allowed to advance to "first base"; shit, do I really need to try to define this stuff? Who the hell doesn't understand baseball?!) or "strike out" on the next pitch.

CMS diligently try to "take account" of the "non-independence" of "balls" and "strikes" in baseball, and like potential influences in the context of judicial decisionmaking and loan applications, in their statistical models. 

But whether they have done so correctly--or done so with the degree of precision necessary to disentangle the impact of those influences from the hypothesized tendency of these decisionmakers to impose on outcomes the sort of constrained variance that would be the signature of the "gambler's fallacy"--is definitely open to reasonable debate.

Maybe in trying to sort all this out, CMS are also making some errors about randomness that we could expect to see only in super smart people who have trained themselves not to make simple errors?

I dunno!

But b/c I love all 14 billion of you regular CCP subscribers so much, and am so concerned about your mental wellbeing, I'm calling your attention to this paper & asking you-- what do you think?

Friday
Jan012016

Critical, must-do 2016 CCP NY resolutions!