
Friday
May 26, 2017

Do conservatives become more concerned with climate risks as their trust in science increases?

It is almost universally assumed that political polarization over societal risks like climate change originates in differing levels of trust in scientists: left-leaning people believe in human-caused climate change, it is said, because they have a greater degree of confidence in scientists; so-called “conservative Republicans,” in contrast, are said to distrust science and scientists and thus are predisposed to climate skepticism.

But is this right? Or are we looking at another form of the dreaded WEKS disease?

Well, here’s a simple test based on GSS data.

Using the 2010 & 2016 datasets (the only years in which the survey included the climate-risk outcome variable), I cobbled together a decent “trust in science” scale:

scibnfts5: “People have frequently noted that scientific research has produced benefits and harmful results. Would you say that, on balance, the benefits of scientific research have outweighed the harmful results, or have the harmful results of scientific research been greater than its benefits?” [5 points: strongly in favor of beneficial results . . . strongly in favor of harmful results]

consci: “As far as the people running [the science community] are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?”

scientgo: “Scientific researchers are dedicated people who work for the good of humanity.” [4 points: strongly agree . . . strongly disagree]

scienthe: “Scientists are helping to solve challenging problems.” [4 points: strongly agree . . . strongly disagree]

nextgen: “Because of science and technology, there will be more opportunities for the next generation.” [4 points: strongly agree . . . strongly disagree]

advfont: “Even if it brings no immediate benefits, scientific research that advances the frontiers of knowledge is necessary and should be supported by the federal government.” [4 points: strongly agree . . . strongly disagree]

scientbe: “Most scientists want to work on things that will make life better for the average person.” [4 points: strongly agree . . . strongly disagree]

These items formed a single factor and had a Cronbach’s α of 0.72.  Not bad. I also reverse-coded items as necessary so that for every item a higher score denotes more rather than less trust in science.
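For readers who want to audit this scale-building step, here's a minimal sketch of how the reverse coding and Cronbach's α computation work. The data below are simulated stand-ins (the variable names follow the post, but the numbers are invented):

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated stand-ins for the seven GSS items (names follow the post).
rng = np.random.default_rng(0)
latent_trust = rng.normal(size=500)
cols = ["scibnfts5", "consci", "scientgo", "scienthe",
        "nextgen", "advfont", "scientbe"]
df = pd.DataFrame({c: latent_trust + rng.normal(scale=1.2, size=500)
                   for c in cols})

# Reverse coding: on a real 1-4 agree/disagree item, (5 - score) flips the
# scale so that a higher number always means more trust, e.g.:
# df["scientgo"] = 5 - df["scientgo"]

print(round(cronbach_alpha(df), 2))
```

With seven items sharing one latent factor, α lands in the same general neighborhood as the 0.72 reported for the real scale.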

Surprisingly, the GSS has never had a particularly good set of climate-change “belief” and risk-perception items. Nevertheless, it has sometimes fielded this question: 

TEMPGEN: “In general, do you think that a rise in the world's temperature caused by the ‘greenhouse effect’ is extremely dangerous for the environment . . . not dangerous at all for the environment?” [5 points: extremely dangerous for the environment . . . not dangerous at all for the environment]

I don’t love this item but it is a cousin of the revered Industrial Strength Risk Perception Measure, so I decided I’d give it a whirl. 

I then did some regressions (after of course, eyeballing the raw data).

In the first model, I regressed a reverse-coded TEMPGEN on the science-trust scale and “left_right,” a composite political-outlook scale formed by aggregating the study participants’ self-reported political orientation measures (α = 0.66).  As expected, higher scores on the science-trust scale predicted responses of “very dangerous” and “extremely dangerous,” while left_right predicted responses of “not very dangerous” and “not dangerous at all.”

If one stops there, the result is an affirmation of  the common wisdom.  Both political outlooks and trust in science have the signs one would expect, and if one were to add their coefficients, one could make claims about how much more likely relatively conservative respondents would be to see greater risk if only they could be made to trust science more.

But this form of analysis is incomplete.  In particular, it assumes that the contribution trust in science and left_right make to perceptions of the danger of climate change are (once their covariance is partialed out) independent and linear and hence additive.

But why assume that trust in science has the same effect regardless of respondents’ ideologies? After all, we know that science comprehension’s impact on perceived climate-change risks varies in relation to ideology, magnifying polarization.  Shouldn’t we at least check to see if there is a comparable  interaction between political outlooks and trust?

So I created a cross-product interaction term and added it to form another regression model.  And sure enough, there was an interaction, one predicting in particular that we ought to expect even more partisan polarization as right- and left-leaning individuals' scores on the trust-in-science scale increased.
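Mechanically, that model is just OLS with a cross-product column added to the design matrix. Here's a minimal sketch with simulated data; the variable names are mine, not the GSS's, and the coefficients are invented to mimic the reported pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
trust = rng.normal(size=n)        # trust-in-science factor score
left_right = rng.normal(size=n)   # higher = more conservative
# Invented coefficients mimicking the reported pattern: trust raises risk
# concern, conservatism lowers it, and the two interact negatively.
risk = (0.4 * trust - 0.5 * left_right
        - 0.3 * trust * left_right + rng.normal(size=n))

# Design matrix: intercept, main effects, and the cross-product interaction.
X = np.column_stack([np.ones(n), trust, left_right, trust * left_right])
b, *_ = np.linalg.lstsq(X, risk, rcond=None)
print(dict(zip(["const", "trust", "left_right", "trust_x_lr"], b.round(2))))
```

Because the marginal effect of trust in this toy model is roughly 0.4 − 0.3·left_right, it shrinks toward zero for right-leaning respondents -- which is exactly the sort of interaction the additive model assumes away.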

Here’s what the interaction looks like:


Geez!  Higher trust promotes greater risk concern for left-leaning respondents but has essentially no effect whatsoever on right-leaning ones.

What to say?...

Well, one possibility that occurs to me is based on biased perceptions of scientific consensus.  Experimental data suggest that ordinary persons of diverse outlooks are more likely to notice, assign significance to, and recall instances in which a scientist took a position consistent with their cultural group's than instances in which a scientist took the opposing position.  As a result, people end up with mental inventories of expert opinion skewed toward the position that predominates in their group. If that's how they perceive the weight of expert opinion, why would they distrust scientists?

But I dunno. This is just post hoc speculation.

Tell me what you think the answer is – and better still, how one could design an experiment to test your favored conjecture against whatever you think the second most likely answer is.

Tuesday
Apr 18, 2017

Last session in Science of Science Communication 2017

Not usually where we end, but the frolics & detours along the way were worthwhile

Thursday
Apr 6, 2017

*Now* where am I? Oklahoma City!

Am heading out early (today) to see what cool things the researchers at OU Center for Risk and Crisis Management are up to!

Will send postcards.

Friday
Jun 3, 2016

What does "believing/disbelieving in" add to what one knows is known by science? ... a fragment

From something I'm working on (and related to "yesterday's" post) . . .

4.3. “Believing in” what one knows is known by science

People who use their reason to form identity-expressive beliefs can also use it to acquire and reveal knowledge of what science knows. A bright “evolution disbelieving” high school student intent on being admitted to an undergraduate veterinary program, for example, might readily get a perfect score on an Advanced Placement biology exam (Hermann 2012).

It’s tempting, of course, to say that the “knowledge” one evinces in a standardized science test is analytically independent of one's “belief” in the propositions that one “knows.”  This claim isn’t necessarily wrong, but it is highly likely to reflect confusion.  

Imagine a test-taker who says, “I know science’s position on the natural history of human beings: that they evolved from an earlier species of animal. And I’ll tell you something else: I believe it, too.”  What exactly is added by that person’s profession of belief?

The answer “his assent to a factual proposition about the origin of our species” reflects confusion. There is no plausible psychological picture of the contents of the human mind that sees it as containing a belief registry stocked with bare empirical propositions set to “on-off,” or even probabilistic “pr=0.x,” states.  Minds consist of routines—clusters of affective orientations, conscious evaluations, desires, recollections, inferential abilities, and the like—suited for doing things.  Beliefs are elements of such clusters. They are usefully understood as action-enabling states—affective stances toward factual propositions that reliably summon the mental routine geared toward acting in some way that depends on the truth of those propositions (Peirce 1877; Braithwaite 1932, 1946; Hetherington 2011).

In the case of our imagined test-taker, a mental state answering to exactly this description contributed to his supplying the correct response to the assessment item.  If that’s the mental object the test-taker had in mind when he said, “and I believe it, too!,” then his profession of belief furnished no insight into the contents of his mind that we didn’t already have by virtue of his answering the question correctly. So “nothing” is one plausible answer to the question what did it add when he told us he “believed” in evolution.

It’s possible, though, that the statement did add something.  But for the reasons just set forth, the added information would have to relate to some additional action that is enabled by his holding such a belief. One such thing enabled by belief in evolution is being a particular kind of person.  Assent to science’s account of the natural history of human beings has a social meaning that marks a person out as holding certain sorts of attitudes and commitments; a belief in evolution reliably summons behavior evincing such assent on occasions in which a person has a stake in experiencing that identity or enabling others to discern that he does.

Indeed, for the overwhelming majority of people who believe in evolution, having that sort of identity is the only thing they are conveying to us when they profess their belief. They certainly aren’t revealing to us that they possess the mental capacities and motivations necessary to answer even a basic high-school biology exam question on evolution correctly: there is zero correlation between professions of belief and even a rudimentary understanding of random mutation, natural variance, and natural selection (Shtulman 2006; Demastes, Settlage & Good 1995; Bishop & Anderson 1990).

Precisely because one test-taker’s profession of “belief” adds nothing to any assessment of knowledge of what science knows, another's profession of “disbelief” doesn’t subtract anything.  One who correctly answers the exam question has evinced not only knowledge but also her possession of the mental capacities and motivations necessary to convey such knowledge.

When a test-taker says “I know what science thinks about the natural history of human beings—but you better realize, I don’t believe it,” then it is pretty obvious what she is doing: expressing her identity as a member of a community for whom disbelief is a defining attribute. The very occasion for doing so might well be that she was put in a position where revealing her knowledge of what science knows generated doubt about who she is.

But it remains the case that the mental states and motivations that she used to learn and convey what science knows, on the one hand, and the mental states and motivations she is using to experience a particular cultural identity, on the other, are entirely different things (Everhart & Hameed 2013; cf. DiSessa 1982).  Neither tells us whether she will use what science knows about evolution to do other things that can be done only with such knowledge—like become a veterinarian, say, or enjoy a science documentary on evolution (CCP 2016). To figure out if she believes in evolution for those purposes—despite her not believing in it to be who she is—we’d have to observe what she does in those settings.

All of these same points apply to the response that study subjects give when they respond to a valid measure of their comprehension of climate science.  That is, their professions of “belief” and “disbelief” in the propositions that figure in the assessment items neither add to nor subtract from the inference that they have (or don’t have) the capacities and motivations necessary to answer the question correctly.  Their respective professions  tell us only who they are. 

As expressions of their identities, moreover, their respective professions of “belief” and “disbelief” don’t tell us anything about whether they possess the “beliefs” in human-caused climate change requisite to action informed by what science knows. To figure out if a climate change “skeptic” possesses the action-enabling belief in climate change that figures, say, in using scientific knowledge to protect herself from the harm of human-caused climate change, or in voting for a member of Congress (Republican or Democrat) who will in fact expend even one ounce of political capital pursuing climate-change mitigation policies, we must observe what that skeptical individual does in those settings.  Likewise, only by seeing what a self-proclaimed climate-change believer does in those same settings can we see if he possesses the sort of action-enabling belief in human-caused climate change that using science knowledge for those purposes depends on.

References

Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Braithwaite, R.B. The nature of believing. Proceedings of the Aristotelian Society 33, 129-146 (1932).

Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).

CCP, Evidence-based Science Filmmaking Initiative, Study No. 1 (2016).

Demastes, S.S., Settlage, J. & Good, R. Students' conceptions of natural selection and its role in evolution: Cases of replication and comparison. Journal of Research in Science Teaching 32, 535-550 (1995).

DiSessa, A.A. Unlearning Aristotelian Physics: A Study of Knowledge-Based Learning. Cognitive Science 6, 37-75 (1982).

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).

Hermann, R.S. Cognitive apartheid: On the manner in which high school students understand evolution without Believing in evolution. Evo Edu Outreach 5, 619-628 (2012).

Hetherington, S.C. How to know : a practicalist conception of knowledge (J. Wiley, Chichester, West Sussex, U.K. ; Malden, MA, 2011).

Peirce, C.S. The Fixation of Belief. Popular Science Monthly 12, 1-15 (1877).
Tuesday
May 31, 2016

"According to climate scientists ..." -- WTF?! (presentation summary, slides)

Gave a talk at the annual Association for Psychological Science convention on Sat.  Was on a panel that featured great presentations by Leaf Van Boven, Rick Larrick & Ed O'Brien. Maybe I'll be able to induce them to do short guest posts on their presentations, although understandably, they might be shy about becoming instant world-wide celebrities by introducing their work to this site's 14 billion readers.

Anyway, my talk was on the perplexing, paradoxical effect of "according to climate scientists" or ACS prefix (slides here).

As 6 billion of the readers of this blog know-- the other 8 have by now forgotten b/c of all the other cool things that have been featured on the blog since the last time I mentioned this--attributing positions on the contribution of human beings to global warming, and the consequences thereof, to "climate scientists" magically dispels polarization in responses to climate science literacy questions.

Here's what happens when "test takers" (members of a large, nationally representative sample) respond to two such items that lack the magic ACS prefix:

 
Now, compare what happens with the ACS prefix:

 
Does this make sense?

Sure. Questions that solicit respondents’ understanding of what scientists believe about the causes and consequences of human-caused global warming avoid forcing individuals to choose between answers that reveal what they know about what science knows, on the one hand, and ones that express who they are as members of cultural groups, on the other.

Here's a cool ACS prefix corollary:

Notice that the "Nuclear power" question was a lot "harder" than the "Flooding" one once the ACS prefix nuked (as it were) the identity-knowledge confound.  Not surprisingly, only respondents who scored the highest on the Ordinary Science Intelligence assessment were likely to get it right.

But notice too that those same respondents--the ones highest in OSI--were also the most likely to furnish the incorrect identity-expressive responses when the ACS prefix was removed.

Of course! They are the best at supplying both identity-expressive and  science-knowledge-revealing answers.  Which one they supply depends on what they are doing: revealing what they know or being who they are. 

The ACS prefix is the switch that determines which of those things they use their reason for.

Okay, but what about this: do respondents of opposing political orientations agree on whether climate scientists agree on whether human-caused climate change is happening?

Of course not!

 
In modern liberal democratic societies, holding beliefs contrary to the best available scientific evidence is universally understood to be a sign of stupidity. The cultural cognition of scientific consensus describes the psychic pressure that members of all cultural groups experience, then, to form and persist in the belief that their group’s position on a culturally contested issue is consistent with the best available scientific evidence.

But that's what creates the "WTF moment"-- also known as a "paradox":


Um ... I dunno!

That's what I asked the participants--my fellow panelists and the audience members (there were only about 50,000 people, because we were scheduled against some other pretty cool panels)--to help me figure out!

They had lots of good conjectures.

How about you?

Friday
Apr 22, 2016

Another “Scaredy-cat risk disposition”™ scale "booster shot": Childhood vaccine risk perceptions

You saw this coming I bet.

I would have presented this info in "yesterday's" post but I'm mindful of the groundswell of anxiety over the number of anti-BS inoculations that are being packed into a single data-based booster shot, so I thought I'd space these ones out.

"Yesterday," of course, I introduced the new CCP/Annenberg Public Policy Center “Scaredy-cat risk disposition”™ measure.  I used it to help remind people that the constant din about "public conflict" over GM food risks--and in particular that GM food risks are politically polarizing-- is in fact just bull shit.  

The usual course of treatment to immunize people against such bull shit is just to show that it's bull shit.  That goes something  like this:

 

The “Scaredy-cat risk disposition”™ scale tries to stimulate people’s bull shit immune systems by a different strategy. 

Rather than showing that there isn’t a correlation between GM food risk perceptions and any cultural disposition of consequence (political orientation is just one way to get at the group-based affinities that inform people’s identities; religiosity, cultural worldviews, etc., are others—they all show the same thing w/r/t GM food risk perceptions), the “Scaredy-cat risk disposition”™ scale shows that there is a correlation between how afraid people say they are of GM foods (i.e., the 75%-plus portion of the population that has no idea what they are being asked about when someone says, “are GM foods safe to eat, in your opinion?”) and how afraid they are of all sorts of random-ass things (sorry for the technical jargon), including:

  • Mass shootings in public places

  • Armed carjacking (theft of occupied vehicle by person brandishing weapon)

  • Accidents occurring in the workplace

  • Flying on a commercial airliner

  • Elevator crashes in high-rise buildings

  • Drowning of children in swimming pools

A scale comprising these ISRPM items actually coheres!

But what a high score on it measures, in my view, is not a real-world disposition but a survey-artifact one that reflects a tendency (not a particularly strong one but one that really is there) to say “ooooo, I’m really afraid of that” in relation to anything a researcher asks about.

The “Scaredy-cat risk disposition”™ scale “explains” GM food risk perceptions the same way, then, that it explains everything,

which is to say that it doesn’t explain anything real at all.

So here’s a nice Bull Shit test.

If variation in public risk perceptions is explained just as well or better by scores on the “Scaredy-cat risk disposition”™ scale than by identity-defining outlooks & other real-world characteristics known to be meaningfully related to variance in public perceptions of risk, then we should doubt that there really is any meaningful real-world variance to explain. 

Whatever variance is being picked up by these legitimate measures is no more meaningful than the variance picked up by a random-ass noise detector. 

Necessarily, then, whatever shred of variance they pick up, even if "statistically significant" (something that is in fact of no inferential consequence!), cannot bear the weight of the sweeping claims about who is responsible—“dogmatic right wing authoritarians,” “spoiled limousine liberals,” “whole foodies,” “the right,” “people who are easily disgusted” (stay tuned. . .), “space aliens posing as humans,” etc.—that commentators trot out to explain a conflict that exists only in “commentary” and not “real world” space.

Well, guess what? The “Scaredy-cat risk disposition”™ scale “explains” childhood vaccine risk perceptions as well as or better than the various dispositions people say “explain” "public conflict" over that risk too.

Indeed, it "explains" vaccine-risk perceptions as well (which is to say very modestly) as it explains global warming risk perceptions and GM food risk perceptions--and any other goddam thing you throw at it.
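The "explains as well as" comparison boils down to comparing variance explained across predictors. Here's a toy sketch of that check; the data are simulated and the effect sizes invented purely for illustration:

```python
import numpy as np

def r_squared(x, y):
    """R^2 from an OLS fit of y on x (plus an intercept)."""
    X = np.column_stack([np.ones(len(y)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(3)
n = 1000
scaredy = rng.normal(size=n)    # "afraid of everything" response style
politics = rng.normal(size=n)   # left-right outlook, unrelated by construction
# Simulated risk ratings driven (weakly) by the response style, not politics.
gm_risk = 0.25 * scaredy + rng.normal(size=n)

print(round(r_squared(scaredy, gm_risk), 3))   # modest but nonzero
print(round(r_squared(politics, gm_risk), 3))  # essentially zero
```

When the noise-detector scale beats the identity measures on this comparison, the sensible inference is that there's no meaningful real-world variance for the identity measures to explain.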

See how this bull-shit immunity booster shot works?

The next time some know-it-all says, "The rising tide of anti-vax sentiment is being driven by ... [fill in bull shit blank]," you say, "well actually, the people responsible for this epidemic of mass hysteria are the ones who are worried about falling down elevator shafts, being the victim of a carjacking [how 1980s!], getting flattened by the detached horizontal stabilizer of a crashing commercial airliner, being mowed down in a mass shooting, getting their tie caught in the office shredder, etc--you know those guys!  Data prove it!"

It's both true & absurd.  Because the claim that there is meaningful public division over vaccine risks is truly absurd: people who are concerned about vaccines are outliers in every single meaningful cultural group in the U.S.

Remember, we have had 90%-plus vaccination rates on all childhood immunizations for well over a decade.

Publication of the stupid Wakefield article had a measurable impact on vaccine behavior in the UK and maybe elsewhere (hard to say, b/c vaccination rates on the European continent have not been as high historically anyway), but not in the US!  That’s great news!

In addition, valid opinion studies find that the vast majority of Americans of all cultural outlooks (religious, political, cultural, professional-sports team allegiance, you name it) think childhood vaccines are the greatest invention since . . . sliced GM bread!  (Actually, wheat farmers, as I understand it, don’t use GMOs b/c if they did they couldn’t export grain to Europe, where there is genuine public conflict over GM foods.)

Yes, we do have pockets of vaccine-hesitancy and yes they are a public health problem.

But general-population surveys and experiments are useless for that—and indeed a waste of money and attention.  They aren't examining the right people (parents of kids in the age range for universal vaccination).  And they aren't using measures that genuinely predict the behavior of interest.

We should be developing (and supporting researchers doing the developing of) behaviorally validated methods for screening potentially vaccine-hesitant parents and coming up with risk-counseling profiles specifically fitted to them.

And for sure we should be denouncing bull shit claims—ones typically tinged with group recrimination—about who is causing the “public health crisis” associated with “falling vaccine rates” & the imminent “collapse of herd immunity,” conditions that simply don’t exist. 

Those claims are harmful because they inject "pollution" into the science communication environment including  confusion about what other “ordinary people like me” think, and also potential associations between positions that genuinely divide people—like belief in evolution and positions on climate change—and views on vaccines. If those take hold, then yes, we really will have a fucking crisis on our hands.

If you are emitting this sort of pollution, please just stop already!

And the rest of you, line up for a “Scaredy-cat risk disposition”™ scale booster shot against this bull shit. 

It won’t hurt, I promise!  And it will not only protect you from being misinformed but will benefit all the rest of us too by helping to make our political discourse less hospitable to thoughtless, reckless claims that can in fact disrupt the normal processes by which free, reasoning citizens of diverse cultural outlooks converge on the best available evidence.

On the way out, you can pick up one of these fashionable “I’ve been immunized by the ‘Scaredy-cat risk disposition’™ scale against evidence-free bullshit risk perception just-so stories” buttons and wear it with pride!


Saturday
Apr 2, 2016

Weekend update: Priceless


 

Saturday
Mar 26, 2016

Weekend update: modeling the impact of the "according to climate scientists prefix" on identity-expressive vs. science-knowledge revealing responses to climate science literacy items

I did some analyses to help address issues that arose in an interesting discussion with @dypoon about how to interpret the locally weighted regression outputs featured in "yesterday's" post. 

Basically, the question is what to make of the respondents at the very highest levels of Ordinary Science Intelligence.

When the prefix "according to climate scientists" is appended to the items, those individuals are the most likely to get the "correct" response, regardless of their political outlooks. That's clear enough.

It's also bright & clear that when the prefix is removed, subjects at all levels of OSI are more disposed to select the identity-expressive answer, whether right or wrong. 

What's more, those highest in OSI seem even more disposed to select the identity-expressive "wrong" answer than those of modest ability.  Insofar as they are the ones most capable of getting the right answer when the prefix is appended, they necessarily evince the strongest tendency to substitute the incorrect identity-expressive response for the correct, science-knowledge-evincing one when the prefix is removed.

But are those who are at the very tippy top of the OSI hierarchy resisting the impulse (or the consciously perceived opportunity) to respond in an identity-protective manner--by selecting the incorrect but ideologically congenial answer--when the prefix is removed?  Is that what the little upward curls mean at the far right end of the dashed line for right-leaning subjects in "flooding" and for left-leaning ones in "nuclear"?

@Dypoon seems to think so; I don't.  He/she sensed signal; I caught the distinct scent of noise.

Well, one way to try to sort this out is by modeling the data.

The locally weighted regression just tells us the mean probabilities of "correct" answers at tiny little increments of OSI. A logistic regression model can show us how the precision of the estimated means--the information we need to try to ferret out signal from noise--is affected by the number of observations, which necessarily gets smaller as one approaches the upper end of the Ordinary Science Intelligence scale.
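The underlying point--that estimated means at the sparse top end of the scale are much noisier--can be seen even without fitting the full model. Here's a sketch with hypothetical cell counts (not the actual study data), using a plain normal-approximation confidence interval for a proportion:

```python
import numpy as np

def prop_ci(k, n, z=1.96):
    """Normal-approximation 95% CI for a proportion of k correct out of n."""
    p = k / n
    se = np.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Hypothetical cell counts: many respondents near the middle of the OSI
# scale, only a handful at the tippy top.
lo_mid, hi_mid = prop_ci(60, 200)  # mid-OSI: 30% "correct" out of 200
lo_top, hi_top = prop_ci(6, 15)    # top-OSI: 40% "correct" out of 15

print(round(hi_mid - lo_mid, 2))
print(round(hi_top - lo_top, 2))
```

The interval at the top of the scale comes out roughly four times wider, so a small upward curl there is entirely consistent with noise.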

Here are a couple of ways to graphically display the models (nuclear & flooding). 

This one plots the predicted probability of correctly answering the items with and without the prefix for subjects with the specified political orientations as their OSI scores increase: 

 

This one illustrates, again in relation to OSI, how much more likely someone is to select the incorrect, identity-expressive response for the no-prefix version than he or she is to select the incorrect response for the prefix version:

The graphic shows us just how much the confounding of identity and knowledge in a survey item can distort measurement of how likely an individual is to know climate-science propositions that run contrary to his or her ideological predisposition on global warming.

I think the results are ... interesting.

What do you think?

To avoid discussion forking (the second leading cause of microcephaly in the Netherlands Antilles), I'm closing off comments here.  Say your piece in the thread for "yesterday's" post.

Monday
Dec 28, 2015

Replicate "Climate-Science Communication Measurement Problem"? No sweat (despite hottest yr on record), thanks to Pew Research Center!

One of the great things about Pew Research Center is that it posts all (or nearly all!) the data from its public opinion studies.  That makes it possible for curious & reflective people to do their own analyses and augment the insight contained in Pew's own research reports. 

I've been playing around with the "public" portion of the "public vs. scientists" study, which was issued last January (Pew 2015). Actually Pew hasn't released the "scientist" (or more accurately, AAAS membership) portion of the data. I hope they do!

But one thing I thought it would be interesting to do for now would be to see if I could replicate the essential finding from "The Climate Science Communication Measurement Problem" (2015).

In that paper, I presented data suggesting, first, that neither "belief" in evolution nor "belief" in human-caused climate change were measures of general science literacy.  Rather both were better understood as measures of forms of "cultural identity" indicated, respectively, by items relating to religiosity and items relating to left-right political outlooks.

Second, and more importantly, I presented data suggesting that there is no relationship between "belief" in human-caused climate change and climate science comprehension in particular. On the contrary, the higher individuals scored on a valid climate science comprehension measure (one specifically designed to avoid the identity-knowledge confound that afflicts most "climate science literacy" measures), the more polarized the respondents were on "belief" in AGW--which, again, is best understood as simply an indicator of "who one is," culturally speaking.

Well, it turns out one can see the same patterns, very clearly, in the Pew data.

Patterned on the NSF Indicators "basic facts" science literacy test (indeed, "lasers" is an NSF item), the Pew battery consists of six items:

As I've explained before, I'm not a huge fan of the "basic facts" approach to measuring public science comprehension. In my view, items like these aren't well-suited for measuring what a public science comprehension assessment ought to be measuring: a basic capacity to recognize and give proper effect to valid scientific evidence relevant to the things that ordinary people do in their ordinary lives as consumers, workforce members, and citizens.

One would expect a person with that capacity to have become familiar with certain basic scientific insights (earth goes round sun, etc.) certainly.  But certifying that she has stocked her "basic fact" inventory with any particular set of such propositions doesn't give us much reason to believe that she possesses the reasoning proficiencies & dispositions needed to augment her store of knowledge and to appropriately use what she learns in her everyday life.

For that, I believe, a public science comprehension battery needs at least a modest complement of scientific-thinking measures, ones that attest to a respondent's ability to tell the difference between valid and invalid forms of evidence and to draw sound inferences from the former.  The "Ordinary Science Intelligence" battery, used in the Measurement Problem paper, includes "cognitive reflection" and "numeracy" modules for this purpose.

Indeed, Pew has presented a research report on a fuller science comprehension battery that might be better in this regard, but it hasn't released the underlying data for that one.

[Figure: psychometric properties of the Pew science literacy battery.] But anyway, the new items that Pew included in its battery are more current & subtle than the familiar Indicator items, & the six Pew items form a reasonably reliable (α = 0.67), one-dimensional scale--suggesting they are indeed measuring some sort of science-related aptitude.
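For readers who want to check a reliability figure like that against their own download of the Pew data, here is a minimal sketch of Cronbach's alpha in Python. The data below are simulated purely for illustration (six correlated binary "items" driven by one latent ability); they are not Pew's, and a real replication would substitute the actual response matrix.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# simulated data: one latent "science literacy" trait drives 6 binary items
rng = np.random.default_rng(0)
ability = rng.normal(size=200)
scores = (ability[:, None] + rng.normal(size=(200, 6)) > 0).astype(float)

alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # moderately high, since the items share one trait
```

With real dichotomous items one would more properly report KR-20, but for 0/1 data that formula reduces to exactly the computation above.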

But the fun stuff starts when one examines how the resulting Pew science literacy scale relates to items on evolution, climate change, political outlooks, and religiosity.

For evolution, Pew used its two-part question, which first asks whether the respondent believes (1) "Humans and other living things have evolved over time" or (2) "Humans and other living things have existed in their present form since the beginning of time."

Subjects who pick (1) then are asked whether (3) "Humans and other living things have evolved due to natural processes such as natural selection" or (4) "A supreme being guided the evolution of living things for the purpose of creating humans and other life in the form it exists today."

Basically, subjects who select (2) are "young earth creationists." Subjects who select (4) are generally regarded as believing in "theistic evolution."  Intelligent design isn't the only variant of "theistic evolution," but it is certainly one account that fits this description.

Subjects who select (3)--"humans and other living things have evolved due to natural processes such as natural selection"--are the only ones furnishing the response that reflects science's account of the natural history of humans.

So I created a variable, "evolution_c," that reflects this answer, which was in fact selected by only 35% of the subjects in Pew's U.S. general public sample.

On climate change, Pew assessed (using two items that tested for item order/structure effects that turned out not to matter) whether subjects believed (1) "the earth is getting warmer mostly because of natural patterns in the earth’s environment," (2) "the earth is getting warmer mostly because of human activity such as burning fossil fuels," or (3) "there is no solid evidence that the earth is getting warmer."

About 50% of the respondents selected (2).  I created a variable, gw_c, to reflect whether respondents selected that response or one of the other two.

For political orientations, I combined subjects' responses to a 5-point liberal-conservative ideology item and their responses to a 5-point partisan self-identification item (1 "Democrat"; 2 "Independent leans Democrat"; 3 "Independent"; 4 "Independent leans Republican"; and 5 "Republican").  The composite scale had modest reliability (α = 0.61).

For religiosity, I combined two items.  One was a standard Pew item on church attendance. The other was a dummy variable, "nonrelig," scored "1" for subjects who said they were either "atheists," "agnostics" or "nothing in particular" in response to a religious-denomination item (α = 0.66).

But the very first thing I did was toss all of these items--the 6 "science literacy" ones, belief in evolution (evolution_c), belief in human-caused climate change (gw_c), ideology, partisan self-identification, church attendance, and nonreligiosity--into a factor analysis (one based on a polychoric covariance matrix, which is appropriate for mixed dichotomous and multi-response Likert items).
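The logic of that analysis can be sketched without the polychoric machinery. The simplified stand-in below uses ordinary principal-factor extraction on a Pearson correlation matrix (a real replication of mixed dichotomous/Likert items would use a polychoric matrix, e.g. via R's psych package or Stata's polychoric). The data are simulated to illustrate the pattern described next: a "belief in AGW" item lands on the political factor and a "belief in evolution" item on the religiosity factor, not on the science-literacy factor.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
sci, pol, rel = rng.normal(size=(3, n))  # three independent latent traits
noise = lambda: rng.normal(size=n)

data = np.column_stack([
    sci + noise(), sci + noise(),   # 0,1: science-literacy items
    pol + noise(), pol + noise(),   # 2,3: ideology, party self-ID
    rel + noise(), rel + noise(),   # 4,5: church attendance, nonreligiosity
    pol + noise(),                  # 6: "belief in AGW" (loads politically)
    rel + noise(),                  # 7: "belief in evolution" (religiosity)
])

R = np.corrcoef(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)          # ascending eigenvalue order
top3 = np.argsort(eigvals)[::-1][:3]          # keep the 3 largest factors
loadings = eigvecs[:, top3] * np.sqrt(eigvals[top3])

dominant = np.argmax(np.abs(loadings), axis=1)  # each item's main factor
print(dominant)  # items 2, 3, 6 share a factor; so do 4, 5, 7
```

The point of the sketch is only the assignment pattern in `dominant`: the two "belief" items cluster with identity items, not with the literacy items.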


Not surprisingly, the covariance structure was best accounted for by three latent factors: one for science literacy, one for political orientations, and one for religiosity.

But the most important result was that neither belief in evolution nor belief in human-caused climate change loaded on the "science literacy" factor.  Instead they loaded on the religiosity and right-left political orientation factors, respectively.

This analysis, which replicated results from a paper dedicated solely to examining the properties of the Ordinary Science Intelligence test, supports the inference that belief in evolution and belief in climate change are not indicators of "science comprehension" but rather indicators of cultural identity, as manifested respectively by religiosity and political outlooks.

To test this inference further, I used "differential item function" or "DIF" analysis (Osterlind & Everson, 2009).

Based on item response theory, DIF examines whether a test item is "culturally biased"--not in an animus sense but a measurement one: the question is whether the responses to the item measure the "same" latent proficiency (here, science literacy) in diverse groups.  If it doesn't-- if there is a difference in the probability that members of the two groups who have equivalent science literacy scores will answer it "correctly"--then administering that question to members of both will result in a biased measurement of their respective levels of that proficiency.
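One common way to operationalize that test is a logistic regression of the item response on proficiency, group membership, and their interaction: a nonzero interaction means the item "works differently" across groups at equal proficiency. The sketch below is self-contained (a plain Newton-Raphson logistic fit on simulated data); it is not Pew's or the paper's actual model, just an illustration of the method.

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Newton-Raphson logistic regression; returns [intercept, *slopes]."""
    X = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None]) + 1e-6 * np.eye(X.shape[1])
        b += np.linalg.solve(H, X.T @ (y - p))
    return b

rng = np.random.default_rng(2)
n = 2000
theta = rng.normal(size=n)            # latent science-literacy score
group = rng.integers(0, 2, size=n)    # two cultural groups

# unbiased item: correct response depends only on proficiency
y_fair = (rng.random(n) < 1 / (1 + np.exp(-(0.2 + theta)))).astype(float)
# DIF item: proficiency *polarizes* -- the slope flips sign by group
slope = np.where(group == 1, -1.0, 1.0)
y_dif = (rng.random(n) < 1 / (1 + np.exp(-slope * theta))).astype(float)

X = np.column_stack([theta, group, theta * group])  # interaction design
b_fair = fit_logit(X, y_fair)
b_dif = fit_logit(X, y_dif)
print("interaction, fair item:", round(b_fair[3], 2))  # near zero
print("interaction, DIF item:", round(b_dif[3], 2))    # strongly negative
```

In practice one would use a packaged routine (e.g. statsmodels' Logit) and a likelihood-ratio test on the interaction term, but the diagnostic quantity is the same coefficient shown here.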

In Measurement Problem, I used DIF analysis to show that belief in evolution is "biased" against individuals who are high in religiosity.

Using the Pew data (regression models here), one can see the same bias:

Relatively nonreligious subjects, but not highly religious ones, are likely to indicate acceptance of science's account of the natural history of humans as their science literacy scores increase. This isn't so for the other items in the Pew science literacy battery (which here is scored using an item response theory model; the mean is 0, and units are standard deviations).

The obvious conclusion is that the evolution item isn't measuring the same thing in subjects who are relatively religious and nonreligious as are the other items in the Pew science literacy battery. 

In Measurement Problem, I also used DIF to show that belief in climate change is a biased (and hence invalid) measure of climate science literacy.  That analysis, though, assessed responses to a "belief in climate change" item (one identical to Pew's) in relation to scores on a general climate-science literacy assessment, the "Ordinary Climate Science Intelligence" (OCSI) assessment.  Pew's scientist-AAAS study didn't have a climate-science literacy battery.

Its general science literacy battery, however, did have one climate-science item, a question of theirs that in fact I had included in OCSI: "What gas do most scientists believe causes temperatures in the atmosphere to rise? Is it Carbon dioxide, Hydrogen, Helium, or Radon?" (CO2).

Below are the DIF item profiles for CO2 and gw_c (regression models here). Regardless of their political outlooks, subjects become more likely to answer CO2 correctly as their science literacy score increases--that makes perfect sense!

But as their science literacy score increases, individuals of diverse political outlooks don't converge on "belief in human-caused climate change"; they become more polarized.  That question is measuring who the subjects are, not what they know about climate science.

So there you go!

I probably will tinker a bit more with these data and will tell you if I find anything else of note.

But in the meantime, I recommend you do the same! The data are out there & free, thanks to Pew.  So reciprocate Pew's contribution to knowledge by analyzing them & reporting what you find out!

References

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication. Cultural Cognition Project Working Paper No. 112 (2014).

Osterlind, S. J., & Everson, H. T. (2009). Differential item functioning. Thousand Oaks, CA: Sage.

Pew Research Center (2015). Public and Scientists' Views on Science and Society.

Thursday
May072015

We are *all* Pakistani Drs/Kentucky Farmers, Part 2: Kant's perspective(s)

This is an excerpt from another bit of correspondence with a group of very talented and reflective scholars who are at the beginning of an important research program to explain "disbelief in" human evolution. In addition, because "we may [must] regard the present state of the universe as the effect of its past and the cause of its future," this post is also a companion to yesterday's, which responded to Adam Laats' request for less exotic (or less exotic-seeming) examples of people using cognitive dualism than those furnished by the Pakistani Dr & the Kentucky Farmer. No doubt it will be the progenitor of "tomorrow's" post too; but you know that will say more about me than it does about the "Big Bang...."

I agree of course that figuring out what people "know" about the rudiments of evolutionary science has to be part of any informative research program here.  But I understand your project to be how to "explain nonacceptance" of or "disbelief in" what is known.

So fine, go ahead and develop valid measures for assessing evolutionary science knowledge. But don't embark on the actual project until you have answered the question the unreflective disregard of which is exactly what has rendered previous "nonacceptance" research programs so utterly unsatisfactory: what is it, exactly, that is being explained?

Isn't the Pakistani Dr's (or the Kentucky Farmer's or Krista's) "cognitive dualism" just a special instance of the perspectival dualism that Kant understands to be integral to human reason?

In the Groundwork for the Metaphysics of Morals and in both the 1st and 2d Critiques, Kant distinguishes two "self" perspectives: the phenomenal one, in which we regard ourselves and all other human beings, along with everything else in the universe, to be subject to immutable and deterministic laws of nature; and the "noumenal" one, in which we regard ourselves (and all other human beings) as possessing an autonomous will that prescribes laws for itself independently of nature so conceived.

No dummy, Kant obviously can see the "contradictory" stances on human autonomy embodied in the perspectives of our "phenomenal" and "noumenal" (not to be confused w/ the admittedly closely related "Neumenal") selves.

But he is not troubled by it.

The respective “beliefs” about human autonomy associated with the phenomenal and noumenal perspectives are, for him, built-in components of mental routines that enable the 2 things reasoning beings use their reason for: to acquire knowledge of how the world works; and to live a meaningful life within it.

Because there’s no contradiction between these reason-informed activities, there’s no practical—no experienced, no real -- contradiction between the sets of action-enabling mental states associated with  them.

Obviously, Kant's dualism has a very big point of contact with debates about "free will" & "determinism," and the coherence of "compatibilist" solutions, and whatnot.  

But as I read Kant, his dualism implies these debates are ill-formed. The participants in them are engaging the question whether human beings are subject to deterministic natural laws in a manner that abstracts from what the answer allows reasoning people to do.

That feature of the "determinism-free will" debate renders it "metaphysical" -- not in the sense Kant had in mind but in the sense that logical positivist philosophers did when they tried to clear from the field of science the entangling conceptualist underbrush that served no purpose except to trip people up as they tried to advance knowledge by ordered and systematic thinking.

I strongly suspect that those who have dedicated their scholarly energy to "solving" the "problem" of "why the presentation of evolution in class frequently does not achieve acceptance of the evolutionary theory" among students who display comprehension of it are mired in exactly that sort of thicket.

Both the Pakistani Dr and Krista "reject" human evolution in converging with other free, reasoning persons on a particular shared account of what makes life meaningful.  Then both turn around and use evolutionary science (including its applicability to human beings, because it simply "doesn't work," they both agree, to exempt human speciation from evolutionary dynamics—just as it doesn't work to exempt human beings from natural necessity generally if one is doing science) when they use their reason to practice science-trained professions, the practice of which is enabled by evolutionary science.

In behaving in this way, they are doing nothing different from what any scientist or any other human being does in adopting Kant's "phenomenal" perspective to know what science knows about the operation of objects in the world while adopting Kant's "noumenal" one to live meaningful lives as persons who make judgments of value.

Only a very remarkable, and disturbing, form of selective perception can explain why so many people find the cognitive dualism of the Pakistani Dr or Krista so peculiar and even offensive.  Their reaction suggests a widespread deficit in the form of civic education needed to equip people to  honor their duty as citizens of a liberal democracy (or as subjects in Kant's "Kingdom of Ends") to respect the choices that other free and reasoning individuals make about how to live.

Is it really surprising, then, that those who have committed themselves to "solving" the chimera of Krista's "nonacceptance problem" can't see the very real problem with a conception of science education that tries to change who people are rather than enlarge what they know?

 

Saturday
Mar072015

Submerged ... 

But will surface in near future -- w/ results of new study ....

Prize for anyone who correctly predicts what it is about; 2 prizes if you predict the result.

Wednesday
Feb252015

Doh!

Had "comments disabled" for yesterday's post on  Session 7 of virtual "Science of Science Communication 2.0"

Was wondering why there weren't the usual 7,000+ "student" comments!

The sole purpose of this post is to announce that the comments feature for that one is now enabled.  Because that's the logical place for discussion, I'm disabling comments here.

Tuesday
Oct072014

What I believe about teaching "belief in" evolution & climate change

I was corresponding with a friend, someone who has done really great science education research, about the related challenges of teaching evolution & climate science to high school students.

Defending what I've called the "disentanglement principle"-- the obligation of those who are responsible for promoting comprehension of science to create an environment in which free, reasoning people don’t have to choose between knowing what’s known and being who they are-- I stated that I viewed "the whole concept of 'believing' [as] so absurd . . . ."  

He smartly challenged me on this:

I must admit, however, that I do not find the concept of believing to be absurd. I for example, believe that I have been married to the same women since I was XX years old. I also believe that I have XX children. I also believe that the best theory to explain modern day species diversity is Darwin's evolution theory. I do not believe the alternative theory called creationism. Lastly, I believe that the Earth is warming due largely to human caused CO2 emissions. These beliefs are the product of my experience and a careful consideration of the alternatives, their predictions, and a comparison of those prediction and the evidence. This is not a matter of who I am ( for example it matters not whether I am a man or a women, straight or gay, black or white) as much as it is a matter of my understanding of how one comes to a belief in a rational way, and my willingness to not make up my mind, not to form a belief, until all steps of that rational way have been completed to the extent that no reasonable doubt remains regarding the validity of the alternative explanations that have been advanced. 

His response made me realize that I've been doing a poor job in recent attempts to explain why it seems to me that "belief in" evolution & global warming is the wrong focus for imparting and assessing knowledge of those subjects.

I don't think the following reply completely fixes the problem, but here is what I wrote back:

I believe you are right! 

In fact, I generally believe it is very confused and confusing for people to say "X is not a matter of belief; it's a fact ....," something that for some reason seems to strike people as an important point to make in debates about politically controversial matters of science. 

Scientists "believe" things based on evidence, as you say, and presumably view "facts" as merely propositions that happen to be worthy of belief at the moment based on the best available evidence. 

I expressed myself imprecisely, although it might be the case that even when I clarify you'll disagree.  That would be interesting to me & certainly something I'd want to hear and reflect on. 

What I meant to refer to  as "absurd" was the position that treats as an object of science education students' affirmation of "belief in" a fact that has been transformed by cultural status competition into nothing more than an emblem of affiliation. 

That's so in the case of affirmation of "belief in" evolution. To my surprise, actually, I am close to concluding that exactly the same is true at this point of affirmation of "belief in" global warming. 

Those who say they "believe in" climate change are not more likely to know anything about it or about science generally than those who say they don't "believe"-- same as in the case of evolution.  

Saying one "disbelieves" those things, in contrast, is an indicator (not a perfect one, of course) of having a certain cultural identity or style-- one that turns out to be unconnected to a person's capacity to learn anything.  

So those who say that one can gauge anything about the quality of science instruction in the US from the %'s of people who say that they "believe in" evolution or climate change are, in my view, seriously mistaken. 

Or so I believe--very strongly--based on my current assessment of the best evidence, which includes [a set of extremely important studies] of the effective teaching of evolution to kids who "don't believe" it.  I'd be hard pressed to identify a book or an article, much less a paragraph, that conveyed as much to me about the communication of scientific knowledge as this one:

[E]very teacher who has addressed the issue of special creation and evolution in the classroom already knows that highly religious students are not likely to change their belief in special creation as a consequence of relative brief lessons on evolution. Our suggestion is that it is best not to try to [change students’ beliefs], not directly at least. Rather, our experience and results suggest to us that a more prudent plan would be to utilize instruction time, much as we did, to explore the alternatives, their predicted consequences, and the evidence in a hypothetico-deductive way in an effort to provoke argumentation and the use of reflective thought. Thus, the primary aims of the lesson should not be to convince students of one belief or another, but, instead, to help students (a) gain a better understanding of how scientists compare alternative hypotheses, their predicated consequences, and the evidence to arrive at belief and (b) acquire skill in the use of this important reasoning pattern—a pattern that appears to be necessary for independent learning and critical thought.

 Maybe you now have a better sense of what I meant to call "absurd," but now it occurs to me too that "absurd" really doesn't capture the sentiment I meant to express.

It makes me sad to think that some curious student might not get the benefit of knowing what is known to science about the natural history of our (and other) species because his or her teacher made the understandable mistake of tying that benefit to a gesture the only meaning of which for that student in that setting would be a renunciation of his or her identity. 

It makes me angry to think that some curious person might be denied the benefit of knowing what's known by science precisely because an "educator" or "science communicator" who does recognize that affirmation of "belief in" evolution signifies identity & not knowledge nevertheless feels that he or she is entitled to extract this gesture of self-denigration as an appropriate fee for assisting someone else to learn.

Such a stance is itself a form of sectarianism that is both illiberal and inimical to dissemination of scientific knowledge. 

I have seen that there are teachers who know  the importance of disentangling the opportunity to learn from the necessity to choose sides in a mean cultural status struggle, but who don't know how to do that yet for climate science education.  They want to figure out how to do it; and they of course know that the way to figure it out is to resort to the very forms of disciplined observation, measurement, and inference that are the signatures of science.

I know they will succeed.  And I hope other science communication professionals will pay attention and learn something from them.

Friday
Jun202014

Response: An “externally-valid” approach to consensus messaging

John Cook, science communication scholar and co-author of Quantifying the consensus on anthropogenic global warming in the scientific literature, Environmental Research Letters 8, 024024 (2013), has supplied this thoughtful response to the first of my posts on "messaging consensus." --dmk38

Over the last decade, public opinion about human-caused global warming has shown little change. Why? Dan Kahan suggests cultural cognition is the answer: 

When people are shown evidence relating to what scientists believe about a culturally disputed policy-relevant fact ... they selectively credit or dismiss that evidence depending on whether it is consistent with or inconsistent with their cultural group’s position. 

It’s certainly the case that cultural values influence attitudes towards climate. In fact, not only do cultural values play a large part in our existing beliefs, they also influence how we process new evidence about climate change. But this view is based on lab experiments. Does Kahan’s view that cultural cognition is the whole story work out in the real world? Is that view “externally valid”?

The evidence says no. A 2012 Pew survey of the general public found that even among liberals, perception of the scientific consensus on human-caused global warming is low. When Democrats were asked "Do scientists agree earth is getting warmer because of human activity?", only 58% said yes. There's a significant "consensus gap" even for those whose cultural values predispose them towards accepting the scientific consensus. A "liberal consensus gap".

My own data, measuring climate perceptions amongst US representative samples, confirms the liberal consensus gap. The figure below shows what people said in 2013 when asked how many climate scientists agree that humans are causing global warming. The x-axis is a measure of political ideology (specifically, support for free markets). For people on the political right (e.g., more politically conservative), perception of scientific consensus decreases, just as cultural cognition predicts. However, the most relevant feature for this discussion is the perceived consensus on the left.

At the left of the political spectrum, perceived consensus is below 70%. Even those at the far left are not close to correctly perceiving the 97% consensus. Obviously cultural cognition cannot explain the liberal consensus gap. So what can? There are two prime suspects. Information deficit and/or misinformation surplus. 

Kahan suggests that misinformation casting doubt on the consensus is ineffective on liberals. I tend to agree. Data I’ve collected in randomized experiments supports this view. If this is the case, then it would seem information deficit is the driving force behind the liberal consensus gap. It further follows that providing information about the consensus is necessary to close this gap. 

So cultural values and information deficit both contribute to the consensus gap. Kahan himself suggests that science communicators should consider two channels: information content and cultural meaning. Arguing that one must choose between the information deficit model or cultural cognition is a false dichotomy. Both are factors. Ignoring one or the other neglects the full picture. 

But how can there be an information deficit about the consensus? We’ve been communicating the consensus message for years! Experimental research by Stephan Lewandowsky, a recent study by George Mason University and my own research have found that presenting consensus information has a strong effect on perceived consensus. If you bring a participant into the lab, show them the 97% consensus then have them fill out a survey asking what the scientific consensus is, then lo and behold, perception of consensus shoots up dramatically. 

How does this “internally valid” lab research gel with the real-world observation that perceived consensus hasn’t shifted much over the last decade? A clue to the answer lies with a seasoned communicator whose focus is solely on “externally valid” approaches to messaging. To put past efforts at consensus messaging into perspective, reflect on these words of wisdom from Republican strategist and messaging expert Frank Luntz on how to successfully communicate a message: 

“You say it again, and you say it again, and you say it again, and you say it again, and you say it again, and then again and again and again and again, and about the time that you're absolutely sick of saying it is about the time that your target audience has heard it for the first time. And it is so hard, but you've just got to keep repeating, because we hear so many different things -- the noises from outside, the sounds, all the things that are coming into our head, the 200 cable channels and the satellite versus cable, and what we hear from our friends.” 

When it comes to disciplined, persistent messaging, scientists aren’t in the same league as strategists like Frank Luntz. And when it comes to consensus, this is a problem. Frank Luntz is also the guy who said: 

“Voters believe that there is no consensus about global warming in the scientific community.  Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly.  Therefore, you need to continue to make the lack of scientific certainty a primary issue in the debate, and defer to scientists and other experts in the field.” 

Luntz advocated casting doubt on the consensus for one simple reason. When people understand that scientists agree that humans are causing global warming, then they’re more likely to support policies to mitigate climate change. Confuse people about consensus, and you delay climate action. 

This finding has subsequently been confirmed by studies in 2011 and 2013. But a decade before social scientists figured it out, Luntz was already putting into place strategies to drum home the “no consensus” myth, with the purpose of reducing public support for climate action. 

Reflecting on the disinformation campaign and the social science research into consensus messaging, Ed Maibach at George Mason University incorporates both the “internally valid” social science research and the “externally valid” approach of Frank Luntz:

We urge scientific organizations to patiently, yet assertively inform the public that, based on the evidence, more than 97% of climate experts are convinced that human-caused climate change is happening. Some scientific organizations may argue that they have already done this through official statements. We applaud them for their efforts to date, yet survey data clearly demonstrate that the message has not yet reached or engaged most Americans. Occasional statements and press releases about the reality of human-caused climate change are unfortunately not enough to cut through the fog—it will take a concerted, ongoing effort to inform Americans about the scientific consensus regarding the realities of climate change.

How do we achieve this? Maibach suggests climate scientists should team up with social scientists and communication professionals. What should scientists be telling the public? Maibach advises:

In media interviews, public presentations, and even neighborhood and family gatherings, climate scientists should remember that many people do not currently understand that there is an overwhelming scientific consensus about human-caused climate change. Tell them, and give them the numbers.

The book Made To Stick looks at "sticky" messages that have caught the public's attention. It runs through many real-world case studies (i.e., externally valid examples) to demonstrate that sticky ideas are simple, concrete, unexpected and tell a story. For a general public who think there is a 50:50 debate among climate scientists, learning that 97% of climate scientists agree that humans are causing global warming ticks many of the sticky boxes.

 

Friday
Mar282014

I ♥ NCAR/UCAR--because they *genuinely* ♥ communicating science (plus lecture slides & video)

Spent a great couple of days at NCAR/UCAR last week, culminating in a lecture on "Communicating Climate Science in a Polluted Science Communication Environment."

Slides here. Also, an amusing video of the talk here—one that consists almost entirely of a forlorn-looking lectern.

There are 10^6 great things about NCAR/UCAR, of course.

But the one that really grabbed my attention on this visit is how much the scientists there are committed to the intrinsic value of communicating science.

They want people —decisionmakers, citizens, curious people, kids (dogs & cats, even; they are definitely a bit crazy!)—to know what they know, to see what they see, because they recognize the unique thrill that comes from contemplating what human beings, employing science’s signature methods of observation and inference, have been able to discern about the hidden workings of nature.

Yes, making use of what science knows is useful—indeed, essential—for individual & collective well-being.

That’s a very good reason, too, to want to communicate science under circumstances in which one has good justification (i.e., a theory consistent with plausible behavioral mechanisms and supported by evidence) to believe that not knowing what’s known is causing people to make bad decisions.

But if you think that “knowing what’s known” is how people manage to align their decisionmaking with the best available evidence in all the domains in which their well-being depends on that; that their “not knowing” is thus the explanation for persistent states of public conflict over the best evidence on matters like climate change or nuclear power or the HPV vaccine; and that communicating what’s known to science is thus the most effective way to dispel such disputes, then you actually have a very very weak grasp of the science of science communication.

And if you think, too, that what I just wrote implies there is “no point” in enabling people to know, then you have just revealed that you are merely posing—to others, & likely even to yourself!—when you claim to care about science communication and science education.

I spent hours exchanging ideas with NCAR scientists--including ideas about how to use empirical evidence to perfect climate-science communication--and not even for one second did I feel I was talking to someone like that.

 

 

 

Monday
May282012

"How confident should we be ..."

A thoughtful journalist asks in relation to our Nature Climate Change study:

It would be really helpful to get your reflection on the research.   In particular, I'm interested in the polarising effect you were able to identify. From the figure (Fig.2) this appears to be quite subtle, albeit in the opposite direction to that which was predicted by the SCT thesis.   It would be great if you could identify to what extent/how confident we can be to say that increasing numeracy and literacy polarises risk perception about climate change, and what can explain this polarisation.

This was such a thoughtful way of putting the question, I felt impelled -- only in part by OCD; one shouldn't ask a good question if one wants an imprecise, casual response -- to give a reasonably precise & detailed answer:

1.  All  study results are provisional. That's in the nature of science. Valid studies give you more evidence than you otherwise would have had to believe something. They never "settle" the issue; one continues to revise one's assessment of what to treat as true and how likely it is not to be as more valid studies, more valid evidence, accumulates. Forever & ever (Popper 1962).

So it is never sensible (it is a misunderstanding of the nature of empirical proof) to say, "this study proves this" or "this study doesn't necessarily prove that" etc. Instead it is very sensible to ask, as you have, "how confident should we be" in a particular conclusion given the evidence presented in a particular study.

2. As you know, our study investigated two hypotheses: the science comprehension thesis (SCT), which attributes public conflict over climate change to deficits in science comprehension; and the cultural cognition thesis (CCT), which asserts that conflict over climate change is a consequence of the unconscious tendency of individuals to fit their beliefs about risk to positions that dominate in their group, and which in its strongest form would say that this tendency will be reinforced or magnified by greater science comprehension, which can be used to promote such fitting.

3. The study furnishes relatively strong evidence that SCT is incorrect. SCT would predict that cultural polarization abates as science comprehension increases. Even if we had found that the impact of science comprehension on cultural polarization was nil, the study would supply the basis for a high degree of confidence that public conflict over climate change is not a consequence of low science comprehension.

4. The study is consistent with CCT and furnishes modest evidence that CCT in its strongest form is correct. That position would predict that cultural polarization will be greater among individuals with the greatest science comprehension. The results fit that hypothesis--on both climate change & nuclear power risks; the latter helps to furnish more reason to think that the effect is a genuine one for climate change.

But I'd say only modest evidence, mainly because of the design of the study. It's observational--correlational--only. Observed correlations that fit a hypothesis supply supporting evidence in proportion to the degree to which they rule out other explanations. Maybe something else is going on that causes both increased science comprehension & increased polarization in certain people. The only way to tell is through (well designed) experiments. We are conducting some now.

5. You note the effect size of the interaction is modest. Maybe; it's hard to know how to characterize such things in the abstract (and realize, too, that polarization is so great even for low-comprehending respondents that it would be hard for it to grow much for high-comprehending respondents!).

The size of the interaction effect we observed is probably about what you would expect for an observational study, and if the source of the effect is CCT, it should be easy to produce much more dramatic effects through properly designed experiments (Cohen, Cohen, West & Aiken 2003, pp. 297-98). So rather than try to extract more information from the effect size about how confident or not to be in the strong CCT position, it makes sense to do experiments. Again, that's what we are now doing.
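For the 14 billion readers curious about what an "interaction effect" is in this context: the "polarization grows with comprehension" pattern is exactly what an interaction term in a regression captures. Here is a minimal sketch on simulated data--entirely hypothetical numbers and coding, not our study's actual data or model:

```python
# Hypothetical illustration of an interaction effect in linear regression.
# "worldview" and "comprehension" are made-up stand-ins for the study's
# cultural-outlook and science-comprehension measures.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
worldview = rng.choice([-1.0, 1.0], size=n)    # two cultural groups, coded -1 / +1
comprehension = rng.uniform(0.0, 1.0, size=n)  # science literacy/numeracy score

# Simulate an outcome in which the two groups diverge more
# as comprehension rises (interaction coefficient = -0.6)
risk = (0.5 - 0.2 * worldview - 0.6 * worldview * comprehension
        + rng.normal(0.0, 0.3, size=n))

# Design matrix: intercept, two main effects, and the interaction term
X = np.column_stack([np.ones(n), worldview, comprehension,
                     worldview * comprehension])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)

# beta[3] estimates the interaction: how much the gap between the two
# groups widens per unit of comprehension. It should land near -0.6.
print(beta)
```

The point of the sketch is only that the interaction coefficient, not either main effect, is the quantity that measures whether polarization grows with comprehension--which is why its size (and not the main effects') is what the journalist's question turns on.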

6. By itself, then, the study furnishes only modest reason to be confident in CCT (in its strongest form) relative to other possibilities (one has to be able to identify such possibilities, of course, in order to have any reason to doubt CCT; I can think of possibilities, certainly). I myself am more than modestly confident -- but only because this study is not the only thing I count as evidence that (strong) CCT is correct.

7. An aside: Nothing in our study suggests that making people more science literate or numerate  causes  polarization. If CCT is correct, there is something about climate change (and certain other issues) that makes people try to maximize the fit between their beliefs and positions that predominate within their groups, which themselves are impelled into opposing stances on certain facts. That thing is the cause in the practical, normative sense. We should find it and get rid of it.

references:

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3rd ed.). Mahwah, N.J.: L. Erlbaum Associates.

Popper, K. R. (1962). Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Basic Books.

 

Monday
Jan092012

New CCP geoengineering study

New study/paper, hot off the press:

 

Geoengineering and the Science Communication Environment: A Cross-Cultural Experiment

Abstract
We conducted a two-nation study (United States, n = 1500; England, n = 1500) to test a novel theory of science communication. The cultural cognition thesis posits that individuals make extensive reliance on cultural meanings in forming perceptions of risk. The logic of the cultural cognition thesis suggests the potential value of a distinctive two-channel science communication strategy that combines information content (“Channel 1”) with cultural meanings (“Channel 2”) selected to promote open-minded assessment of information across diverse communities. In the study, scientific information content on climate change was held constant while the cultural meaning of that information was experimentally manipulated. Consistent with the study hypotheses, we found that making citizens aware of the potential contribution of geoengineering as a supplement to restriction of CO2 emissions helps to offset cultural polarization over the validity of climate-change science. We also tested the hypothesis, derived from competing models of science communication, that exposure to information on geoengineering would provoke discounting of climate-change risks generally. Contrary to this hypothesis, we found that subjects exposed to information about geoengineering were slightly more concerned about climate change risks than those assigned to a control condition.

Tuesday
May242011

Prison Overcrowding, Recidivism & Crime

I was one of many, many experts contributing to briefs to the Supreme Court in this case. In a 5-4 decision, the Court upheld a decision requiring California to reduce the number of prisoners to a level that the state itself had deemed safe for inmates. Part of the Supreme Court's calculus involved weighing the potential risks and benefits to public safety. The majority cited expert testimony (based on numerous studies) that lowering prison populations may, on net, enhance public safety.

Monday
Dec142009

NYT Sunday Magazine Calls for End to Cognitive Illiberalism

We were delighted to discover that the CCP's study of the Supreme Court's decision in Scott v. Harris made it into the New York Times Sunday Magazine's Ninth Annual Year in Ideas (standard for selection: "the most clever, important, silly and just plain weird innovations..."). It was especially fitting to share that honor with Ruppy, the glow-in-the-dark dog, public fears of whom are being investigated in CCP's synthetic biology risk perception project.

Friday
Aug072009

Easterbrook on Climate Change

Steve Easterbrook has a thoughtful post on his blog about cultural cognition and climate change. In the comments on his post, one of his readers describes a common problem that lay citizens often ascribe to experts: sometimes experts "lie for hire." It's true that members of the public may worry about that, but it's also true that individuals selectively attend to this particular risk when evaluating expert opinion. Dan is in the middle of a study in which he demonstrates precisely this phenomenon, and I'm sure he'll be posting about it here soon!