popular papers

Science Curiosity and Political Information Processing

What Is the "Science of Science Communication"?

Climate-Science Communication and the Measurement Problem

Ideology, Motivated Cognition, and Cognitive Reflection: An Experimental Study

'Ideology' or 'Situation Sense'? An Experimental Investigation of Motivated Reasoning and Professional Judgment

A Risky Science Communication Environment for Vaccines

Motivated Numeracy and Enlightened Self-Government

Making Climate Science Communication Evidence-based—All the Way Down 

Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law 

Cultural Cognition of Scientific Consensus

The Tragedy of the Risk-Perception Commons: Science Literacy and Climate Change

"They Saw a Protest": Cognitive Illiberalism and the Speech-Conduct Distinction 

Geoengineering and the Science Communication Environment: a Cross-Cultural Experiment

Fixing the Communications Failure

Why We Are Poles Apart on Climate Change

The Cognitively Illiberal State 

Who Fears the HPV Vaccine, Who Doesn't, and Why? An Experimental Study

Cultural Cognition of the Risks and Benefits of Nanotechnology

Whose Eyes Are You Going to Believe? An Empirical Examination of Scott v. Harris

Cultural Cognition and Public Policy

Culture, Cognition, and Consent: Who Perceives What, and Why, in "Acquaintance Rape" Cases

Culture and Identity-Protective Cognition: Explaining the White Male Effect

Fear of Democracy: A Cultural Evaluation of Sunstein on Risk

Cultural Cognition as a Conception of the Cultural Theory of Risk


Where am I?

Two events this week:



Last session in Science of Science Communication 2017

Not usually where we end, but the frolics & detours along the way were worthwhile


Another genuinely informative study of consensus messaging



Science of Science Communication seminar: Session 10 reading list (teaching climate change)

More Science of #Scicomm . . .


Using science curiosity ... a fragment

From something I'm working on . . . 

. . . Taken together, these studies suggest that science curiosity ought to be viewed as a signal virtue of democratic citizenship in a culturally diverse society.  The information-processing style of these citizens ought to be propagated and extended as an antidote to the enfeebling impact of group rivalries on citizens’ capacity to identify valid science....

a. [A program to employ science curiosity for purposes of enlightened self-government must answer three questions.] First, how can the stock of citizens who are curious about science be enlarged? Presumably, this disposition forms at a relatively young age.  We thus anticipate that this part of the research program will focus on the development of primary-, middle-, and high-school education materials suited to instill curiosity in students.  To date, efforts to develop such materials have met with little success, primarily because educators have not been equipped with reliable and valid measures to test the impact of various pedagogical strategies aimed at cultivating science curiosity (Blalock et al. 2008).  The APPC/CCP Science Curiosity Scale does furnish a valid and reliable measure for adults, and we are currently engaged in exploratory work to develop a version of the scale that can be used for middle-school students.

b. Second, how can the dispositions of the most science-curious citizens be leveraged to promote more productive engagement with decision-relevant science in our political discourse? Field studies conducted by CCP suggest that members of culturally diverse groups display greater open-mindedness when they observe trusted group members evincing confidence in the validity of decision-relevant science by their actions and words.  To multiply the number of such interactions, it makes sense for communicators to seed culturally diverse groups with members who have already formed positive views of decision-relevant science (Kahan 2015). . . .

c. Third, how can the “frontier” of science curiosity be moved back when communicators engage with ordinary citizens?  Individuals tend to spontaneously and aggressively resist information that challenges positions associated with their group.  The appetite for surprise and wonder associated with science curiosity, in contrast, effectively stifles that form of defensive information processing.  Science curiosity varies across people; but even the most modestly curious individuals possess some level of this disposition, which can be elicited with appropriately constructed materials.  Thus, the same tools that can be used to propagate and leverage science curiosity can also be used to determine which forms of communication are most likely to excite science curiosity—and preempt defensive resistance—among a larger fraction of society.


How does science curiosity relate to various measures of cognitive proficiency?

I’m frequently asked how science curiosity, as we measure it, relates to education and to scores on one or another scale for measuring cognitive proficiency. To answer this question, I think a graphic display of overlapping probability density distributions conveys more information than a correlation coefficient does.

All these differences are “statistically significant” (what difference wouldn’t be at N = 3000!). But are they practically significant?

I can’t confidently say.  They don’t look big to me, at least on the ≥ 90th percentile side. 

But at this stage in our ongoing study of science curiosity, we don’t have enough information to say that disparities of this magnitude will result in noticeable differences in how people behave; all we can say is that the higher a group’s members’ SCS scores, the less vulnerable to politically motivated reasoning they will be.

That's the opposite, of course, of what happens with the cognitive proficiency measures, from CRT to Ordinary Science Intelligence to Actively Open-minded Thinking to Numeracy.
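The “significant at N = 3000, but is it practically significant?” point can be illustrated with a quick simulation. This is a hedged sketch on made-up numbers, not CCP's data: the 0.15-SD group difference, the group sizes, and the two-group comparison are all assumptions for illustration only.

```python
# Illustrative simulation (not CCP's data): with N = 3000 per group,
# even a mean difference of 0.15 SD -- "small" by conventional
# benchmarks -- comes out comfortably "statistically significant."
import math
import random

random.seed(42)
n = 3000
group_a = [random.gauss(0.00, 1.0) for _ in range(n)]  # baseline group
group_b = [random.gauss(0.15, 1.0) for _ in range(n)]  # shifted by 0.15 SD

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t statistic for the difference in means
t = (mean(group_b) - mean(group_a)) / math.sqrt(var(group_a) / n + var(group_b) / n)

# Cohen's d: the same difference expressed in pooled-SD units
d = (mean(group_b) - mean(group_a)) / math.sqrt((var(group_a) + var(group_b)) / 2)

print(f"t = {t:.1f}")   # well above ~1.96, so p < .05
print(f"d = {d:.2f}")   # yet a "small" standardized effect
```

Large samples make the significance test nearly uninformative about effect size, which is why the overlapping-density plots are the better guide here.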


Bookends in the study of individual differences in politically biased comprehension of science

Did a talk at the University of Oklahoma Center for Risk & Crisis Management last Thurs. The questions & discussion were really great.

Here are the main points, rationally reconstructed, that I made (slides here).

1. We know a lot about politically motivated reasoning (PMR) as a “main effect” in the processing of policy-relevant facts.  Generally speaking PMR refocuses individual attention away from “truth-convergent” and toward identity-protective styles of information processing, the goal of which is to promote formation of beliefs that effectively express individuals’ membership in and loyalty to opposing cultural groups.

2. We don’t know as much about individual differences in PMR.  That is, researchers so far have not paid as much attention to dispositions or personality traits that might either accentuate or mitigate the impact of PMR on information processing.

if you have one last click in your life-time supply, this is where to spend it!

3. One thing we do know something about, however, is politically motivated system 2 reasoning: Various forms of cognitive proficiency—ones that no doubt help individuals to determine the truth in most settings—seem to aggravate or magnify PMR.  A good number of observational studies suggest this.  And CCP’s “Motivated Numeracy” study supplies experimental data indicating that individuals high in dispositions essential to science comprehension use cognitive proficiency to form and persist in identity-evincing beliefs.

4. There is also at least one measure of reasoning style that appears to have the opposite effect—i.e., that appears to constrain PMR.  That disposition is science curiosity.  Other science-comprehension-related dispositions seem to magnify PMR as their strength increases. The general effect of increased science curiosity, however, is the same on individuals of varying political outlooks. In addition, individuals who score highest on the Science Curiosity Scale (SCS) also do not polarize as much as their scores on the Ordinary Science Intelligence assessment increase.

5. What are the implications of all this? 

Well, first, it is a mistake to read this literature to imply that increased science comprehension is “bad.” The problem isn’t with that disposition; it is with a science communication environment that has become infused with antagonistic social meanings that transform positions on disputed decision-relevant forms of science into badges of membership in and loyalty to opposing cultural groups.  The upshot, then, is that we should identify means of protecting the science communication environment from being polluted with such meanings so that we can get the benefit of the insights of those citizens who are most proficient in science comprehension.

click me! You'll be astonished

Second, we should be exploring how science curiosity can be used to help detoxify a polluted science communication environment.  Can we foster science curiosity in the population, either as a fixed trait or as a state that characterizes their engagement with controversial issues?  Can we feature the open-mindedness of individuals high in science curiosity as models of the way in which citizens in a pluralistic self-governing community should reason?

You tell me!


*Now* where am I? Oklahoma City!

Am heading out early (today) to see what cool things the researchers at OU Center for Risk and Crisis Management are up to!

Will send postcards.


Science of Science Communication seminar: Session 8 reading list (climate change 2)

Feel free to comment if you are playing along at home . . . .


Hurry up & get your copy of "Expressive rationality of inaccurate perceptions" before sold out!

Now in print --

If you can't leap the paywall, the preprint is pretty close to final.



3 forms of "trust in science" ... a fragment

From something I'm working on . . . 

Three forms of trust in science

There are a variety of plausible claims about the role of science attitudes in controversies over decision-relevant science. These claims should be disentangled.

One such claim attributes public controversy to disagreements over the reliability of science. Generally speaking, people make decisions based on their understandings of the consequences of selecting one course of action over another. Science purports to give them information relevant to identifying such consequences: that vaccinating one’s children will protect them (and others) from serious harms; that the prevailing reliance on fossil fuels as an energy source will generate environmental changes inimical to human wellbeing, etc. How readily people will make use of this type of information will obviously depend on an attitude toward science—viz., that it knows what it is talking about.

We will call this attitude decisional trust in science.  Trust is often used to denote a willingness to surrender judgment to another under conditions that make the deferring party vulnerable.  People evince what we will call “decisional trust” in science when they treat the claims that science makes as worthy of being relied on under conditions  in which misplaced confidence would in fact be potentially very costly to them.

That attitude can be distinguished from what we’ll call institutional trust of science.  We have in mind here the claim that controversy over decision-relevant science often arises not from distrust of validly derived scientific knowledge but from distrust of those who purport to be doing the deriving.  People who want to rely on science for guidance might still be filled with suspicion of powerful institutions—universities, government regulatory authorities, professions and professional associations—charged with supplying them with scientific information.  They might not be willing, then, to repose confidence in, and make themselves vulnerable to, these actors when making important decisions.

Both of these attitudes should be distinguished from still another kind of attitude that figures in some accounts of how science attitudes generate public controversy. We’ll call this one acceptance of the authority of science.

Science in fact competes for authority with alternative ways of knowing—albeit less fiercely today in liberal democratic societies than in other types of societies.  Religions, for example, tend to identify criteria for ascertaining truth that involve divine revelation and the privileged access to the same by particular individuals identified by their status or office.  Science confers the status of knowledge, in contrast, only on what can be ascertained by disciplined observation—in theory, anybody’s—and thereafter adjudicated by human reason—anyone’s—as a valid basis for inference.

The Royal Society motto Nullius in verba—“take no one’s word for it”—reflects a bold and defiant statement of commitment to the authority of science’s way of knowing in relation to alternatives that involve privileged access to revealed truths. This is—or certainly was at the time the Royal Society was founded—a profound stance to adopt.

 But it would of course be silly to think that the volume of knowledge science generates could possibly be made use of without “taking the word” of a great many people committed to generating knowledge in this way.  The authority of science as a way of knowing, in a practical sense, presupposes decisional trust in and institutional trust of science.

But it is perfectly plausible—perfectly obvious—that some people could be uneasy with science because they really don’t view its way of knowing as authoritative relative to one of its competitors.  We should be sure we are equipped to recognize that attitude toward science when we see it, so that we can measure the contribution it could be making to conflicts over science.


Where am I? Knoxville!

A triple header of talks today at U. Tenn.  I've been warmly greeted here, consistent with the historic friendship of our respective states' university systems....


I think the Tennessee player is out of bounds? What do you think?



Science of Science Communication seminar: Session 9 reading list (teaching evolution)

Here's another! (Session 7 was on "science of science filmmaking"; session 8 on "climate, part 2" ... I'll post those "tomorrow"™.)


Science of Science Communication seminar: Session 6 reading list (climate change 1)

Here you go!


Trust in science vs. reliance on religious faith--another fun GSS item

Any surprises here? (In case you don't remember, relatively religious people have more "confidence" in "those running" the "science community" than in "those running" "organized religion.")


Here's the model on which the 2nd figure is based.


Tomorrow: Sea level science communication panel


What do you make of *this*? More on partisan differences in trustworthiness of "university" scientists

Careful now . . . .

Like “yesterday's”™ item (WHICHSCI), this one (SCIIMP1) made a one-time-only appearance in the 2006 GSS.

Companion items asked whether it was important that "the people who do [science] have advanced degrees in their field"; that "conclusions [be] based on solid evidence"; that "researchers carefully examine different interpretations, even ones they disagree with"; and that "the results are consistent with religious beliefs." Responses were all skewed in patterns that reflected a pro-science sensibility; check out the GSS codebook if you are curious about the toplines.

Here is the regression model, in case anyone is interested.


Should I update my priors on partisanship & trust in industry vs. university scientists? By how much & in what direction?!

I'm still stuck in the GSS can.  Actually, it's more like a bag of potato chips; you can't stop munching until you've emptied the thing.

But anyway, the 2006 GSS had an item that solicited respondents' attitudes toward "industry" vs. "university scientists." 

Well, "we all know" that conservatives hold university scientists in contempt for their effeminate, elitist ways & that liberals regard industry scientists as shills.

But here's what GSS says about partisanship & industry vs. university scientists . . . .

WEKS strikes again? Or is this just more survey artifact?

Maybe this ...

is more informative?  Or will people w/ different priors just disagree about the practical significance of this difference in the probability of finding industry scientists less reliable than university ones?...


What can we *really* conclude from the GSS's 2010 item on the risk of GM/GE crops? An expert weighs in

Never fails! My posts from “yesterday”™ and “the day before yesterday”™ have lured a real, bona fide expert to come forward. The expert in this case is William Hallman, the Chair of the Department of Human Ecology and faculty member of the Department of Nutritional Sciences and of the Bloustein School of Planning and Public Policy at Rutgers University. He is also currently a Resident Scholar in the Science of Science Communication initiative at the Annenberg Public Policy Center.

William Hallman:

As you probably suspect, I am sympathetic to your argument that because so few Americans really know anything about them, asking people about the safety of GM crops is problematic in general.  So, starting with the premise that most Americans are unlikely to have a pre-formed opinion about the safety of GM crops before being asked to think about the issue in the survey, I think that we should assume that most of the answers given to the question are impressionistic, and likely influenced by the wording of the question itself. Which is:

“Do you think that modifying the genes of certain crops is: ‘Extremely dangerous for the environment . . . Not dangerous at all for the environment’?”

I agree with the idea suggested by @Joshua, that because the risk targeted is “danger to the environment,” it is plausible that the differences seen are because conservative Republicans may be less likely to endorse the idea that anything is dangerous for the environment.  If you were to ask about risks to human health, you might get a different pattern of responses.

But that’s not all.  The root of the question refers to crops. That is, to plants/agriculture, and not to food.  So, are conservative Republicans also less likely to view crops/agriculture as a threat to the environment in general? My guess is ‘probably’, but I don’t have good data to back up that assertion.

But wait, there’s more. . .  The question doesn’t actually refer to GMO’s.  It asks whether modifying the genes of crops is dangerous.  I don’t know where the specific question falls in the overall line of questioning.  Were there questions about GMOs preceding this?  If not, participants may not have grasped that the question was really about Genetic Engineering.  Technically, you can “modify the genes of certain crops” through standard crossbreeding/hybridization methods. It is, in part, why the FDA has never liked the broad term “Genetic Modification.”  If the question had asked, “Do you think the genetic engineering of crops is dangerous for the environment,” I think you would get a different pattern of responses.  As a side note, I have ancient data showing that more than a decade ago, Americans were as likely to approve of foods produced through crossbreeding as of foods produced through genetic engineering.


More GM food risk data to chew on--compliments of GSS

Okay, then. 

Here are some simple data analyses that reflect how a wider range of GSS science-attitude variables relate to perceptions that GM crops harm the environment, and how that relationship is affected by partisanship.

I’d say they tell basically the same story as my initial analysis of CONSCI, the item that measures “confidence” in “those running” the “scientific community”: basically, that higher, pro-science scores on these measures are associated with less concern about GM crops. This is so particularly among right-leaning respondents; indeed, left-leaning ones don't really move at all when one looks at risk perceptions in relation to the composite "proscience" scale.

There is also a small zero-order correlation (r(1189) = -0.12, p < 0.01) between GENEGEN—the GSS’s 2010 GM risk perception item—and the composite left-right scale that I constructed, which is coded so that higher scores denote greater conservatism.

All of this is out of keeping with the usual finding of a lack of partisan influence on GM food risks. I have reported many times that there is no partisan effect when GM food risks are measured with the Industrial Strength Risk Perception measure.  Surveys conducted by other opinion analysts using different measures have shown the same thing.

So what’s going on?

One possibility, suggested by loyal listener @Joshua, is that the GSS’s GM-concern item looks at people’s anxiety about the impact of GM crops on “the environment” as opposed to the safety of consuming GM foods.  The “environmental risk” cue is enough information for the public—which is otherwise pretty clueless (“cueless”?) about GM risks—to recognize how the issue ought to cohere with their political outlooks.

Seems persuasive to me . . . but what do you—the 14 billion daily readers of this blog—think?!

Oh, one more thing: I did a quick search and found only one paper that addresses partisanship and the GSS’s “GENEGEN” item. If others know of additional ones, please let me & all the readers know.

Oh, one more "one more" thing. Here are the raw data: