
Do conservatives become more concerned with climate risks as their trust in science increases?

It is almost universally assumed that political polarization over societal risks like climate change originates in differing levels of trust in scientists: left-leaning people believe in human-caused climate change, it is said, because they have a greater degree of confidence in scientists; so-called “conservative Republicans,” in contrast, are said to distrust science and scientists and thus to be predisposed to climate skepticism.

But is this right? Or are we looking at another form of the dreaded WEKS disease?

Well, here’s a simple test based on GSS data.

Using the 2010 & 2016 datasets (the only years in which the survey included the climate-risk outcome variable), I cobbled together a decent “trust in science” scale:

scibnfts5: “People have frequently noted that scientific research has produced benefits and harmful results. Would you say that, on balance, the benefits of scientific research have outweighed the harmful results, or have the harmful results of scientific research been greater than its benefits?” [5 points: strongly in favor of beneficial results . . . strongly in favor of harmful results]

consci: “As far as the people running [the science community] are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?”

scientgo: “Scientific researchers are dedicated people who work for the good of humanity.” [4 points: strongly agree . . . strongly disagree]

scienthe: “Scientists are helping to solve challenging problems.” [4 points: strongly agree . . . strongly disagree]

nextgen: “Because of science and technology, there will be more opportunities for the next generation.” [4 points: strongly agree . . . strongly disagree]

advfont: “Even if it brings no immediate benefits, scientific research that advances the frontiers of knowledge is necessary and should be supported by the federal government.” [4 points: strongly agree . . . strongly disagree]

scientbe: “Most scientists want to work on things that will make life better for the average person.” [4 points: strongly agree . . . strongly disagree]

These items formed a single factor and had a Cronbach’s α score of 0.72.  Not bad. I also reverse coded as necessary so that for every item a higher score would denote more rather than less trust of science.
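For readers who want to play along at home, here’s a minimal sketch of that scale-forming step—reverse coding and Cronbach’s α—in Python. The data here are simulated, not the actual GSS file, and the variable names are made up for illustration:

```python
import numpy as np

def reverse_code(x, max_score, min_score=1):
    """Flip a Likert item so that higher values denote more trust."""
    return (max_score + min_score) - x

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# toy data: three 4-point items driven by a single latent "trust" factor
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = np.column_stack([
    np.clip(np.round(2.5 + latent + rng.normal(size=500)), 1, 4)
    for _ in range(3)
])
alpha = cronbach_alpha(items)  # respectable alpha, since items share one factor
```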

Surprisingly, the GSS has never had a particularly good set of climate-change “belief” and risk perception items. Nevertheless, they have sometimes fielded this question: 

TEMPGEN: “In general, do you think that a rise in the world's temperature caused by the `greenhouse effect' is extremely dangerous for the environment . . . not dangerous at all for the environment?” [5 points: extremely dangerous for the environment . . . not dangerous at all for the environment]

I don’t love this item but it is a cousin of the revered Industrial Strength Risk Perception Measure, so I decided I’d give it a whirl. 

I then did some regressions (after, of course, eyeballing the raw data).

In the first model, I regressed a reverse-coded TEMPGEN on the science-trust scale and “left_right,” a composite political outlook scale formed by aggregating the study participants’ self-reported political outlooks (α = 0.66). As expected, higher scores on the science-trust scale predicted responses of “very dangerous” and “extremely dangerous,” while left_right predicted responses of “not very dangerous” and “not dangerous at all.”

If one stops there, the result is an affirmation of  the common wisdom.  Both political outlooks and trust in science have the signs one would expect, and if one were to add their coefficients, one could make claims about how much more likely relatively conservative respondents would be to see greater risk if only they could be made to trust science more.

But this form of analysis is incomplete.  In particular, it assumes that the contribution trust in science and left_right make to perceptions of the danger of climate change are (once their covariance is partialed out) independent and linear and hence additive.

But why assume that trust in science has the same effect regardless of respondents’ ideologies? After all, we know that science comprehension’s impact on perceived climate-change risks varies in relation to ideology, magnifying polarization.  Shouldn’t we at least check to see if there is a comparable  interaction between political outlooks and trust?

So I created a cross-product interaction term and added it to form another regression model.  And sure enough, there was an interaction, one predicting in particular that we ought to expect even more partisan polarization as right- and left-leaning individuals' scores on the trust-in-science scale increased.
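Here’s a hedged sketch of what the two models look like, using simulated stand-ins for the GSS variables (the names and coefficients below are illustrative, not the actual estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
trust = rng.normal(size=n)        # trust-in-science scale (standardized)
left_right = rng.normal(size=n)   # political outlook; higher = more conservative
# simulated outcome with a built-in trust x outlook interaction
risk = (0.3 * trust - 0.4 * left_right
        - 0.25 * trust * left_right + rng.normal(size=n))

def ols(y, predictors):
    """OLS coefficients: [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Model 1: additive only
b_add = ols(risk, [trust, left_right])
# Model 2: adds the cross-product (interaction) term
b_int = ols(risk, [trust, left_right, trust * left_right])
```

If the interaction coefficient is reliably nonzero, the additive model’s story—“just add trust and ideology”—is incomplete.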

Here’s what the interaction looks like:

Geez!  Higher trust promotes greater risk concern for left-leaning respondents but has essentially no effect whatsoever on right-leaning ones.

What to say?...

Well, one possibility that occurs to me is based on biased perceptions of scientific consensus. Experimental data suggest that ordinary persons of diverse outlooks are more likely to notice, assign significance to, and recall instances in which a scientist took the position consistent with their cultural group’s than ones in which a scientist took the opposing position. As a result, people end up with mental inventories of expert opinion skewed toward the position that predominates in their group. If that’s how they perceive the weight of expert opinion, why would they distrust scientists?

But I dunno. This is just post hoc speculation.

Tell me what you think the answer is – and better still, how one could design an experiment to test your favored conjecture against whatever you think the second most likely answer is.


New paper: Misperceptions, Misinformation & the Logic of Identity Protective Cognition

Paper in draft; comments welcome!


Asymmetry thesis--now we're going to need a meta-meta-analysis

Check out the dueling meta-analyses of "asymmetry thesis" studies!

I'll tell you what I think "tomorrow"™, but in meantime, why not tell me what you think "today"™?


Are Republicans and Democrats more divided on, or each more supportive of, federal spending on science? Both, according to Pew Research Center

As the 14 billion readers of this blog are aware, I’ve been culling science-attitude data from the GSS for the last few weeks.  The gist of it is that there’s not a whole lot of difference between the views of politically diverse citizens.

Displaying impeccable timing, on May 1, Pew Research Center released some interesting  data (as their data always are) on support for “increased” federal spending on science that seems to contravene that conclusion.  Under the headline “Democrats far more supportive than Republicans of federal spending for scientific research,” they report a “wide and growing partisan gap . . . over how much government should spend for scientific research.”

The question has a counterpart in the GSS.  While the most recent GSS data is 2016, in the period in which the two surveys—GSS’s and Pew’s—overlap, the former has always suggested much less of a gap in partisan views.

This goes to show how much of a difference subtle wording can make in responses to survey items, and it cautions against relying overmuch on any single measure when trying to assess attitudes. The better approach is to explore larger groups of items that get at the same thing and see if they form a scale, at which point covariances can supply a more reliable yardstick of who feels what way and why.

There are two more interesting things (at least) about Pew’s data.

One is that both Republicans and Democrats supported spending more and opposed spending less for science in the Pew 2017  data relative to their positions in the last 8 yrs. (the only period for which Pew has reported data for both responses). If there were reason to think these kinds of sentiments have any influence on Congress (there’s not much), this would be good news.

The other is that the widened gap between Republicans and Democrats is actually attributable not to a decline in Republicans’ support for more science funding—again, in Pew’s 2017 data, Republicans are more supportive than previously—but to a huge 14-percentage-point jump in the share of Democrats who support more spending.

Why Democratic support increased so dramatically merits more study in itself.

But in any case, relative to previous yrs, the Pew govt-spending data are consistent with the inference that there is more “pro-science” sentiment all  around in 2017 than previously. 

That’s pretty interesting. 

What do you think?


Beware of reacting too fast to the "TOOFAST" item in the GSS

From something I'm working on . . .

c. Authority of science. Some GSS items are tailor-made for detecting conflict over the authority of science—our third science attitude. The one that has been asked the most consistently seeks respondents’ agreement or disagreement with the statement that “one trouble with science is that it makes our way of life change too fast” (TOOFAST). The volatile upticks and downticks in this item have been duly reported in alternately positive and negative ways in the NSF Indicators. Thus, pointing to “a substantial drop” in affirmative responses in the 2012 GSS, the 2014 Indicators (p. 7-28) reported with evident relief that “fewer Americans said they were worried about the pace of change.” Yet two years later, the NSF lamented that “Americans increasingly worry that science is making life ‘change too fast.’ ” “About half of Americans,” the Indicators advised, “expressed this view in 2014, up from about one-third in 2004” (2016, p. 7-4).

On closer inspection, though, there doesn’t seem to be anything about responses to TOOFAST—in whatever direction they move—that should arouse concern about the breadth of respect for the authority of science. A simple zero-order correlation, for example, confirms that at every level of this four-point agree-disagree item, study participants have positive expectations about the future benefits that science will confer on society (Figure 14). Indeed, respondents at every level of “TOOFAST” support the funding of science regardless of whether doing so confers “immediate benefits.”

“TOOFAST” might be measuring something.  But it is not measuring an attitude that reflects ambivalence toward the authority of science. 


"Where is everybody?" The missing "distrust of science" measures

From something I'm working on . . . .

4.1. “Where is everybody?” 

We adopted a critical stance in § 3 on existing measures of generalized science attitudes. We can think of two possible explanations for the absence of evidence more supportive of the view that general attitudes toward science are responsible for particular DRS [decision-relevant science] controversies. One is that there just isn’t any substantial variation in the sorts of attitudes we have been describing, at least in the liberal democratic societies that feature public conflict over science issues.

If a disposition is relatively uniform across the population, it won’t be possible, psychometrically, to form scales to measure it (Tinsley & Brown 2000). Items that admittedly do measure it won’t covary—because they won’t vary. Accordingly, it will be impossible even to find items that one can be confident are measuring the disposition, much less find multiple ones to combine into a scale.
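A toy simulation illustrates the psychometric point: two items that each genuinely tap a disposition stop covarying once the disposition itself stops varying across people. (All numbers here are made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

def two_items(latent, noise_sd=1.0):
    """Two survey items that each tap the latent disposition plus noise."""
    a = latent + rng.normal(scale=noise_sd, size=latent.size)
    b = latent + rng.normal(scale=noise_sd, size=latent.size)
    return a, b

varying = rng.normal(size=n)              # disposition that differs across people
uniform = rng.normal(scale=0.05, size=n)  # near-uniform disposition

r_varying = np.corrcoef(*two_items(varying))[0, 1]  # sizable inter-item correlation
r_uniform = np.corrcoef(*two_items(uniform))[0, 1]  # ~0: no basis for a scale
```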

Is it plausible to think there is this degree of uniformity in “science attitudes” of the sort we identified in the second section? Looking around, we see very little evidence of any meaningful ambivalence toward the authority of science as a way of knowing.  Indeed, we suspect that most people in the US would be hard pressed at this point to even imagine what it would look like to live in a manner that didn’t treat science as authoritative over the kinds of matters to which it claims to speak. To be sure, there are grumblings about the performance of the institutions of science, but people—acting in their own capacity and through their democratically accountable agents—continue to support funding those responsible today for producing science. They do that because they think that the information is valuable for solving their problems: as we said, trust in science for decision making and trust of science institutions are linked.

But the question isn’t strictly how plausible it is that there is a uniformly high level of the various science attitudes we described in § 3. It is instead how much more plausible this conclusion is than the only other explanation we can think of for the absence of measures that detect meaningful levels of variance: that scholars of public attitudes toward science just haven’t realized that the “science attitude” measures they are working with are inadequate, or have been too preoccupied answering related questions to identify better ones.

We think that explanation is improbable.  There are too many smart and highly productive researchers in this field.

Enrico Fermi’s famous Bayesian “proof” against intelligent forms of extraterrestrial life (Gleiser 2016) applies at least as forcefully to the existence of meaningful forms of variance in dispositional trust in science, institutional trust of science, and acceptance of the authority of science: if sources of variance in these dispositions existed, someone would have found them by now.


Gleiser, M. The Simple Beauty of the Unexpected: A Natural Philosopher's Quest for Trout and the Meaning of Everything (University Press of New England, 2016).

Tinsley, H.E.A. & Brown, S.D. Handbook of Applied Multivariate Statistics and Mathematical Modeling (Elsevier Science, 2000).



More GSS data on "anti-science" phantom

Conservative citizens are less likely than liberal ones to believe that humans are causing global warming.

Religiously inclined citizens are less inclined to believe human beings evolved from another species of animal.

I get how one might hypothesize that these results are a consequence of an “anti-science” attitude on the part of individuals so defined.  Some more generalized ambivalence or even hostility to science and/or scientists on the part of these citizens, the argument goes, causes the more specific forms of nonacceptance of scientific evidence relating to these issues.

The problem is that when one looks in the places where one would expect to see the more generalized anti-science attitude, it ain’t there.

I’ve already described how both religious and conservative individuals have a high degree of “institutional confidence” in the “scientific community,” a standard General Social Survey item.

Well, if you look at the more specific “science attitude” items in the GSS, one sees the same thing: more religious citizens and more conservative ones both have pro-science attitudes.  

I pointed out a couple days ago that religious and conservative citizens, just like secular and liberal ones, credit science for making our lives better.

Now consider this:

One can always save the conservative & religious “anti-science” claim by simply treating skepticism about climate change and disbelief in human evolution as being anti-science.

But at that point the claim becomes a (boring) tautology.

Once one equates being anti-science with these positions, “they’re anti-science” is no longer an explanation for why religious and conservative citizens hold these positions—stances that are all the more peculiar once one sees that these citizens, like ones who do believe in climate change and evolution, have generally supportive attitudes toward scientists and scientific research.

Who believes what and why on these issues is an interesting question. But here as elsewhere “anti-science” is a mental roadblock to answering it in a scientific way.


Oxford Handbook on Science of Science Communication: Preorder this now, before it sells out!


At $160.00, this collection is actually much cheaper than most books of its genre. Plus it contains more insight.  How could you go wrong, then, in buying it or buying multiple copies even?


Nature Climate Change commentary: out of the lab & into the field


Are scientists unlikely to be religious persons?! One of the weirdest survey results I've ever seen

That a majority of the public (59%) disagrees with this item shocks me. I would have bet at most only 25% would disagree with this statement; I also would have predicted that religiously inclined people would be much more likely to agree with it.

Can someone explain—and in a way that can be tested (i.e., no just-so stories that evade corroboration)?


A token (or 2) of the Liberal Republic of Science

In honor of the march:


Weekend update: a 10-yr reassessment of "expressive overdetermination"

From Kahan, D.M. The Cognitively Illiberal State. Stan. L. Rev. 60, 115-154 (2007). For sure, I still would define the problem this way. But I'm less sure the solution of "expressive overdetermination" makes sense. It's out of keeping, I think, with SE Fla. political climate science and with cognitive dualism. But maybe the point is that there are more solutions--or potential solutions--than just one...


 The nature of political conflict in our society is deeply paradoxical. Despite our unprecedented knowledge of the workings of the natural and social world, we remain bitterly divided over the dangers we face and the efficacy of policies for abating them.

The basis of our disagreement, moreover, is not differences in our material interests (that would make perfect sense) but divergences in our cultural worldviews. By virtue of the moderating effects of liberal market institutions, we no longer organize ourselves into sectarian factions for the purpose of imposing our opposing visions of the good on one another. Yet when we deliberate over how to secure our collective secular ends, we end up split along exactly those lines.

The explanation, I’ve argued, is the phenomenon of cultural cognition. Individual access to collective knowledge depends just as much today as it ever did on cultural cues. As a result, even as we become increasingly committed to confining law to attainment of goods accessible to persons of morally diverse persuasions, we remain prone to cultural polarization over the means of doing so. Indeed, the prospect of agreement on the consequences of law has diminished, not grown, with advancement in collective knowledge, precisely because we enjoy an unprecedented degree of cultural pluralism and hence an unprecedented number of competing cultural certifiers of truth.

If there’s a way to mitigate this condition of cognitive illiberalism, it is by reforming our political discourse. Liberal discourse norms enjoin us to suppress reference to partisan visions of the good when we engage in political advocacy. But this injunction does little to mitigate illiberal forms of status competition: because what we believe reflects who we are (culturally speaking), citizens readily perceive even value-denuded instrumental justifications for law as partisan affirmations of certain worldviews over others.

Rather than implausibly deny our cultural partiality, we should embrace it. The norm of expressive overdetermination would oblige political actors not just to seek affirmation of their worldviews in law, but to cooperate in forming policies that allow persons of opposing worldviews to do so at the same time. Under these circumstances, citizens of diverse cultural orientations are more likely to agree on the facts—and to get them right—because expressive overdetermination erases the status threats that make individuals resist accurate information. But even more importantly, participation in the framing of policies that bear diverse meanings can be expected to excite self-reinforcing, reciprocal motivations that make a culture of political pluralism sustainable.

Ought, it is said, implies can. Contrary to the central injunction of liberalism, we cannot, as a cognitive matter, justify laws on grounds that are genuinely free of our attachments to competing understandings of the good life. But through a more sophisticated understanding of social psychology, it remains possible to construct a form of political discourse that conveys genuine respect for our cultural diversity.


The challenge of doing science journalism in a polluted science communication environment


Boy, this is a tough one.

It's not hard to see how linking Zika to climate change risks infecting the former with the polarizing virus carried by the latter. Not hard, either, to model such an effect in the lab (Kahan, Jamieson, Landrum & Winneg 2017).

On the other hand, if this piece is conveying the truth about the health hazards being created or magnified by climate change, isn't such reporting essential?

I guess I have two reactions.

First, highlighting Gore is not a good idea.  He brands as a partisan issue anything he gets involved with.

Second, the most important thing is that science journalists engage in shared critical reflection on dilemmas of this kind. Such reflection attests to and helps inculcate a professional norm, one that assures journalists exercise their judgment in a manner sensitive to the impact of their craft on the science communication environment.

That sort of norm, and the quality of deliberation it promotes, were clearly on display in the science community's debate about the effect of their upcoming march on Washington.

The importance of having a collective discussion like that, on all the occasions that warrant it, might turn out to be the most valuable lesson of that event.

So what do you think?



UMass SES program: a new science of science communication for a world itself quite new (lecture summary & slides)

Did lecture yesterday at UMass Amherst to mark the launch of the University’s School of Earth & Sustainability program. Members of the audience asked fantastic questions, leaving me once again regretful that I had not spoken for a shorter period of time in order to make room for more audience reactions.

My message was that the SES program is a model—one of many, but many are needed to build a knowledge base—of how to combine the study of decision-relevant science with the study of science communication. Doing so is essential to assure that the value of the former is recognized by the public and, in particular, not annihilated by knowledge-enervating forms of group status competition.

What causes conflict over decision-relevant science, I argued, is a polluted science communication environment. Devising means of protecting that environment and repairing it when protective measures fail should be one of the primary goals of the science of science communication. 

UMass’s  School of Earth and Sustainability is commendably modeling that understanding, and we can all learn a lot from—and be inspired by--what they are doing.

The expositional strategy I used to guide the audience into critical engagement with this thesis consisted in setting up & knocking down popular misconceptions about the source of public conflict over science, including deficits in public science comprehension;  creeping anti-science attitudes in American society; and orchestrated misinformation.  

Throughout the presentation I also took aim at the asymmetry thesis, which posits that the incidence of identity-protective cognition is disproportionately concentrated on the right in American society. I’ll have more to say about that “tomorrow,”™ when I give my reactions to a pair of newly released opposing meta-analyses on this topic, one by John Jost & another by Peter Ditto & collaborators.

Slides here.


Where am I? version 502

Two events this week:



Last session in Science of Science Communication 2017

Not usually where we end, but the frolics & detours along the way were worthwhile


Another genuinely informative study of consensus messaging



Science of Science Communication seminar: Session 10 reading list (teaching climate change)

More Science of #Scicomm . . .


Using science curiosity ... a fragment

From something I'm working on . . . 

. . . Taken together, these studies suggest that science curiosity ought to be viewed as a signal virtue of democratic citizenship in a culturally diverse society.  The information-processing style of these citizens ought to be propagated and extended as an antidote to the enfeebling impact of group rivalries on citizens’ capacity to identify valid science....

a. [A program to employ science curiosity for purposes of enlightened self-government must answer three questions.] First, how can the stock of citizens who are curious about science be enlarged? Presumably, this disposition forms at a relatively young age.  We thus anticipate that this part of the research program will focus on the development of primary-, middle-, and high-school education materials suited to instill curiosity in students.  To date, efforts to develop such materials have met with little success, primarily because educators have not been equipped with reliable and valid measures to test the impact of various pedagogical strategies aimed at cultivating science curiosity (Blalock et al. 2008).  The APPC/CCP Science Curiosity Scale does furnish a valid and reliable measure for adults, and we are currently engaged in exploratory work to develop a version of the scale that can be used for middle-school students.

b. Second, how can the dispositions of the most science-curious citizens be leveraged to promote more productive engagement with decision-relevant science in our political discourse? Field studies conducted by CCP suggest that members of culturally diverse groups display greater open-mindedness when they observe trusted group members evincing confidence in the validity of decision-relevant science by their actions and words. To multiply the number of such interactions, it makes sense for communicators to seed culturally diverse groups with members who have already formed positive views of decision-relevant science (Kahan 2015). . . .

c. Third, how can the “frontier” of science curiosity be moved back when communicators engage with ordinary citizens? Individuals tend to spontaneously and aggressively resist information that challenges positions associated with their group. The appetite for surprise and wonder associated with science curiosity, in contrast, effectively stifles that form of defensive information processing. Science curiosity varies across people; but even weakly curious individuals possess some measure of this disposition, which can be elicited with appropriately constructed materials. Thus, the same tools that can be used to propagate and leverage science curiosity can also be used to determine which forms of communication are most likely to excite science curiosity—and preempt defensive resistance—among a larger fraction of society.


How does science curiosity relate to various measures of cognitive proficiency?

I’m frequently asked how science curiosity, as we measure it, relates to education and to scores on one or another scale for measuring cognitive proficiency. For answering this question,  I think the information in a graphic display of overlapping probability density distributions is superior to the information in a correlation coefficient.

All these differences are “statistically significant” (what difference wouldn’t be at N = 3000!). But are they practically significant?

I can’t confidently say.  They don’t look big to me, at least on the ≥ 90th percentile side. 

But at this stage in our ongoing study of science curiosity, we don’t have enough information to say that disparities of this magnitude will result in noticeable differences in how people behave; all we can say is that the higher a group’s members score on SCS, the less vulnerable to politically motivated reasoning they will be.

That's the opposite, of course, of what happens with the cognitive proficiency measures, from CRT to Ordinary Science Intelligence to Actively Open-minded Thinking to Numeracy.
