
Science curiosity research program

From something I'm working on. More anon. . . .

The Science Curiosity Research Program

We propose a program for the study of science curiosity as a civic virtue in a polarized society.  

1. It has been assumed (very reasonably) for many years that enlightened self-government demands a science-literate citizenry. Perversely, however, recent research has shown that all manner of reasoning proficiency—from cognitive reflection to numeracy, from actively open-minded thinking to science literacy—magnifies political polarization on policy-relevant science.

2. The one science-comprehension-related disposition that defies this pattern is science curiosity. In our research, we define science curiosity as the motivation to seek out and consume scientific information for personal pleasure. The Cultural Cognition Project Science Curiosity Scale (“SCS”) enables the precise measurement of this disposition in members of the general public.

Developed originally to promote the study of public engagement with science documentaries, SCS has also been shown to mitigate politically motivated reasoning. Politically motivated reasoning consists in the disposition to credit or dismiss scientific evidence in patterns that reflect and reinforce individuals’ membership in identity-defining groups. It is the psychological mechanism that underwrites persistent political controversy over climate change, handgun ownership, the HPV vaccine, nuclear waste disposal, and a host of other issues.

Individuals who score high on SCS, however, display a remarkable degree of resistance to this dynamic.  Not only are they less polarized than other citizens with comparable political predispositions. They also are demonstrably more willing to search out and consume scientific evidence that runs contrary to their political predispositions.

The reason why is relatively straightforward.  Politically motivated reasoning generates a dismissive, identity-protective state of mind when individuals are confronted with scientific evidence that appears to undermine beliefs associated with their group identities.  In contrast, when one is curious, one has an appetite to learn something surprising and unanticipated—a state of mind diametrically opposed to the identity-protective impulses that make up politically motivated reasoning.

These features make science curiosity a primary virtue of democratic citizenship. To the extent that it can be cultivated and deployed for science communication, science curiosity has the power to quiet the impulses that deform human reason and that divert scientific-reasoning dispositions from their normal function of helping democratic citizens recognize valid policy-relevant science.

3.  Perfecting the techniques for cultivating and deploying science curiosity is the central aim of our proposed research program.  Certain of the projects we envision aim to instill greater science curiosity in primary and secondary school students as well as adults.  But still others seek to harness and leverage the science curiosity that already exists in democratic citizens.  Specifically, we propose to use SCS to identify the sorts of communications that arouse curiosity not only in the individuals who already display this important disposition most strongly but also in those who don’t—so that when they are furnished evidence that challenges their existing beliefs, they will react not with defensive resistance but with the open-minded desire to know what science knows.


Guest post: Some weird things in measuring belief in human-caused climate change

From an honest-to-god real expert--a guest post by Matt Motta, a postdoctoral fellow associated with the Cultural Cognition Project and Annenberg Public Policy Center. Matt discusses his recent paper, An Experimental Examination of Measurement Disparities in Public Climate Change Beliefs.

 Do Americans Really Believe in Human-Caused Climate Change? 

Matt Motta (@matt_motta)

Do most Americans believe that climate change is caused by human activities? And what should we make of recent reports (e.g., Van Boven & Sherman 2018) suggesting that self-identified Republicans largely believe in climate change?

Surprisingly, given the impressive amount of public opinion research focused on assessing public attitudes about climate change (see Capstick et al. 2015 for an excellent review), the number of Americans (and especially Republicans) who believe in human-caused climate change is actually a source of popular and academic disagreement.

For example, scholars at the Pew Research Center have found that less than half of all Americans, and less than a quarter of Republicans, believe that climate change is caused by human activity (Funk & Kennedy 2016). In contrast, a team of academic researchers recently penned an op-ed in the New York Times (Van Boven & Sherman 2018; based on Van Boven, Ehret, & Sherman 2018) suggesting that most Americans, and even most Republicans, believe in climate change – including the possibility that it is human caused.

In a working paper, my coauthors (Daniel Chapman, Dominik Stecula, Kathryn Haglin and Dan Kahan) and I offer a novel framework for making sense of why researchers disagree about the number of Americans (and especially Republicans) who believe in human-caused climate change. We argue that commonplace and seemingly minor decisions scholars make when asking the public questions about anthropogenic climate change can have a major impact on the proportion of the public who appears to believe in it.

Specifically, we focus on three common methodological choices researchers must make when asking these questions. First, scholars must decide whether they want to offer “discrete choice” or Likert-style response options. Discrete-choice responses force respondents to choose between alternative stances; e.g., whether climate change is human caused or caused by natural factors. Likert-style response formats instead ask respondents to assess their levels of agreement or disagreement with a particular statement; e.g., whether one agrees or disagrees that climate change is human caused.

Likert-style responses can be subject to “acquiescence bias,” which occurs when respondents simply agree with statements, potentially to avoid thinking carefully about the question being asked. Discrete-choice response formats can reduce acquiescence bias, but allow for less granularity in expressing opinions about an issue. Whereas the Pew study mentioned earlier made use of discrete-choice response options, the aforementioned op-ed made use of Likert-style responses (and found comparatively higher levels of belief in anthropogenic climate change).

Second, researchers must choose whether to offer a hard or a soft “don’t know” (DK) response option. Hard DK options expressly give respondents the opportunity to report that they do not know how they feel about a certain question. Soft DK responses, on the other hand, allow respondents to skip a question, but do not expressly advertise their ability to not answer it.

Hard DKs have the benefit of giving those who truly have no opinion about a particular prompt the opportunity to say so, rather than either guessing randomly or – especially with Likert-style questions – simply agreeing with the prompt. However, expressly offering a DK option risks that respondents will simply indicate that they “don’t know” rather than engage more effortfully with the survey. Again drawing on the two examples described earlier, the comparatively pessimistic Pew study offered respondents a hard DK, whereas the work summarized in the New York Times op-ed did not.

Third, researchers have the ability to offer text that provides basic background information about complex concepts, including (potentially) anthropogenic climate change. This approach has the benefit of making sure that respondents have a common level of understanding about an issue before answering questions about it. However, scholars must choose the words provided in these short “explainers” very carefully – as information presented there may influence how respondents interpret the question.

For example, the research summarized in the New York Times op-ed described climate change as being caused by “increasing concentrations of greenhouse gasses.” Although this text does not attribute greenhouse gas emissions to any particular human source, it is important to keep in mind that skeptics may see climate change as the result of factors having nothing to do with gas emissions (e.g., that the sun itself is responsible for increased temperatures). Consequently, this text could lead respondents toward providing an answer that better matches scientific consensus on anthropogenic climate change.

We test the impact of these three decisions on the measurement of anthropogenic climate change attitudes in a large, demographically diverse online survey of American adults (N = 7,019). Respondents were randomly assigned to answer one of eight questions about their belief in anthropogenic climate change, each varying one of the methodological decisions described above while holding all other factors constant.
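The eight conditions follow from fully crossing the three binary design choices discussed above. A minimal sketch of the design space (the factor labels, and the mapping of condition numbers to combinations, are my own illustration rather than the paper's exact wording):

```python
from itertools import product

# The three binary methodological choices: response format, DK option,
# and explainer text. Labels here are illustrative, not the paper's.
factors = {
    "response format": ["discrete choice", "Likert"],
    "don't-know option": ["hard DK", "soft DK"],
    "explainer text": ["no explainer", "explainer"],
}

# Fully crossing three binary factors yields 2 x 2 x 2 = 8 conditions.
conditions = list(product(*factors.values()))
for i, combo in enumerate(conditions, start=1):
    print(f"Condition {i}: " + ", ".join(combo))
```

Under this (assumed) ordering, Condition 1 is the discrete-choice/hard-DK/no-explainer cell and Condition 8 the Likert/soft-DK/explainer cell, which matches the two conditions singled out below.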

The results are summarized in the figure below. Hollow circles indicate the proportion of respondents in each condition who report believing in human-caused climate change, with 95% confidence intervals extending outward from each one. The left-hand pane plots these quantities for the full sample, and the right-hand pane does the same for self-identified Republicans only. The elements varied in each experimental condition are listed in the text just below the figure.
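For readers who want a feel for how wide those intervals should be, a 95% confidence interval for a proportion can be sketched with the standard normal approximation. This is illustrative only: the equal allocation of the N = 7,019 respondents across eight cells (roughly 877 each) is my assumption, and the paper may use a different interval than the Wald formula shown here.

```python
import math

def prop_ci(p_hat: float, n: int, z: float = 1.96) -> tuple:
    """Normal-approximation (Wald) 95% confidence interval for a
    proportion p_hat estimated from n respondents."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# Hypothetical cell size: N = 7,019 split evenly across 8 conditions.
lo, hi = prop_ci(0.50, 877)
print(f"95% CI around 50%: [{lo:.3f}, {hi:.3f}]")  # roughly +/- 3 points
```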

Generally, the results suggest that minor differences in how we ask questions about anthropogenic climate change can increase the number of Americans (especially Republicans) who appear to believe in it. For example, Likert-style response options (conditions 5–8) always produce higher estimates of belief among Americans and Republicans than discrete-choice questions (conditions 1–4).

At times, these differences are quite dramatic. For example, Condition 1 mimics the way Pew (i.e., Funk & Kennedy 2016) asks questions about anthropogenic climate change: discrete-choice questions that offer a hard DK option and no “explainer” text. This method suggests that 50% of Americans, and just 29% of Republicans, believe that climate change is caused by human activities.

Condition 8, on the other hand, mimics the method used in the research summarized in the aforementioned op-ed: Likert-style response options, text explaining that climate change is caused by the greenhouse effect, and no explicit DK option. In sharp contrast, this method finds that 71% of Americans and 61% of Republicans believe that climate change is human caused. This means that the methods used in Condition 8 more than double the number of Republicans who appear to believe in human-caused climate change.

We think that these results offer readers a useful framework for making sense of public opinion about anthropogenic climate change. Our research urges readers to pay careful attention to the way in which public opinion researchers ask questions about anthropogenic climate change, and to consider how those decisions might increase (or decrease) the number of Americans who appear to believe in it. Of course, we do not propose a single measurement strategy as a “gold standard” for assessing opinion about anthropogenic climate change. Instead, we hope that these results can help readers be better consumers of public opinion research on climate change.


Capstick, S., Whitmarsh, L., Poortinga, W., Pidgeon, N., & Upham, P. International trends in public perceptions of climate change over the past quarter century. Wiley Interdisciplinary Reviews: Climate Change 6(1), 35-61 (2015).

Ehret, P.J., Van Boven, L., & Sherman, D.K. Partisan Barriers to Bipartisanship: Understanding Climate Policy Polarization. Social Psychological and Personality Science (2018).

Funk, C., & Kennedy, B. The politics of climate. Pew Research Center (2016, Oct 4).

Van Boven, L., & Sherman, D.K. Actually, Republicans Do Believe in Climate Change. New York Times (2018, July 28).

Van Boven, L., Ehret, P.J., & Sherman, D.K. Psychological barriers to bipartisan public support for climate policy. Perspectives on Psychological Science 13(4), 492-507 (2018).



Science literacy, science curiosity, and education

A science-curious commenter asked me what the relationship was between educational attainment and scores on the Ordinary Science Intelligence assessment (OSI) and on the Science Curiosity Scale (SCS), respectively.

I tried to entice him or her to make a prediction, so that we could have a proper WSMD? JA!, but he or she then fell silent.  I had the data ready to report, though, and figured they were interesting enough to share with the site's 12.3 billion readers (yes, we’re down 1.7 billion; suspiciously, subscriptions to the Gelman blog have increased by that amount).

Matching the pattern observed in relation to other demographic characteristics, the science-curiosity gap between individuals of relatively low and relatively high education levels is quite modest in comparison to the gap between these respective groups' OSI scores. (Consider, too, how much more informative, in a practical sense, the overlapping PDDs are compared to the regression-line plots.)

More evidence, then, that the social and economic conditions that generate inequality in science comprehension pose a much smaller barrier to being the sort of person who is awed by the insights of scientific inquiry. 

I think that’s pretty cool.


Some (very compact) reflections on the science communication environment; on the pollution of it; and on the need for self-conscious, evidence-informed protection of it

My answer to two questions--what sorts of emerging technologies need science communication attention, and what form-- in preparation for an upcoming roundtable discussion.

            0.  The “science communication environment” (SCE) comprises the sum total of institutions, processes, and norms that connect public decisionmaking with the best available scientific evidence. Conditions that disrupt these connections can be viewed as forms of SCE pollution. One particularly toxic form of such pollution consists in social meanings that fuse positions on science-informed issues with citizens’ cultural identities. This dynamic is at the root of polarization over climate change, nuclear power, and other issues (Jamieson, Kahan & Scheufele 2017).

            1.  The science of science communication supplies methods for predicting which new forms of decision-relevant science are vulnerable to this pathology (Kahan 2015). Genome editing, geoengineering, and AI all merit investigation because of their affinity with existing technologies that generate polarization.

            2. The U.S. is hobbled by the absence of any agency charged with protecting SCE. The resulting void leaves the fate of new forms of decision-relevant science vulnerable to chance and strategic behavior. The consequences of such neglect are illustrated by the career of the HPV vaccine (Kahan 2013). Just as OMB now screens all administrative actions for costs and benefits, some agency could evaluate the SCE impact of such actions.


Jamieson, K.H., Kahan, D.M. & Scheufele, D.A. eds.  Oxford Handbook of the Science of Science Communication (Oxford Univ. Press, New York, 2017).

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm., 14, 1-12 (2015).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).


new working paper: How to inflate/deflate measurement of Republican belief in human-caused climate change

This paper was written more or less in response to Van Boven, Ehret & Sherman (2018), who (in an academic paper and in a companion New York Times op-ed) reported the finding that "most Republicans believe in climate change," including human-caused climate change. For me, the bottom line is that scholars should be careful not to mistake survey artifacts for shifts in public opinion.


Van Boven, L., Ehret, P.J. & Sherman, D.K. Psychological Barriers to Bipartisan Public Support for Climate Policy. Perspectives on Psychological Science 13, 492-507 (2018).


Weekend update--Cultural cognition dictionary/glossary whatever

If you are interested in some sight-site-seeing, check out the CC dictionary/glossary or whatever. It now has over three dozen entries.



Science literacy vs. Science Curiosity

The social, cultural, and economic influences that generate inequalities in science comprehension have considerably less impact on science curiosity. 

That's how I interpret these data: