
Entries by Dan Kahan (30)


Shut up & update! . . . a snippet

Also something I've been working on . . . .

1. “Evidence” vs. “truth”—the law’s position. The distinction between “evidence for” a proposition and the “truth of” it is inscribed in the legal mind through professional training and experience.

Rule 401 of the Federal Rules of Evidence defines “relevance” as “any tendency” of an item of proof “to make a fact … of consequence” to the litigation either “more or less probable” in the estimation of the factfinder. In Bayesian terms, this position is equivalent to saying that an item of proof is “relevant” (and hence presumptively admissible; see Fed. R. Evid. 402) if, in relation to competing factual allegations, the likelihood ratio associated with that evidence is either less than or greater than 1 (Lempert 1977).  

Folksy idioms—e.g., “a brick is not a wall” (Rule 401, advisory committee notes)—are used to teach prospective lawyers that this “liberal” standard of admissibility does not depend on the power of a piece of evidence to establish a particular fact by the requisite standard of proof (“more probable than not” in civil cases; “beyond a reasonable doubt” in criminal ones).

Or in Bayesian terms, we would say that a properly trained legal reasoner does not determine “relevance” (and hence admissibility) by asking whether an item of proof will on its own generate a posterior estimate either for or against the “truth” of that fact. Again, because the process of proof is cumulative, the only thing that matters is that a particular piece of evidence have a likelihood ratio different from 1 in relation to competing litigation hypotheses.
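In odds form, the point is easy to make concrete. A minimal sketch (all numbers invented for illustration):

```python
# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio (LR).
# Under Rule 401's "any tendency" standard, an item of proof is relevant iff
# its LR differs from 1 -- regardless of whether it alone pushes the posterior
# past any standard of proof.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds after conditioning on one item of proof."""
    return prior_odds * likelihood_ratio

prior_odds = 0.25   # factfinder starts out skeptical: 1-to-4 odds
lr = 2.0            # the item of proof is twice as probable if the fact is true

posterior_odds = update_odds(prior_odds, lr)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability: {posterior_prob:.2f}")  # 0.33
```

The evidence moves the factfinder's estimate (LR ≠ 1) even though the resulting posterior still falls short of the civil "more probable than not" threshold: relevance and sufficiency are different questions, which is just the "brick is not a wall" point in arithmetic form.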

2. “I don’t believe it . . . .” This popular response, among both pre- and post-publication peer reviewers, misses the distinction between “evidence for” and the “truth of” an empirical claim.

In Bayesian terms, the reviewer who treats his or her “belief” in the study result as informative is unhelpfully substituting his or her posterior estimate for an assessment of the likelihood ratio associated with the data. Who cares what the reviewer “believes”? Disagreement about the relative strength of competing hypotheses is, after all, the occasion for data collection! If a judge or lawyer can “get” that a “brick is not a wall,” then surely a consumer of empirical research can, too: the latter should be asking whether an empirical study has “any tendency … to make a fact … of consequence” to empirical inquiry either “more or less probable” in the estimation of interested scholars (this is primarily a question of the validity of the methods used and the probative weight of the study finding).

That is, the reviewer should have his or her eyes glued to the likelihood ratio, and not be distracted by any particular researcher’s posterior.

3.  “Extraordinary claims require extraordinary proof . . . .” No, they really don’t.

This maxim treats the strength with which a fact is held to be true as a basis for discounting the likelihood ratio associated with contrary evidence. The scholar who takes this position is saying, in effect, “Your result should see the light of day only if it is so strong that it flips scholars from a state of disbelief to one of belief, or vice versa.” 

But in empirical scholarship as in law, “A brick is not a wall.”  We can recognize the tendency of a (valid) study result to make some provisional apprehension of truth less probable than it would otherwise be while still believing—strongly, even—that the contrary hypothesis so supported is unlikely to be true.

* * *

Or to paraphrase a maxim sometimes (mis)credited to Feynman (“Shut up and calculate!”): “Shut up & update!”


Federal Rules of Evidence (2018) & Advisory Committee Notes.

Lempert, R.O. Modeling relevance. Michigan Law Review, 75, 1021-1057 (1977).



WSMD? JA! Who perceives risk in Artificial Intelligence & why? Well, here's a start


This is approximately the 470,331st episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

How would you feel if I handed over the production of this Blog (including the drafting of each entry) to an artificially intelligent agent? (Come to think of it, how do you know I didn’t do this months or even years ago?)

I can’t answer with nearly the confidence that I’d like, but having looked more closely at some data, I think I know about 1500% more, and, even better, 1500% less, about who fears artificial intelligence, who doesn’t, & why.

The data analysis was performed in response to a WSMD? JA! query by @RossHartshorn, who asked:


In a follow-up email, @Ross offered up his own set of hypotheses, thereby furnishing me with a working conjecture to try to test with CCP data.

In all of the models that follow, I use the “Industrial Strength Risk Perception Measure” (ISRPM)—because that’s all I have got & because having that definitely gives me a pretty damn good divining rod should I care to go out hunting for even more relevant data in future studies.

The story that the Figure above is trying to sell, essentially, is that, on their own, scores on the Ordinary Science Intelligence (OSI) assessment; on religiosity (measured with items on frequency of church attendance, frequency of prayer, and importance of religion in life; α = 0.86); and on a right-left political outlook scale don't have much of a relationship with public perceptions of AI risks.

But @Ross didn’t posit that these influences would have much impact “on their own.”  He predicted there’d be a likely interaction—that is, that each might exert some impact conditional on the level of the other.

This is what your brain looks like on polarization

That’s an easy proposition to test with a regression model that contains the relevant predictors and their cross-product interaction term.
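A minimal sketch of that kind of model, fit to simulated data (the variable names and coefficients are invented for illustration; this is not the CCP dataset):

```python
# Test a conditional effect by including a cross-product interaction term
# in an OLS regression, estimated here with plain numpy least squares.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
relig = rng.normal(size=n)   # religiosity (z-score), hypothetical
osi = rng.normal(size=n)     # Ordinary Science Intelligence (z-score), hypothetical

# Simulate an ISRPM-style outcome in which OSI's effect depends on religiosity
isrpm = 0.1 * relig - 0.2 * osi - 0.15 * relig * osi + rng.normal(size=n)

# Design matrix: intercept, main effects, and the cross-product term
X = np.column_stack([np.ones(n), relig, osi, relig * osi])
beta, *_ = np.linalg.lstsq(X, isrpm, rcond=None)

# beta[3] estimates the interaction; it should land near the true -0.15
print(f"estimated interaction coefficient: {beta[3]:.3f}")
```

A reliably nonzero coefficient on the cross-product term is what "religiosity and OSI interact" means in regression form; the main-effect coefficients alone can't show it.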

I also stuck Ordinary Science Intelligence into the mix because it seemed to me that it might interact, too, with the identity measures—something that would suggest MS2R was afoot and possibly generating results that might support it. (Sure wish the relevant dataset had the Science Curiosity Scale in it...)


So this is what that model tells us is going on, at least in this dataset. (& yes, I did try to fit a model with a quadratic term for OSI--to try to capture its apparent nonlinearity in the Loess figure; it didn't improve the model fit.)

There’s no interaction between political outlook and religiosity, nor any between political outlook and either religiosity or OSI scores.

But religiosity and OSI do interact. Consider:

Yet they don’t interact in the manner that one would expect if MS2R were at work. Rather they interact in the manner one might expect OSI to work generally—viz., by closing rather than enlarging the polarization gap.

The effect appears to be pretty mild, but it still seems to capture something that @RH’s theorizing was on to—namely that indicators of cultural identity (like religion, political outlooks, cultural worldviews etc.) can interact with OSI in ways more subtle than either the “bounded rationality thesis” or MS2R might anticipate.

Maybe others will have their own stories, from which additional testable hypotheses might be extracted.

But this was an eye-opener for me, thanks to @R.

He definitely deserves some CCP swag!


Practicing what I teach -- or giving it the old college try . . .

A synopsis to be distributed before an upcoming talk:

Are smart people ruining democracy? What about curious ones?

Is political polarization over the reality of climate change, the efficacy of gun control, the safety of nuclear power, and other policy-relevant facts attributable to a simple deficit in public science literacy? Nope. I will review study results showing that polarization on complex factual issues rises in lockstep with culturally diverse citizens' capacity to comprehend scientific evidence generally. The talk will also review surprising evidence about how curiosity affects polarization -- but you'll have to come to the talk if you want to learn more about that!

So . . . get it? get it?


Cognitive Science of Religion & acceptance of evolution

Rational reconstruction of remarks at this event. Slides here.

1. The presentations today are united by a common framework: the Cognitive Science of Religion (CSR). CSR uses the concepts and methods of the decision sciences to enrich scholarly understanding of religious behavior (Barrett 2007; van Slyke 2016).

The CSR framework, however, does not uniquely determine any particular account of religiosity. Indeed, in my remarks I will describe a pair of opposing CSR research programs, and then use CSR to conduct a simple empirical test of their relative plausibility.

2. The first CSR program can be called the “bounded rationality theory of religion” (BRTR)*.  Drawing on dual process reasoning theories, BRTR attributes religiosity to an overreliance on heuristic, “System 1” information processing.  From childhood onward, we encounter functional objects—from pencils to automobiles, from roller skates to intercontinental ballistic missiles, etc.— that were purposefully constructed by an intelligent agent to advance her aims.  It is therefore quite natural (so to speak) to infer that functional objects in nature—from the stripes of zebras to the opposable thumbs of human beings—are likewise the inventions of a purposive, intelligent agent (e.g., God).

Overcoming this intuition is hard.  To do so, we must employ conscious, effortful “System 2” reasoning. This taxing form of information processing not only interrogates the System 1 intuition of intelligent design in nature; it also motivates us to attend to scientific evidence, which documents the contribution that mindless natural processes, tempered by natural selection, make to the order we observe in the cosmos (Shtulman, 2014; Bloom & Weisberg, 2007; Shtulman  & Calabi, 2012; Gervais 2015).

3. The second CSR program is the “expressive rationality theory of religion” (ERTR). ERTR resists the equation of religion with irrationality. Forming accurate perceptions of facts is not the only thing that people do with their reason; they also use it to secure and protect their status in groups, the members of which are united by their shared cultural values.  Religion is a font of such groups.  If their goal is to convey their membership in and loyalty to a religious community, it is perfectly rational for people to engage information in a manner that induces beliefs commonly associated with such a group (Kahan & Stanovich 2016; cf. Yonker et al. 2016).

People can be expected, moreover, to use both System 1 and System 2 reasoning to advance this end (cf. Yonker et al. 2017). Rather than use their cognitive proficiency to resist intuitions supportive of their religious identities, people can be expected to use their capacity for conscious, effortful information processing to ferret out evidence supportive of their groups’ positions and to explain away evidence that threatens to undermine the same. This is the same process (“Motivated System 2 Reasoning,” or MS2R) that explains why the citizens who are most proficient in critical reasoning are also the most polarized on politically contested facts relating to climate change, gun control, nuclear power, etc.

4. Whether humans evolved from earlier species of primates is a central point of contention among (certain) religious groups, on the one hand, and secular ones, on the other. Examining how this belief relates to religious identity and to cognitive reasoning proficiency thus furnishes an opportunity to test the relative plausibility of BRTR and ERTR.

In such a test, one administers to an appropriate sample (e.g., a large general population one) three types of measures.  The first consists of one or another cognitive proficiency assessment, such as the Cognitive Reflection Test (CRT) or the Ordinary Science Intelligence scale (OSI_2.0).  The second examines respondents’ religiosity (assessed with self-reported frequency of church attendance, frequency of prayer, importance of religion in life, etc.).  Finally one looks at the respondents’ “belief” in human evolution.

BRTR and ERTR make opposing predictions about what we’ll see. If, as BRTR conjectures, religious belief signifies over-reliance on heuristic System 1 reasoning, then acceptance of human evolution should increase as individuals become more proficient in, and more disposed to use, System 2 reasoning. Religious respondents are much more likely than nonreligious ones to reject human evolution. But if BRTR is correct, then we ought to observe this gap shrink as religious respondents become progressively more proficient in cognition and hence progressively more likely to interrogate their intuition and more likely to consider the strength of the scientific evidence on humans’ natural history.

ERTR’s predictions are almost the opposite of BRTR’s. If ERTR is correct that positions on human evolution are identity-expressive, then the gap between religious and nonreligious respondents should increase as they become more adept at alternately crediting identity-affirming evidence and dismissing identity-threatening evidence on human evolution. If that prediction is right, moreover, then we should expect these tendencies largely to cancel each other out when we investigate the impact of higher cognitive proficiency for the respondents as a whole.
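The logic of the test can be sketched with simulated data (everything here is invented for illustration; under ERTR the religious/nonreligious gap should widen as proficiency rises):

```python
# Simulate ERTR-style data and measure the polarization "gap" in evolution
# acceptance at low vs. high cognitive proficiency (CRT score).
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
religious = rng.random(n) < 0.5
crt = rng.integers(0, 4, size=n)  # 0-3 correct CRT answers, hypothetical

# Proficiency amplifies the identity-congruent answer for each group
p_accept = np.where(religious, 0.40 - 0.08 * crt, 0.60 + 0.08 * crt)
accept = rng.random(n) < p_accept

def gap(level: int) -> float:
    """Difference in acceptance rates, nonreligious minus religious, at one CRT level."""
    nonrel = accept[(crt == level) & ~religious].mean()
    rel = accept[(crt == level) & religious].mean()
    return nonrel - rel

# Under ERTR the gap at the top of the CRT scale exceeds the gap at the bottom
print(f"gap at CRT=0: {gap(0):.2f}, gap at CRT=3: {gap(3):.2f}")
```

A BRTR-style simulation would flip the sign of the CRT term for religious respondents, so the same `gap` comparison discriminates between the two programs.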

The evidence clearly supports ERTR. As it predicted, religious and nonreligious subjects do not converge as their CRT scores increase. On the contrary, polarization increases.

The effect is even more dramatic when investigated in relation to scores on the OSI_2.0 assessment. Likely this is the case because CRT, as a result of its difficulty, fails to measure variance in half the population, whereas OSI_2.0 measures variance uniformly across the entire population.

In any case, because nonreligious subjects become more convinced and religious ones less as their cognitive proficiency increases, the net effect of higher CRT and OSI_2.0 scores is close to zero—another result that defies BRTR‘s, but not ERTR’s predictions.

Finally, we do a very compact experiment that corroborates these results. Again, religious respondents are much less likely to accept human evolution than are nonreligious ones, a gap that only increases as cognitive reflection and science comprehension increase. However, if we simply add to the evolution item (“Human beings, as we know them today, developed from earlier species of animals. (True/false)”) the clause “According to the theory of evolution, . . . ,” the gap between religious and nonreligious respondents nearly disappears, particularly among respondents who score the highest on cognitive proficiency measures (Kahan 2017).

So religious respondents, it turns out, are as familiar as nonreligious ones with what science knows about human evolution.  All one has to do to coax that knowledge-revealing response out of them is remove the confound between an identity-affirming and a science-comprehending answer to the conventional evolution-acceptance question.   Nothing so extravagant as BRTR‘s dual-process theory explanation is needed to understand why religious and nonreligious respondents answer the standard item differently.

5.  But now for the meta-experiment: did you feel like you learned anything about religious belief or practice from this Cognitive-Science-of-Religion guided analysis of who believes what and why about human evolution? Do you think others would?

*I use the subscript “R” to emphasize that the “bounded rationality theory of religion” is related to but distinct from the forms of bounded rationality that generate group-based perceptions of risk and policy-related facts.


Barrett, J.L. Cognitive Science of Religion: What Is It and Why Is It Important? Religion Compass 1, 768-786 (2007).

Bloom, P. & Weisberg, D.S. Childhood origins of adult resistance to science. Science 316, 996-997 (2007).

Gervais, W.M. Override the controversy: Analytic thinking predicts endorsement of evolution. Cognition 142, 312-321 (2015).

Kahan, D.M. ‘Ordinary science intelligence’: a science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. J Risk Res 20, 995-1016 (2017).

Kahan, Dan M. and Stanovich, Keith, Rationality and Belief in Human Evolution. Cultural Cognition/Annenberg Public Policy Center Working Paper No. 5. (September 14, 2016) Available at SSRN:


Shtulman, A. Science v. intuition: why it is difficult for scientific knowledge to take root. Skeptic (Altadena, CA) 19, 46-51 (2014).

Van Slyke, J.A. The cognitive science of religion (Routledge, 2016).

Yonker, J.E., Edman, L.R.O., Cresswell, J. & Barrett, J.L. Primed analytic thought and religiosity: The importance of individual characteristics. Psychology of Religion and Spirituality 8, 298-308 (2016).


A few more glossary entries: dual process reasoning; bounded rationality thesis; and C^4

I haven't had time to finish my "postcard" from Salt Lake City but here are some more entries for the glossary to tide you over:

Dual process theory/theories. A set of decisionmaking frameworks that posit two discrete modes of information processing: one (often referred to as “System 1”) that is rapid, intuitive, and emotion pervaded; and another (often referred to as “System 2”) that is deliberate, self-conscious, and analytical. [Sources: Kahneman, American Economic Review, 93(5), 1449-1475 (2003); Kahneman & Frederick in Morrison (Ed.), The Cambridge handbook of thinking and reasoning (pp. 267-293), Cambridge University Press. (2005); Stanovich & West, Behavioral and Brain Sciences, 23(5), 645-665 (2000). Added Jan. 12, 2018.]

Bounded rationality thesis (“BRT”). Espoused most influentially by Daniel Kahneman, this theory identifies over-reliance on heuristic reasoning as the source of various observed deficiencies (the availability effect; probability neglect; hindsight bias; hyperbolic discounting; the sunk-cost fallacy, etc.) in human reasoning under conditions of uncertainty. Nevertheless, BRT does not appear to be the source of cultural polarization over societal risks. On the contrary, such polarization has in various studies been shown to be greatest in the individuals most disposed to resist the errors associated with heuristic information processing. [Sources: Kahan, Emerging Trends in the Social and Behavioral Sciences (2016); Kahneman, American Economic Review, 93(5), 1449-1475 (2003); Kahneman & Frederick in Morrison (Ed.), The Cambridge handbook of thinking and reasoning (pp. 267-293), Cambridge University Press (2005); Kahneman, Slovic & Tversky, Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press (1982). Added Jan. 12, 2018.]

Cross-cultural cultural cognition (“C^4”). Describes the use of the Cultural Cognition Worldview Scales to assess risk perceptions outside of the U.S. So far, the scales have been used in at least five nations other than the U.S. (England, Austria, Norway, Slovakia, and Switzerland). [CCP Blog, passim. Added Jan. 12, 2018.]



Hey, want to know something? Science curiosity is a culturally random variable!


Motivated System 2 Reasoning (MS2R): a Research Program


1. MS2R in general.  “Motivated System 2 Reasoning” (MS2R) refers to the affinity between cultural cognition and conscious, effortful information processing. 

In psychology, “dual process” theories distinguish between two styles of reasoning. The first, often denoted as “System 1,” is rapid, intuitive, and emotion pervaded. The other—typically denoted as “System 2”—is deliberate, conscious, and analytical.

The core of an exceedingly successful research program, this conception of dual process reasoning has been shown to explain the prevalence of myriad reasoning biases. From hindsight bias to confirmation bias; from the gambler’s fallacy to the sunk-cost fallacy; from probability neglect to the availability effect—all are positively correlated with over-reliance on heuristic, System 1 reasoning. By the same token, an ability and disposition to rely instead on the conscious, effortful style associated with System 2 predicts less vulnerability to these cognitive miscues.

A species of motivated reasoning, cultural cognition refers to the tendency of individuals to selectively seek out and credit evidence in patterns that reflect the perceptions of risk and other policy-relevant facts associated with membership in their cultural group. Cultural cognition can generate intense and enduring forms of cultural polarization where such groups subscribe to conflicting positions.

Because in such cases cultural cognition is not a truth-convergent form of information processing, it is perfectly plausible to suspect that it is just another form of bias driven by overreliance on heuristic, System 1 information processing.

But this conjecture turns out to be incorrect.

It’s incorrect not because cultural cognition has no connection to System 1 styles of reasoning among individuals who are accustomed to this form of heuristic information processing.

Rather it is wrong (demonstrably so) because cultural cognition does not abate as the ability and disposition to use System 2 styles of reasoning increase. On the contrary, those members of the public who are most proficient at System 2 reasoning are the most culturally polarized on societal risks such as the reality of climate change, the efficacy of gun control, the hazards of fracking, the safety of nuclear power generation, etc.

MS2R comprises the cognitive mechanisms that account for this startling result.

2. First generation MS2R studies. Supported by a National Science Foundation grant (SES-0922714), the existence and dynamics of MS2R were established principally through three studies:

  • Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012). The study reported in this paper directly tested the competing hypotheses that polarization over climate change risks was associated with over-reliance on heuristic System 1 information processing and that such polarization instead increased with science literacy and numeracy. The first conjecture implied that as those aptitudes, which are associated with basic scientific reasoning proficiency, increased, polarization among competing groups should abate. In fact, exactly the opposite occurred, a result consistent with the second conjecture, which predicted that those individuals most adept at System 2 information processing could be expected to use this reasoning proficiency to ferret out information supportive of their group’s respective positions and to rationalize rejection of the rest. These effects, moreover, were highest among subjects who themselves achieved the highest scores on the CRT.

  • Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013). The experimental study in this paper demonstrated how proficiency in cognitive reflection, the aptitude most commonly associated with use of System 2 information processing, magnified polarization over the validity of evidence of the relative closed-mindedness of individuals who took one or another position on the reality of human-caused climate change: where scores on the Cognitive Reflection Test were asserted to be higher among “climate skeptics,” ideologically right-leaning subjects found the evidence that the CRT predicts open-mindedness much more convincing than did individuals who were left-leaning in their political outlooks; where, in contrast, CRT scores were represented as being higher among “climate believers,” left-leaning subjects found the evidence of the validity of the CRT more convincing than did right-leaning ones.

  • Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government. Behavioural Public Policy 1, 54-86 (2017). This paper reports an experimental study on how numeracy interacts with cultural cognition. Numeracy is an aptitude to reason well with quantitative data and to draw appropriate inferences from such information. In the study, it was shown that individuals who scored highest on a numeracy assessment were again the most polarized, this time on the inferences to be drawn from data from a study on the impact of gun control: where the data, reported in a standard 2x2 contingency table, supported the position associated with their ideologies (either that gun control reduces crime or that gun control increases crime), subjects high in numeracy far outperformed their low-numeracy counterparts. But where the data supported an inference contrary to the position associated with subjects’ political predispositions, those highest in numeracy performed no better than their low-numeracy counterparts on the very same covariance-detection task.
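The covariance-detection logic at issue can be sketched as follows (the cell counts are illustrative of the study design, not quoted from the paper):

```python
# The correct inference from a 2x2 contingency table requires comparing
# *proportions* across rows, not raw cell counts -- the step low-numeracy
# (and motivated high-numeracy) subjects tend to skip.

#                 crime decreased   crime increased
# gun ban:              223               75
# no gun ban:           107               21
table = {"ban": (223, 75), "no_ban": (107, 21)}

def decrease_rate(row: tuple[int, int]) -> float:
    """Proportion of cities in which crime decreased, within one row."""
    decreased, increased = row
    return decreased / (decreased + increased)

ban_rate = decrease_rate(table["ban"])        # ~0.75
no_ban_rate = decrease_rate(table["no_ban"])  # ~0.84

# The raw counts are misleading (223 > 107), but the *rate* of crime
# decrease is higher where there was no ban.
print(ban_rate < no_ban_rate)  # True
```

The heuristic answer comes from eyeballing the biggest cell; the correct answer requires the within-row ratio comparison, which is exactly the operation numeracy measures.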

3. Second generation studies. The studies described above have given rise to multiple additional studies seeking to probe and extend their results. Some of these studies include:

4. Secondary sources describing MS2R




Weekend up(back) date: What is the American gun debate about?

From Kahan, D.M. & Braman, D. More Statistics, Less Persuasion: A Cultural Theory of Gun-Risk Perceptions. U. Pa. L. Rev. 151, 1291-1327 (2003) pp. 1291-92:

Few issues divide the American polity as dramatically as gun control. Framed by assassinations, mass shootings, and violent crime, the gun debate feeds on our deepest national anxieties. Pitting women against men, blacks against whites, suburban against rural, Northeast against South and West, Protestants against Catholics and Jews, the gun question reinforces the most volatile sources of factionalization in our political life. Pro and anticontrol forces spend millions of dollars to influence the votes of legislators and the outcomes of popular elections. Yet we are no closer to achieving consensus on the major issues today than we were ten, thirty, or even eighty years ago.

Admirably, economists and other empirical social scientists have dedicated themselves to freeing us from this state of perpetual contestation. Shorn of its emotional trappings, the gun debate, they reason, comes down to a straightforward question of fact: do more guns make society less safe or more? Control supporters take the position that the ready availability of guns diminishes public safety by facilitating violent crimes and accidental shootings; opponents take the position that such availability enhances public safety by enabling potential crime victims to ward off violent predation. Both sides believe that “only empirical research can hope to resolve which of the[se] . . . possible effects . . . dominate[s].” Accordingly, social scientists have attacked the gun issue with a variety of empirical methods—from multivariate regression models to contingent valuation studies to public-health risk-factor analyses.

Evaluated in its own idiom, however, this prodigious investment of intellectual capital has yielded only meager practical dividends. As high-quality studies of the consequences of gun control accumulate in number, gun control politics rage on with unabated intensity. Indeed, in the 2000 election, their respective support for and opposition to gun control may well have cost Democrats the White House and Republicans control of the U.S. Senate.

Perhaps empirical social science has failed to quiet public disagreement over gun control because empirical social scientists have not yet reached their own consensus on what the consequences of gun control really are. If so, then the right course for academics who want to make a positive contribution to resolving the gun control debate would be to stay the course—to continue devoting their energy, time, and creativity to the project of quantifying the impact of various gun control measures.

But another possibility is that by focusing on consequences narrowly conceived, empirical social scientists just aren’t addressing what members of the public really care about. Guns, historians and sociologists tell us, are not just “weapons, [or] pieces of sporting equipment”; they are also symbols “positively or negatively associated with Daniel Boone, the Civil War, the elemental lifestyles [of] the frontier, war in general, crime, masculinity in the abstract, adventure, civic responsibility or irresponsibility, [and] slavery or freedom.” It stands to reason, then, that how an individual feels about gun control will depend a lot on the social meanings that she thinks guns and gun control express, and not just on the consequences she believes they impose. As one southern Democratic senator recently put it, the gun debate is “about values”—“about who you are and who you aren’t.” Or in the even more pithy formulation of another group of politically minded commentators, “It’s the Culture, Stupid!”


Do you see an effect here? Some data on correlation of cognitive reflection with political outlooks

It couldn't last. The "asymmetry thesis" is again sucking me into the vortex....

John Jost included one of my papers in his meta-analysis of research on conservatives’ cognitive style, including cognitive reflection. But I have many, many datasets with these data in them, and I would have been happy to furnish him with the essential details had he asked.

Anyway, here are some more findings that support the conclusion that the relationship between CRT scores and ideology is only trivially different from zero:

If one gets a meaningful effect using a convenience sample, then the sample probably is not valid for trying to draw population-level inferences.

Gives me a chance to renew the question, too, about whether probability density distributions of the sort generated by ggplot are a good way to present this sort of info. 


Do conservatives become more concerned with climate risks as their trust in science increases?

It is almost universally assumed that political polarization over societal risks like climate change originates in different levels of trust in scientists: left-leaning people believe in human-caused climate change, it is said, because they have a greater degree of confidence in scientists; so-called “conservative Republicans,” in contrast, are said to distrust science and scientists and thus are predisposed to climate skepticism.

But is this right? Or are we looking at another form of the dreaded WEKS disease?

Well, here’s a simple test based on GSS data.

Using the 2010 & 2016 datasets (the only years in which the survey included the climate-risk outcome variable), I cobbled together a decent “trust in science” scale:

scibnfts5: “People have frequently noted that scientific research has produced benefits and harmful results. Would you say that, on balance, the benefits of scientific research have outweighed the harmful results, or have the harmful results of scientific research been greater than its benefits?” [5 points: strongly in favor of beneficial results . . . strongly in favor of harmful results]

consci: “As far as the people running [the science community] are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?”

scientgo: “Scientific researchers are dedicated people who work for the good of humanity.” [4 points: strongly agree . . . strongly disagree]

scienthe: “Scientists are helping to solve challenging problems.” [4 points: strongly agree . . . strongly disagree]

nextgen: “Because of science and technology, there will be more opportunities for the next generation.” [4 points: strongly agree . . . strongly disagree]

advfont: “Even if it brings no immediate benefits, scientific research that advances the frontiers of knowledge is necessary and should be supported by the federal government.” [4 points: strongly agree . . . strongly disagree]

scientbe: “Most scientists want to work on things that will make life better for the average person.” [4 points: strongly agree . . . strongly disagree]

These items formed a single factor and had a Cronbach’s α of 0.72.  Not bad. I also reverse-coded items as necessary so that for every item a higher score denotes more rather than less trust in science.
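For readers who want to run the same reliability check on their own GSS extracts, Cronbach's α is easy to compute by hand. This is a minimal sketch on simulated data (the item values and helper names here are mine, invented for illustration, not the actual GSS responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def reverse_code(x, n_points):
    """Reverse-code a 1..n_points Likert item so higher = more trust."""
    return (n_points + 1) - np.asarray(x)

# Simulated stand-ins for seven trust items (NOT the real GSS data):
# each item = one latent "trust" trait plus item-specific noise
rng = np.random.default_rng(0)
latent_trust = rng.normal(size=300)
items = np.column_stack(
    [latent_trust + rng.normal(scale=1.0, size=300) for _ in range(7)]
)
alpha = cronbach_alpha(items)
```

Applied to the seven reverse-coded GSS items, the same function should reproduce the α = 0.72 reported above.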

Surprisingly, the GSS has never had a particularly good set of climate-change “belief” and risk perception items. Nevertheless, they have sometimes fielded this question: 

TEMPGEN: “In general, do you think that a rise in the world's temperature caused by the ‘greenhouse effect’ is extremely dangerous for the environment . . . not dangerous at all for the environment?” [5 points: “extremely dangerous for the environment” . . . “not dangerous at all for the environment”]

I don’t love this item but it is a cousin of the revered Industrial Strength Risk Perception Measure, so I decided I’d give it a whirl. 

I then did some regressions (after, of course, eyeballing the raw data).

In the first model, I regressed a reverse-coded TEMPGEN on the science-trust scale and “left_right,” a composite political outlook scale formed by aggregating the study participants’ self-reported political outlooks (α = 0.66).  As expected, higher scores on the science-trust scale predicted responses of “very dangerous” and “extremely dangerous,” while left_right predicted responses of “not very dangerous” and “not dangerous at all.”

If one stops there, the result is an affirmation of  the common wisdom.  Both political outlooks and trust in science have the signs one would expect, and if one were to add their coefficients, one could make claims about how much more likely relatively conservative respondents would be to see greater risk if only they could be made to trust science more.

But this form of analysis is incomplete.  In particular, it assumes that the contribution trust in science and left_right make to perceptions of the danger of climate change are (once their covariance is partialed out) independent and linear and hence additive.

But why assume that trust in science has the same effect regardless of respondents’ ideologies? After all, we know that science comprehension’s impact on perceived climate-change risks varies in relation to ideology, magnifying polarization.  Shouldn’t we at least check to see if there is a comparable  interaction between political outlooks and trust?

So I created a cross-product interaction term and added it to form another regression model.  And sure enough, there was an interaction, one predicting in particular that we ought to expect even more partisan polarization as right- and left-leaning individuals' scores on the trust-in-science scale increased.
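Purely to illustrate the mechanics (the data below are simulated and the coefficient values invented; this is not the GSS analysis itself), a cross-product interaction model can be fit with nothing more than a design matrix and least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
trust = rng.normal(size=n)       # standardized trust-in-science score (simulated)
left_right = rng.normal(size=n)  # standardized outlook, higher = more conservative

# Simulated outcome built so trust raises risk concern mainly on the left
risk = (0.3 * trust - 0.4 * left_right
        - 0.25 * trust * left_right
        + rng.normal(size=n))

# Design matrix: intercept, main effects, and the cross-product interaction term
X = np.column_stack([np.ones(n), trust, left_right, trust * left_right])
b_const, b_trust, b_lr, b_interact = np.linalg.lstsq(X, risk, rcond=None)[0]
```

A negative b_interact is the regression analogue of the figure described below: the slope of risk perception on trust flattens as left_right increases.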

Here’s what the interaction looks like:

Geez!  Higher trust promotes greater risk concern for left-leaning respondents but has essentially no effect whatsoever on right-leaning ones.

What to say?...

Well one possibility that occurs to me is based on biased perceptions of scientific consensus.  Experimental data suggest that ordinary persons of diverse outlooks are more likely to notice, assign significance to, and recall instances in which a scientist  took the position consistent with their cultural group's than ones in which a scientist took the opposing position.  As a result, people end up with mental inventories of expert opinion skewed toward the position that predominates in their group. If that's how they perceive the weight of expert opinion, why would they distrust scientists?

But I dunno. This is just post hoc speculation.

Tell me what you think the answer is – and better still, how one could design an experiment to test your favored conjecture against whatever you think the second most likely answer is.


Last session in Science of Science Communication 2017

Not usually where we end, but the frolics & detours along the way were worthwhile


*Now* where am I? Oklahoma City!

Am heading out early (today) to see what cool things the researchers at OU Center for Risk and Crisis Management are up to!

Will send postcards.


What does "believing/disbelieving in" add to what one knows is known by science? ... a fragment

From something I'm working on (and related to "yesterday's" post) . . .

4.3. “Believing in” what one knows is known by science

People who use their reason to form identity-expressive beliefs can also use it to acquire and reveal knowledge of what science knows. A bright “evolution disbelieving” high school student intent on being admitted to an undergraduate veterinary program, for example, might readily get a perfect score on an Advanced Placement biology exam (Hermann 2012).

It’s tempting, of course, to say that the “knowledge” one evinces in a standardized science test is analytically independent of one's “belief” in the propositions that one “knows.”  This claim isn’t necessarily wrong, but it is highly likely to reflect confusion.  

Imagine a test-taker who says, “I know science’s position on the natural history of human beings: that they evolved from an earlier species of animal. And I’ll tell you something else: I believe it, too.”  What exactly is added by that person’s profession of belief?

The answer “his assent to a factual proposition about the origin of our species” reflects confusion. There is no plausible psychological picture of the contents of the human mind that sees it as containing a belief registry stocked with bare empirical propositions set to “on-off,” or even probabilistic “pr=0.x,” states.  Minds consist of routines—clusters of affective orientations, conscious evaluations, desires, recollections, inferential abilities, and the like—suited for doing things.  Beliefs are elements of such clusters. They are usefully understood as action-enabling states—affective stances toward factual propositions that reliably summon the mental routine geared toward acting in some way that depends on the truth of those propositions (Peirce 1877; Braithwaite 1932, 1946; Hetherington 2011).

In the case of our imagined test-taker, a mental state answering to exactly this description contributed to his supplying the correct response to the assessment item.  If that’s the mental object the test-taker had in mind when he said, “and I believe it, too!,” then his profession of belief furnished no insight into the contents of his mind that we didn’t already have by virtue of his answering the question correctly. So “nothing” is one plausible answer to the question of what he added when he told us he “believed” in evolution.

It’s possible, though, that the statement did add something.  But for the reasons just set forth, the added information would have to relate to some additional action that is enabled by his holding such a belief. One such thing enabled by belief in evolution is being a particular kind of person.  Assent to science’s account of the natural history of human beings has a social meaning that marks a person out as holding certain sorts of attitudes and commitments; a belief in evolution reliably summons behavior evincing such assent on occasions in which a person has a stake in experiencing that identity or enabling others to discern that he does.

Indeed, for the overwhelming majority of people who believe in evolution, having that sort of identity is the only thing they are conveying to us when they profess their belief. They certainly aren’t revealing to us that they possess the mental capacities and motivations necessary to answer even a basic high-school biology exam question on evolution correctly: there is zero correlation between professions of belief and even a rudimentary understanding of random mutation, natural variance, and natural selection (Shtulman 2006; Demastes, Settlage & Good 1995; Bishop & Anderson 1990).

Precisely because one test-taker’s profession of “belief” adds nothing to any assessment of knowledge of what science knows, another's profession of “disbelief” doesn’t subtract anything.  One who correctly answers the exam question has evinced not only knowledge but also her possession of the mental capacities and motivations necessary to convey such knowledge.

When a test-taker says “I know what science thinks about the natural history of human beings—but you better realize, I don’t believe it,” then it is pretty obvious what she is doing: expressing her identity as a member of a community for whom disbelief is a defining attribute. The very occasion for doing so might well be that she was put in a position where revealing her knowledge of what science knows generated doubt about who she is.

But it remains the case that the mental states and motivations that she used to learn and convey what science knows, on the one hand, and the mental states and motivations she is using to experience a particular cultural identity, on the other, are entirely different things (Everhart & Hameed 2013; cf. DiSessa 1982).  Neither tells us whether she will use what science knows about evolution to do other things that can be done only with such knowledge—like become a veterinarian, say, or enjoy a science documentary on evolution (CCP 2016). To figure out if she believes in evolution for those purposes—despite her not believing in it to be who she is—we’d have to observe what she does in the former settings.

All of these same points apply to the response that study subjects give when they respond to a valid measure of their comprehension of climate science.  That is, their professions of “belief” and “disbelief” in the propositions that figure in the assessment items neither add to nor subtract from the inference that they have (or don’t have) the capacities and motivations necessary to answer the question correctly.  Their respective professions  tell us only who they are. 

As expressions of their identities, moreover, their respective professions of “belief” and “disbelief” don’t tell us anything about whether they possess the “beliefs” in human-caused climate change requisite to action informed by what science knows. To figure out if a climate change “skeptic” possesses the action-enabling belief in climate change that figures, say, in using scientific knowledge to protect herself from the harm of human-caused climate change, or in voting for a member of Congress (Republican or Democrat) who will in fact expend even one ounce of political capital pursuing climate-change mitigation policies, we must observe what that skeptical individual does in those settings.  Likewise, only by seeing what a self-proclaimed climate-change believer does in those same settings can we see if he possesses the sort of action-enabling belief in human-caused climate change that using science knowledge for those purposes depends on.


Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Braithwaite, R.B. The nature of believing. Proceedings of the Aristotelian Society 33, 129-146 (1932).

Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).

CCP, Evidence-based Science Filmmaking Initiative, Study No. 1 (2016).

Demastes, S.S., Settlage, J. & Good, R. Students' conceptions of natural selection and its role in evolution: Cases of replication and comparison. Journal of Research in Science Teaching 32, 535-550 (1995).

DiSessa, A.A. Unlearning Aristotelian Physics: A Study of Knowledge-Based Learning. Cognitive Science 6, 37-75 (1982).

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).

Hermann, R.S. Cognitive apartheid: On the manner in which high school students understand evolution without Believing in evolution. Evo Edu Outreach 5, 619-628 (2012).

Hetherington, S.C. How to know : a practicalist conception of knowledge (J. Wiley, Chichester, West Sussex, U.K. ; Malden, MA, 2011).

Peirce, C.S. The Fixation of Belief. Popular Science Monthly 12, 1-15 (1877).

"According to climate scientists ..." -- WTF?! (presentation summary, slides)

Gave talk at the annual Association for Psychological Science convention on Sat.  Was on a panel that featured great presentations by Leaf Van Boven, Rick Larrick & Ed O'Brien. Maybe I'll be able to induce them to do short guest posts on their presentations, although understandably, they might be shy about becoming instant world-wide celebrities by introducing their work to this site's 14 billion readers.

Anyway, my talk was on the perplexing, paradoxical effect of the "according to climate scientists," or ACS, prefix (slides here).

As 6 billion of the readers of this blog know-- the other 8 have by now forgotten b/c of all the other cool things that have been featured on the blog since the last time I mentioned this--attributing positions on the contribution of human beings to global warming, and the consequences thereof, to "climate scientists" magically dispels polarization on responses to climate science literacy questions.

Here's what happens when "test takers" (members of a large, nationally representative sample) respond to two such items that lack the magic ACS prefix:

Now, compare what happens with the ACS prefix:

Does this make sense?

Sure. Questions that solicit respondents’ understanding of what scientists believe about the causes and consequences of human-caused global warming avoid forcing individuals to choose between answers that reveal what they know about what science knows, on the one hand, and ones that express who they are as members of cultural groups, on the other.

Here's a cool ACS prefix corollary:

Notice that the "Nuclear power" question was a lot "harder" than the "Flooding" one once the ACS prefix nuked (as it were) the identity-knowledge confound.  Not surprisingly, only respondents who scored the highest on the Ordinary Science Intelligence assessment were likely to get it right.

But notice too that those same respondents--the ones highest in OSI--were also the most likely to furnish the incorrect identity-expressive responses when the ACS prefix was removed.

Of course! They are the best at supplying both identity-expressive and  science-knowledge-revealing answers.  Which one they supply depends on what they are doing: revealing what they know or being who they are. 

The ACS prefix is the switch that determines which of those things they use their reason for.

Okay, but what about this: do respondents of opposing political orientations agree on whether climate scientists agree on whether human-caused climate change is happening?

Of course not!

In modern liberal democratic societies, holding beliefs contrary to the best available scientific evidence is universally understood to be a sign of stupidity. The cultural cognition of scientific consensus describes the psychic pressure that members of all cultural groups experience, then, to form and persist in the belief that their group’s position on a culturally contested issue is consistent with the best available scientific evidence.

But that's what creates the "WTF moment"-- also known as a "paradox":

Um ... I dunno!

That's what I asked the participants--my fellow panelists and the audience members (there were only about 50,000 people, because we were scheduled against some other pretty cool panels)--to help me figure out!

They had lots of good conjectures.

How about you?


Another “Scaredy-cat risk disposition”™ scale "booster shot": Childhood vaccine risk perceptions

You saw this coming I bet.

I would have presented this info in "yesterday's" post but I'm mindful of the groundswell of anxiety over the number of anti-BS inoculations that are being packed into a single data-based booster shot, so I thought I'd space these ones out.

"Yesterday," of course, I introduced the new CCP/Annenberg Public Policy Center “Scaredy-cat risk disposition”™ measure.  I used it to help remind people that the constant din about "public conflict" over GM food risks--and in particular that GM food risks are politically polarizing-- is in fact just bull shit.  

The usual course of treatment to immunize people against such bull shit is just to show that it's bull shit.  That goes something  like this:


The “Scaredy-cat risk disposition”™ scale tries to stimulate people’s bull shit immune systems by a different strategy.

Rather than showing that there isn’t a correlation between GM food risk perceptions and any cultural disposition of consequence (political orientation is just one way to get at the group-based affinities that inform people’s identities; religiosity, cultural worldviews, etc., are others—they all show the same thing w/r/t GM food risk perceptions), the “Scaredy-cat risk disposition”™ scale shows that there is a correlation between how afraid people say they are of GM foods (people meaning, i.e., the 75%-plus of the population that has no idea what they are being asked about when someone says, “are GM foods safe to eat, in your opinion?”) and how afraid they are of all sorts of random-ass things (sorry for the technical jargon), including:

  • Mass shootings in public places

  • Armed carjacking (theft of occupied vehicle by person brandishing weapon)

  • Accidents occurring in the workplace

  • Flying on a commercial airliner

  • Elevator crashes in high-rise buildings

  • Drowning of children in swimming pools

A scale comprising these ISRPM items actually coheres!

But what a high score on it measures, in my view, is not a real-world disposition but a survey-artifact one that reflects a tendency (not a particularly strong one, but one that really is there) to say “ooooo, I’m really afraid of that” in relation to anything a researcher asks about.

The “Scaredy-cat risk disposition”™ scale “explains” GM food risk perceptions the same way, then, that it explains everything,

which is to say that it doesn’t explain anything real at all.

So here’s a nice Bull Shit test.

If variation in public risk perceptions is explained just as well or better by scores on the “Scaredy-cat risk disposition”™ scale than by identity-defining outlooks & other real-world characteristics known to be meaningfully related to variance in public perceptions of risk, then we should doubt that there really is any meaningful real-world variance to explain.

Whatever variance is being picked up by these legitimate measures is no more meaningful than the variance picked up by a random-ass noise detector.

Necessarily, then, whatever shred of variance they pick up, even if "statistically significant" (something that is in fact of no inferential consequence!), cannot bear the weight of the sweeping claims about who is responsible—“dogmatic right wing authoritarians,” “spoiled limousine liberals,” “whole foodies,” “the right,” “people who are easily disgusted” (stay tuned. . .), “space aliens posing as humans,” etc.—that commentators trot out to explain a conflict that exists only in “commentary” and not “real world” space.

Well, guess what? The “Scaredy-cat risk disposition”™ scale “explains” childhood vaccine risk perceptions as well as or better than the various dispositions people say “explain” "public conflict" over that risk too.

Indeed, it "explains" vaccine-risk perceptions as well (which is to say very modestly) as it explains global warming risk perceptions and GM food risk perceptions--and any other goddam thing you throw at it.

See how this bull-shit immunity booster shot works?

The next time some know it all says, "The rising tide of anti-vax sentiment is being driven by ... [fill in bull shit blank]," you say, "well actually, the people responsible for this epidemic of mass hysteria are the ones who are worried about falling down elevator shafts, being the victim of a carjacking [how 1980s!], getting flattened by the detached horizontal stabilizer of a crashing commercial airliner, being mowed down in a mass shooting, getting their tie caught in the office shredder, etc-- you know those guys!  Data prove it!"

It's both true & absurd.  Because the claim that there is meaningful public division over vaccine risks is truly absurd: people who are concerned about vaccines are outliers in every single meaningful cultural group in the U.S.

Remember, we have had 90%-plus vaccination rates on all childhood immunizations for well over a decade.

Publication of the stupid Wakefield article had a measurable impact on vaccine behavior in the UK and maybe elsewhere (hard to say, b/c on the continent in Europe vaccine rates have not been as high historically anyway), but not the US!  That’s great news!

In addition, valid opinion studies find that the vast majority of Americans of all cultural outlooks (religious, political, cultural, professional-sports team allegiance, you name it) think childhood vaccines are the greatest invention since . . . sliced GM bread!  (Actually, wheat farmers, as I understand it, don’t use GMOs b/c if they did they couldn’t export grain to Europe, where there is genuine public conflict over GM foods.)

Yes, we do have pockets of vaccine-hesitancy and yes they are a public health problem.

But general-population surveys and experiments are useless for that—and indeed a waste of money and attention.  They aren't examining the right people (parents of kids in the age range for universal vaccination).  And they aren't using measures that genuinely predict the behavior of interest.

We should be developing (and supporting researchers who are developing) behaviorally validated methods for screening potentially vaccine-hesitant parents and coming up with risk-counseling profiles specifically fitted to them.

And for sure we should be denouncing bull shit claims—ones typically tinged with group recrimination—about who is causing the “public health crisis” associated with “falling vaccine rates” & the imminent “collapse of herd immunity,” conditions that simply don’t exist. 

Those claims are harmful because they inject "pollution" into the science communication environment including  confusion about what other “ordinary people like me” think, and also potential associations between positions that genuinely divide people—like belief in evolution and positions on climate change—and views on vaccines. If those take hold, then yes, we really will have a fucking crisis on our hands.

If you are emitting this sort of pollution, please just stop already!

And the rest of you, line up for a “Scaredy-cat risk disposition”™ scale booster shot against this bull shit.

It won’t hurt, I promise!  And it will not only protect you from being misinformed but will benefit all the rest of us too by helping to make our political discourse less hospitable to thoughtless, reckless claims that can in fact disrupt the normal processes by which free, reasoning citizens of diverse cultural outlooks converge on the best available evidence.

On the way out, you can pick up one of these fashionable “I’ve been immunized by the ‘Scaredy-cat risk disposition’™ scale against evidence-free bullshit risk perception just-so stories” buttons and wear it with pride!


Weekend update: Priceless



Weekend update: modeling the impact of the "according to climate scientists prefix" on identity-expressive vs. science-knowledge revealing responses to climate science literacy items

I did some analyses to help address issues that arose in an interesting discussion with @dypoon about how to interpret the locally weighted regression outputs featured in "yesterday's" post. 

Basically, the question is what to make of the respondents at the very highest levels of Ordinary Science Intelligence.

When the prefix "according to climate scientists" is appended to the items, those individuals are the most likely to get the "correct" response, regardless of their political outlooks. That's clear enough.

It's also bright & clear that when the prefix is removed, subjects at all levels of OSI are more disposed to select the identity-expressive answer, whether right or wrong. 

What's more, those highest in OSI seem even more disposed to select the identity-expressive "wrong" answer than those of more modest ability.  Insofar as they are the ones most capable of getting the right answer when the prefix is appended, they necessarily evince the strongest tendency to substitute the incorrect identity-expressive response for the correct, science-knowledge-evincing one when the prefix is removed.

But are those who are at the very tippy top of the OSI hierarchy resisting the impulse (or the consciously perceived opportunity) to respond in an identity-protective manner--by selecting the incorrect but ideologically congenial answer--when the prefix is removed?  Is that what the little upward curls mean at the far right end of the dashed line for right-leaning subjects in "flooding" and for left-leaning ones in "nuclear"?

@Dypoon seems to think so; I don't.  He/she sensed signal; I caught the distinct scent of noise.

Well, one way to try to sort this out is by modeling the data.

The locally weighted regression just tells us the mean probabilities of "correct" answers at tiny little increments of OSI. A logistic regression model can show us how the precision of the estimated means--the information we need to try to ferret out signal from noise-- is affected by the number of observations, which necessarily get smaller as one approaches the upper end of the Ordinary Science Intelligence scale.
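The intuition can be made concrete with the usual standard-error formula for a sample proportion; the bin sizes below are hypothetical, chosen only to show how fast precision decays as observations thin out toward the top of the OSI scale:

```python
import math

def prop_se(p, n):
    """Standard error of a sample proportion p estimated from n observations."""
    return math.sqrt(p * (1.0 - p) / n)

# Hypothetical bin sizes: lots of respondents mid-scale, very few at the top
se_mid = prop_se(0.6, 400)  # e.g., a mid-OSI slice of the sample
se_top = prop_se(0.6, 25)   # e.g., the "tippy top" of the OSI distribution
```

Since the standard error scales with 1/√n, the band around the regression line at the far right is roughly four times as wide here as at mid-scale, which is why a small terminal "curl" can easily be noise.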

Here are a couple of ways to graphically display the models (nuclear & flooding). 

This one plots the predicted probability of correctly answering the items with and without the prefix for subjects with the specified political orientations as their OSI scores increase: 


This one illustrates, again in relation to OSI, how much more likely someone is to select the incorrect, identity-expressive response for the no-prefix version than he or she is to select the incorrect response for the prefix version:

The graphic shows us just how much the confounding of identity and knowledge in a survey item can distort measurement of how likely an individual is to know climate-science propositions that run contrary to his or her ideological predisposition on global warming.

I think the results are ... interesting.

What do you think?

To avoid discussion forking (the second leading cause of microcephaly in the Netherlands Antilles), I'm closing off comments here.  Say your piece in the thread for "yesterday's" post.


Replicate "Climate-Science Communication Measurement Problem"? No sweat (despite hottest yr on record), thanks to Pew Research Center!

One of the great things about Pew Research Center is that it posts all (or nearly all!) the data from its public opinion studies.  That makes it possible for curious & reflective people to do their own analyses and augment the insight contained in Pew's own research reports. 

I've been playing around with the "public" portion of the "public vs. scientists" study, which was issued last January (Pew 2015). Actually Pew hasn't released the "scientist" (or more accurately, AAAS membership) portion of the data. I hope they do!

But one thing I thought it would be interesting to do for now would be to see if I could replicate the essential finding from "The Climate Science Communication Measurement Problem" (2015).

In that paper, I presented data suggesting, first, that neither "belief" in evolution nor "belief" in human-caused climate change were measures of general science literacy.  Rather both were better understood as measures of forms of "cultural identity" indicated, respectively, by items relating to religiosity and items relating to left-right political outlooks.

Second, and more importantly, I presented data suggesting that there is no relationship between "belief" in human-caused climate change & climate science comprehension in particular. On the contrary, the higher individuals scored on a valid climate science comprehension measure (one specifically designed to avoid the confound between identity and knowledge that plagues most "climate science literacy" measures), the more polarized the respondents were on "belief" in AGW--which, again, is best understood as simply an indicator of "who one is," culturally speaking.

Well, it turns out one can see the same patterns, very clearly, in the Pew data.

Patterned on the NSF Indicators "basic facts" science literacy test (indeed, "lasers" is an NSF item), the Pew battery consists of six items:

As I've explained before, I'm not a huge fan of the "basic facts" approach to measuring public science comprehension. In my view, items like these aren't well-suited for measuring what a public science comprehension assessment ought to be measuring: a basic capacity to recognize and give proper effect to valid scientific evidence relevant to the things that ordinary people do in their ordinary lives as consumers, workforce members, and citizens.

One would expect a person with that capacity to have become familiar with certain basic scientific insights (earth goes round sun, etc.) certainly.  But certifying that she has stocked her "basic fact" inventory with any particular set of such propositions doesn't give us much reason to believe that she possesses the reasoning proficiencies & dispositions needed to augment her store of knowledge and to appropriately use what she learns in her everyday life.

For that, I believe, a public science comprehension battery needs at least a modest complement of scientific-thinking measures, ones that attest to a respondent's ability to tell the difference between valid and invalid forms of evidence and to draw sound inferences from the former.  The "Ordinary Science Intelligence" battery, used in the Measurement Problem paper, includes "cognitive reflection" and "numeracy"modules for this purpose.

Indeed, Pew has presented a research report on a fuller science comprehension battery that might be better in this regard, but it hasn't released the underlying data for that one.

Psychometric properties of Pew science literacy battery--click on it, c'mon!

But anyway, the new items that Pew included in its battery are more current & subtle than the familiar Indicator items, & the six Pew items form a reasonably reliable (α = 0.67), one-dimensional scale--suggesting they are indeed measuring some sort of science-related aptitude.
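As a concrete illustration of the reliability statistic in question, here is a minimal pure-Python sketch of Cronbach's α. The tiny response matrix is invented for illustration; it is not Pew's data:

```python
def variance(xs):
    # sample variance (n - 1 denominator)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    # items: one list per test item, each holding that item's 0/1 scores
    # across the same respondents
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))

# hypothetical 0/1 answers: 3 items x 6 respondents
items = [
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
]
print(round(cronbach_alpha(items), 2))  # 0.81
```

An α of 0.67 on six dichotomous items, as in the Pew battery, is respectable for so short a scale.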

But the fun stuff starts when one examines how the resulting Pew science literacy scale relates to items on evolution, climate change, political outlooks, and religiosity.

For evolution, Pew used its two-part question, which first asks whether the respondent believes (1) "Humans and other living things have evolved over time" or (2) "Humans and other living things have existed in their present form since the beginning of time." 

Subjects who pick (1) then are asked whether (3) "Humans and other living things have evolved due to natural processes such as natural selection" or (4) "A supreme being guided the evolution of living things for the purpose of creating humans and other life in the form it exists today."

Basically, subjects who select (2) are "young earth creationists." Subjects who select (4) are generally regarded as believing in "theistic evolution."  Intelligent design isn't the only variant of "theistic evolution," but it is certainly one of the accounts that fit this description.

Subjects who select (3)--"humans and other living things have evolved due to natural processes such as natural selection"--are the only ones furnishing the response that reflects science's account of the natural history of humans. 

So I created a variable, "evolution_c," that reflects this answer, which was in fact selected by only 35% of the subjects in Pew's U.S. general public sample.

On climate change, Pew assessed (using two items that tested for item order/structure effects that turned out not to matter) whether subjects believed (1) "the earth is getting warmer mostly because of natural patterns in the earth’s environment," (2) "the earth is getting warmer mostly because of human activity such as burning fossil fuels," or (3) "there is no solid evidence that the earth is getting warmer."

About 50% of the respondents selected (2).  I created a variable, gw_c, to reflect whether respondents selected that response or one of the other two.

For political orientations, I combined the subjects' responses to a 5-point liberal-conservative ideology item and their responses to a 5-point partisan self-identification item (1 "Democrat"; 2 "Independent leans Democrat"; 3 "Independent"; 4 "Independent leans Republican"; and 5 "Republican").  The composite scale had modest reliability (α = 0.61).

For religiosity, I combined two items.  One was a standard Pew item on church attendance. The other was a dummy variable, "nonrelig," scored "1" for subjects who said they were either "atheists," "agnostics" or "nothing in particular" in response to a religious-denomination item (α = 0.66).

But the very first thing I did was toss all of these items--the 6 "science literacy" ones, belief in evolution (evolution_c), belief in human-caused climate change (gw_c), ideology, partisan self-identification, church attendance, and nonreligiosity--into a factor analysis (one based on a polychoric covariance matrix, which is appropriate for mixed dichotomous and multi-response Likert items).

Click for closer look-- if you dare....

Not surprisingly, the covariance structure was best accounted for by three latent factors: one for science literacy, one for political orientations, and one for religiosity.
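The logic of that result can be sketched with a toy example. If two "science literacy" items correlate with each other but only weakly with a "belief" item, the first factor's loading pattern--approximated below by the leading eigenvector of the correlation matrix, computed with plain power iteration--picks up the two science items and largely leaves the belief item behind. The matrix here is invented; the actual analysis used a polychoric covariance matrix and dedicated statistical software:

```python
def leading_factor(R, iters=500):
    """Approximate the first factor's loading pattern as the leading
    eigenvector of a correlation matrix R, via power iteration."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# hypothetical correlations: items 0 and 1 are "science literacy" items;
# item 2 is a "belief" item that barely correlates with the other two
R = [[1.0, 0.6, 0.1],
     [0.6, 1.0, 0.1],
     [0.1, 0.1, 1.0]]
loadings = leading_factor(R)
```

The two science items load about equally, while the belief item's loading is far smaller--the qualitative pattern the Pew data display.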

But the most important result was that neither belief in evolution nor belief in human-caused climate change loaded on the "science literacy" factor.  Instead they loaded on the religiosity and right-left political orientation factors, respectively.

This analysis, which replicated results from a paper dedicated solely to examining the properties of the Ordinary Science Intelligence test, supports the inference that belief in evolution and belief in climate change are not indicators of "science comprehension" but rather indicators of cultural identity, as manifested respectively by religiosity and political outlooks.

Warning: Click only if psychologically prepared to see shocking cultural bias in "belief in evolution" as science literacy assessment item!

To test this inference further, I used "differential item function" or "DIF" analysis (Osterlind & Everson, 2009).

Based on item response theory, DIF examines whether a test item is "culturally biased"--not in an animus sense but in a measurement one: the question is whether responses to the item measure the same latent proficiency (here, science literacy) in diverse groups.  If they don't--if members of the two groups who have equivalent science literacy scores differ in the probability of answering it "correctly"--then administering that question to members of both groups will result in a biased measurement of their respective levels of that proficiency.
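The core idea can be sketched crudely: match respondents on the proficiency score, then compare the groups' correct-response rates within each stratum. This is a bare-bones, Mantel-Haenszel-style illustration with invented data, not the IRT-based models used in the paper:

```python
from collections import defaultdict

def dif_check(records, n_bins=3):
    # records: (proficiency score, group label, answered correctly 0/1)
    # returns {ability stratum: {group: proportion correct}}
    lo = min(r[0] for r in records)
    hi = max(r[0] for r in records)
    width = (hi - lo) / n_bins or 1.0
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for score, group, right in records:
        b = min(int((score - lo) / width), n_bins - 1)
        tallies[b][group][0] += right
        tallies[b][group][1] += 1
    return {b: {g: c / n for g, (c, n) in groups.items()}
            for b, groups in sorted(tallies.items())}

# invented data: at the top stratum only one group answers "correctly"
records = [
    (0.1, "religious", 0), (0.2, "nonreligious", 0),
    (1.0, "religious", 1), (1.1, "nonreligious", 1),
    (1.9, "religious", 0), (2.0, "nonreligious", 1),
]
table = dif_check(records)
```

Equal within-stratum rates are consistent with an unbiased item; the divergence in the top stratum here mimics the pattern that flags bias.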

In Measurement Problem, I used DIF analysis to show that belief in evolution is "biased" against individuals who are high in religiosity. 

Using the Pew data (regression models here), one can see the same bias:

Relatively nonreligious subjects, but not relatively religious ones, become more likely to indicate acceptance of science's account of the natural history of humans as their science literacy scores increase. This isn't so for other items in the Pew science literacy battery (which here is scored using an item response theory model; the mean is 0, and units are standard deviations). 

The obvious conclusion is that the evolution item isn't measuring the same thing in subjects who are relatively religious and nonreligious as are the other items in the Pew science literacy battery. 

In Measurement Problem, I also used DIF to show that belief in climate change is a biased (and hence invalid) measure of climate science literacy.  That analysis, though, assessed responses to a "belief in climate change" item (one identical to Pew's) in relation to scores on a general climate-science literacy assessment, the "Ordinary Climate Science Intelligence" (OCSI) assessment.  Pew's scientist-AAAS study didn't have a climate-science literacy battery.

Warning: Graphic demonstration of cultural bias in standardized assessment item. Click only if 21 yrs or older or accompanied by responsible adult or medical professional.

Its general science literacy battery, however, did have one climate-science item, a question of theirs that in fact I had included in OCSI: "What gas do most scientists believe causes temperatures in the atmosphere to rise? Is it Carbon dioxide, Hydrogen, Helium, or Radon?" (CO2).

Below are the DIF item profiles for CO2 and gw_c (regression models here). Regardless of their political outlooks, subjects become more likely to answer CO2 correctly as their science literacy score increases--that makes perfect sense!

But as their science literacy score increases, individuals of diverse political outlooks don't converge on "belief in human-caused climate change"; they become more polarized.  That question is measuring who the subjects are, not what they know about climate science.
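Those item profiles are, in effect, item characteristic curves. In a two-parameter IRT model, the probability of a correct answer rises smoothly with the latent proficiency θ; a well-functioning item like CO2 can be described by a single curve for everyone, while a biased item needs different curves for different groups. A minimal sketch, with invented discrimination and difficulty parameters:

```python
import math

def icc(theta, a=1.5, b=0.0):
    # 2PL item characteristic curve: P(correct | proficiency theta),
    # with discrimination a and difficulty b
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# for an unbiased item, one curve fits all groups: the probability of a
# correct answer simply increases with proficiency
for theta in (-1.0, 0.0, 1.0):
    print(round(icc(theta), 2))  # 0.18, 0.5, 0.82
```

For a biased item, by contrast, fitting this curve separately by group yields visibly different parameters--which is just what the gw_c profiles show.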

So there you go!

I probably will tinker a bit more with these data and will tell you if I find anything else of note.

But in the meantime, I recommend you do the same! The data are out there & free, thanks to Pew.  So reciprocate Pew's contribution to knowledge by analyzing them & reporting what you find out!


Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication. Cultural Cognition Project Working Paper No. 112 (2014).

Osterlind, S. J., & Everson, H. T. (2009). Differential item functioning. Thousand Oaks, CA: Sage.

Pew Research Center (2015). Public and Scientists' Views on Science and Society.


We are *all* Pakistani Drs/Kentucky Farmers, Part 2: Kant's perspective(s)

This is an excerpt from another bit of correspondence with a group of very talented and reflective scholars who are at the beginning of an important research program to explain "disbelief in" human evolution. In addition, because "we may [must] regard the present state of the universe as the effect of its past and the cause of its future," this post is also a companion to yesterday's, which responded to Adam Laats' request for less exotic (or less exotic-seeming) examples of people using cognitive dualism than those furnished us by the Pakistani Dr & the Kentucky Farmer. No doubt it will be the progenitor of "tomorrow's" post too; but you know that will say more about me than it does about the "Big Bang...."

I agree of course that figuring out what people "know" about the rudiments of evolutionary science has to be part of any informative research program here.  But I understand your project to be how to "explain nonacceptance" of or "disbelief in" what is known.

So fine, go ahead and develop valid measures for assessing evolutionary science knowledge. But don't embark on the actual project until you have answered the question whose unreflective disregard is exactly what has rendered previous "nonacceptance" research programs so utterly unsatisfactory: what is it exactly that is being explained?

Isn't the Pakistani Dr's (or the Kentucky Farmer's or Krista's) "cognitive dualism" just a special instance of the perspectival dualism that Kant understands to be integral to human reason?

In the Groundwork for the Metaphysics of Morals and in both the 1st and 2d Critiques, Kant distinguishes two "self" perspectives: the "phenomenal" one, in which we regard ourselves and all other human beings, along with everything else in the universe, to be subject to immutable and deterministic laws of nature; and the "noumenal" one, in which we regard ourselves (and all other human beings) as possessing an autonomous will that prescribes laws for itself independently of nature so conceived.  

No dummy, Kant obviously can see the "contradictory" stances on human autonomy embodied in the perspectives of our "phenomenal" and "noumenal" (not to be confused w/ the admittedly closely related "Neumenal") selves.

But he is not troubled by it.

The respective “beliefs” about human autonomy associated with the phenomenal and noumenal perspectives are, for him, built-in components of mental routines that enable the 2 things reasoning beings use their reason for: to acquire knowledge of how the world works; and to live a meaningful life within it.

Because there’s no contradiction between these reason-informed activities, there’s no practical--no experienced, no real--contradiction between the sets of action-enabling mental states associated with them.

Obviously, Kant's dualism has a very big point of contact with debates about "free will" & "determinism," and the coherence of "compatibilist" solutions, and whatnot.  

But as I read Kant, his dualism implies these debates are ill-formed. The participants in them are engaging the question whether human beings are subject to deterministic natural laws in a manner that abstracts from what the answer allows reasoning people to do.

That feature of the "determinism-free will" debate renders it "metaphysical"--not in the sense Kant had in mind but in the sense that the logical positivist philosophers did when they tried to clear from the field of science the entangling conceptualist underbrush that served no purpose except to trip people up as they tried to advance knowledge by ordered and systematic thinking.

I strongly suspect that those who have dedicated their scholarly energy to "solving" the "problem" of "why the presentation of evolution in class frequently does not achieve acceptance of the evolutionary theory" among students who display comprehension of it are mired in exactly that sort of thicket.

Both the Pakistani Dr and Krista "reject" human evolution in converging with other free, reasoning persons on a particular shared account of what makes life meaningful.  They then both turn around and use evolutionary science--including its applicability to human beings, because it simply "doesn't work," they both agree, to exempt human speciation from evolutionary dynamics, just as it doesn't work to exempt human beings from natural necessity generally if one is doing science--when they use their reason to practice science-trained professions, the practice of which is enabled by evolutionary science.

In behaving in this way, they are doing nothing different from what any scientist or any other human being does in adopting Kant's "phenomenal" perspective to know what science knows about the operation of objects in the world while adopting his "noumenal" one to live a meaningful life as a person who makes judgments of value.  

Only a very remarkable, and disturbing, form of selective perception can explain why so many people find the cognitive dualism of the Pakistani Dr or Krista so peculiar and even offensive.  Their reaction suggests a widespread deficit in the form of civic education needed to equip people to  honor their duty as citizens of a liberal democracy (or as subjects in Kant's "Kingdom of Ends") to respect the choices that other free and reasoning individuals make about how to live.

Is it really surprising, then, that those who have committed themselves to "solving" the chimera of Krista's "nonacceptance problem" can't see the very real problem with a conception of science education that tries to change who people are rather than enlarge what they know?



Submerged ... 

But will surface in near future -- w/ results of new study ....

Prize for anyone who correctly predicts what it is about; 2 prizes if you predict the result.