
The trust-in-science *particularity thesis* ... a fragment

From something I'm working on . . . .

It is almost surely a mistake to think that highly divisive conflicts over science are attributable to general distrust of science or scientists.  Most Americans—regardless of their cultural identities—hold scientists in high regard, and don’t give a second’s thought to whether they should rely on what science knows when making important decisions.  The sorts of disagreements we see over climate change and a small number of additional factual issues stem from considerations particular to those issues (National Research Council 2016). The most consequential of these considerations are toxic memes, which have transformed positions on these issues into badges of membership in and loyalty to competing cultural groups (Kahan et al 2017; Stanovich & West 2008).

We will call this position the “particularity thesis.”  We will distinguish it from competing accounts of how “attitudes toward science” relate to controversy on policy-relevant facts. We’ve already adverted to two related ones: the “public ambivalence” thesis, which posits a widespread public unease toward science or scientists; and the “right-wing anti-science” thesis, which asserts that distrust of science is a consequence of holding a conservative political orientation or like cultural disposition. . . .


Kahan, D.M., K.H. Jamieson, A. Landrum & K. Winneg, 2017. Culturally antagonistic memes and the Zika virus: an experimental test. Journal of Risk Research, 20(1), 1-40.

National Research Council 2016. Science Literacy: Concepts, Contexts and Consequences. A Report of the National Academies of Science, Engineering and Medicine. Washington DC: National Academies Press.

Stanovich, K. & R. West, 2008. On the failure of cognitive ability to predict myside and one-sided thinking biases. Thinking & Reasoning, 14, 129-67.


Some more canned data on religiosity & science attitudes

As I mentioned, in putting together a show for the National Academy of Sciences, I took a look at the 2014 GSS data.  

Here's a bit more of what's in there:

Actually, the left-hand panel is based on GSS 2010 data. But I hadn't looked at that particular item before.

The right-hand panel is based on GSS 2008, 2010, 2012, & 2014.  It is an update of a data display I created before the 2014 data (the most recent that has been released by the GSS) were available.

If, as is reasonable, you want confirmation that the underlying scales I've constructed reliably measure the disposition that we independently have good reason to associate with religiosity, here is how these survey respondents answer the GSS's "evolution" item:

I still find it astonishing that there isn't a more meaningful difference in the attitudes of religious & non-religious respondents on the "science attitude" measures.  Guess I had a case of WEKS on this.  

But these data do reinforce my view that religion is not the enemy of the Liberal Republic of Science.

There are  much more serious destructive forces to worry about . . . .


Nice LRs! Communicating climate change "causation" 

The use of likelihood ratios here--"climate change made maximum temperatures like those seen in January and February at least 10 times more likely than a century ago"--makes this pretty good #scicomm, in my view.

Climate-science communicators typically get tied in knots when they address the issue of whether a particular event was “caused” by global warming.  The most conspicuous, & conspicuously unenlightening, instance of this occurred in the aftermath of Hurricane Sandy.

Likelihood ratios (LRs) are a productive alternative to this linguistic entanglement—because the former invite and enable critical judgment while the latter attempts to evade it.

Obviously, LRs are only as good as the models that generated them.

But if those models reflect the best available evidence, then a practical person or group can make informed decisions based on how LRs quantify the risk involved (Lempert et al. 2013).  That’s what is effaced by linguistic tests that purport to treat causation as binary rather than probabilistic (Nordgaard & Rasmusson 2012; Dollaghan 2004).

LRs also spare communicators from coming off as confabulators when an independent-minded person asks “what does it mean to say indirectly/proximately/systematically caused?”

The statement “this event was 10x more consistent with the hypothesis that mean global temperatures have increased by this amount rather than having remained constant” in relation to a specified period conveys exactly what the communicator means and in terms that ordinarily intelligent people can understand (Hansen et al. 2012). 
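For concreteness, here is a minimal sketch of the arithmetic behind a statement of that form. The event probabilities are invented for illustration; real values would come from the underlying climate models, not from this sketch.

```python
# Hypothetical probabilities, for illustration only; real values would
# come from climate-model ensembles.
p_event_given_warming = 0.020   # Pr(heat event | warmed climate)
p_event_given_baseline = 0.002  # Pr(heat event | climate of a century ago)

# The likelihood ratio: how much more consistent the observed event is
# with the warming hypothesis than with the no-change hypothesis.
likelihood_ratio = p_event_given_warming / p_event_given_baseline  # ~10

# A reader with prior odds on the warming hypothesis can update them
# by Bayes' rule: posterior odds = prior odds * likelihood ratio.
prior_odds = 1.0  # e.g., 50/50 before observing the event
posterior_odds = prior_odds * likelihood_ratio  # ~10:1 in favor
```

The communicator's claim maps onto the likelihood ratio alone; how much total credence to give the warming hypothesis is left, appropriately, to the reader's own priors.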

Or in any case, that is my hypothesis.  While science communicators are doing the best they can to enlighten people in real time, science-of-science-communication researchers can help by empirically assessing the methods they are using.


Dollaghan, C.A., 2004. Evidence-based practice in communication disorders: what do we know, and when do we know it? Journal of Communication Disorders, 37(5), 391-400.

Hansen, J., M. Sato & R. Ruedy, 2012. Perception of climate change. Proceedings of the National Academy of Sciences, 109(37), E2415-E23.

Lempert, R.J., D.G. Groves & J.R. Fischbach, 2013. Is it Ethical to Use a Single Probability Density Function?, Santa Monica, CA: RAND Corporation.

Nordgaard, A. & B. Rasmusson, 2012. The likelihood ratio as value of evidence—more than a question of numbers. Law, Probability and Risk, 11(4), 303-15.



Mistrust or motivated misperception of scientific consensus? Talk today at NAS

For today’s lecture at Nat’l Acad. of Sci.

We’ll see how far I can get in 30 mins... (slides here).


Only in the Liberal Republic of Science . . . religious individuals trust science more than organized religion!

So I popped open a can of data—General Social Survey 2014 (the latest available)—a couple of days ago in anticipation of the talk I’m doing on Wednesday & I found out something pretty cool.

The thing had to do with responses to the GSS’s “confidence in institutions” module.  The module, which has been part of the Survey for over 40 years, asks respondents to indicate “how much confidence”—“hardly any,” “only some,” or “a great deal”—they have in the “people running” 13 institutions:

a. Banks and Financial Institutions

b. Major Companies

c. Organized Religion

d. Education

e. Executive Branch of the Federal Government

f. Organized Labor

g. Press

h. Medicine

i. TV

j. U.S. Supreme Court

k. Scientific Community

l. Congress

m. Military

Over the life of the measure, ratings for nearly every one of these institutions have declined “with one exception” (Smith 2013). “The exception is . . . the Scientific Community,” in whom confidence “has varied little and shown no decline.”  So much for Americans’ “growing distrust” of science.

In fact, over that entire period, “the people running” the “Scientific community” have ranked second, initially to those “running” medicine, but in more recent years to the “people running” the “military.” One can see that in this graphic, which I generated with the 1972-2014 dataset:
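For what it's worth, the tabulation behind a ranking like that can be sketched in a few lines. The toy responses below are invented; a real analysis would use the GSS respondent-level file and survey weights.

```python
# Map the GSS response categories onto a numeric scale (an assumption
# of this sketch; other scorings are possible).
scores = {"hardly any": 1, "only some": 2, "a great deal": 3}

# Invented mini-dataset: per-institution lists of responses.
responses = {
    "Scientific Community": ["a great deal", "a great deal", "only some"],
    "Organized Religion":   ["only some", "hardly any", "a great deal"],
    "Press":                ["hardly any", "only some", "hardly any"],
}

# Mean confidence per institution, then rank from highest to lowest.
means = {inst: sum(scores[r] for r in rs) / len(rs)
         for inst, rs in responses.items()}
ranking = sorted(means, key=means.get, reverse=True)
print(ranking)  # ['Scientific Community', 'Organized Religion', 'Press']
```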

But what about those supposedly “antiscience” groups like conservatives and religious folks?

Turns out that they have displayed a remarkably high and consistent degree of confidence in those “running” the “Scientific community,” too.  Across the life of the measure, both groups have consistently ranked the “Scientific community” second or (in the case of religious folks, for one time interval) third in confidence-worthiness.

Indeed, conservatives ranked the “people running” the “Scientific community” higher than the “people running” the “Executive branch” of the federal government during the presidency of Ronald Reagan.


Citizens who are above average in religiosity have consistently ranked the “people running” the “Scientific community” ahead of the “people running” the “institution” of “Organized religion.”

So cheer up: there is no shortage of trust in and respect for science in our pluralistic liberal democracy.

Probably the only Americans who today don’t share this high regard for science are the “people” now “running” the “Executive branch.”

They are the true “enemy of the people”--all of them--in the Liberal Republic of Science.


Smith, T.W. Trends in Public Attitudes About Confidence in Institutions (NORC, Chicago, IL, 2013).


Next week's talks

Will send postcards.



Here you go -- Science of Science Communication session 6 reading list


"Fake news"--enh. "Alternative Facts presidency"--watch out! (Talk summary & slides)

My remarks, rationally reconstructed, at the AAAS Panel on “Fake News and Social Media: Impacts on Science Communication and Education” (slides here).

1. Putting the bottom line on top.  If one is trying to assess the current health of science communication in our society, then he or she should likely regard the case of “fake news” as akin to a bad head cold.

The systematic propagation of false information that President Trump is engaged in, on the other hand, is a cancer on the body politic of enlightened self-government.

2. Conjectures inviting refutation. I’ll tell you why I see the “alternative facts presidency” as so much more serious than “fake news.” But before I continue, I want to issue a proviso: namely, that everything I think on these matters is in the nature of informed conjecture. 

I will be drawing on the dynamic of identity-protective reasoning to advance my claims (Flynn et al. 2017; Kahan 2010). Because we have learned so much about mass opinion from studies featuring this dynamic, it makes perfect sense to suspect this form of information processing will determine how people react to fake news and to the stream of falsehoods that flow continuously from the Trump administration.

But we should recognize that these phenomena are different from the ones that have supplied the focus for the study of identity-protective reasoning.

Other dynamics—including ones that also reflect well-established mechanisms of cognition—might support competing hypotheses.

Accordingly, it’s not appropriate to stand up in front of you and say “here is what social science tells us about fake news and presidential misinformation . . . .”  Social science hasn’t spoken yet. Unless he or she has data that directly address these phenomena, anyone who tells you that “social science says” this or that about “fake news” is engaged in story-telling, a practice that can itself mislead the public and distort scholarly inquiry.

I will, for purposes of exposition, speak with a tone of conviction.  But I’m willing to do that only because I can now be confident that you’ll understand my position to be a provisional one, reflecting how things look to me at the Bayesian periphery of a frontier that warrants (demands) empirical exploration. Once valid studies start to accumulate, I am prepared to pull up stakes and move in the direction they prescribe, should it turn out that the ground I’m standing on now is insecure.

3.  Models.  I’m going to use two simple models to guide my exposition.  I’ll call one the “passive aggregator theory” (PAT).  PAT envisions a credulous public that is pushed around by misinformation emanating from powerful economic and political interest groups.

That model, I will contend, is simply wrong.

The truth is something closer to the second model I want you to consider.  This one can be called the “motivated public theory” (MPT).  According to MPT, members of the public are unconsciously impelled to seek out information that supports the view of the identity-defining group they belong to and to dismiss as non-credible any information that challenges that position. 

Where the public is motivated to see things in an identity-reinforcing way, it will be very profitable to create misinformation that gives members of the public what they want—namely, corroboration that their group’s positions are right, and those of their benighted rival wrong.

In my view, that’s what the fake news we saw during the election was all about.  Some smart people in Macedonia or wherever set up sites with scandalous—in fact, outright incredible—headlines to direct traffic to websites that had agreed to pay them to do exactly that.  Indeed, every fake news story was ringed with classic click bait features on overcoming baldness, restoring wrinkled skin, curing erectile dysfunction, and the like.

On the MPT account, the only people who’d be enticed to read such material would be people already predisposed to believe (or maybe fantasize) that the subjects of the stories (Hillary Clinton and Donald Trump, for the most part) were evil or stupid enough to engage in the behavior the stories describe. The incremental effect of these stories in shaping their opinions would be nil.

Same for those predisposed not to believe the stories.  They’d be unlikely to see most of them because of the insularity of political-news networks in social media. But even if they saw them, they’d dismiss them out of hand as noncredible.

On net, no one’s view of the world would change in any meaningful way.

4. Empirics. Consider some data that makes a conjecture like this plausible.

a. In the study (Kahan et al., in press), ordinary members of the public were instructed to determine the results of an experiment by looking at a two-by-two contingency table.  The right way to interpret information presented in this form (a common one for presenting experimental research) is to look at the ratios of positive to negative impacts conditional on the treatment.  The subjects who did this would get the correct answer.

But most people don’t correctly interpret 2x2 contingency tables or alternative formulations that convey the same information. Instead, they simply compare the number of positive and negative results in the cells for the treatment condition. Or, if they are a little smarter, they do that and also compare the number of positive results in the treatment and untreated control conditions.

Anyone following that strategy would get the “wrong” answer.
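To make the two strategies concrete, here is a sketch using cell counts of the kind such a table might contain. The numbers are chosen for illustration so that the heuristic and correct strategies disagree; they are not presented as the study's actual stimuli.

```python
# Hypothetical 2x2 results (counts invented for illustration):
#                 rash improved   rash got worse
# used treatment       223              75
# no treatment         107              21
treated_better, treated_worse = 223, 75
control_better, control_worse = 107, 21

# Heuristic strategy: compare raw counts within the treatment row.
# More patients improved than worsened, so the treatment "works."
heuristic_says_works = treated_better > treated_worse  # True

# Correct strategy: compare the ratio of good to bad outcomes
# *conditional* on treatment status.
treated_ratio = treated_better / treated_worse  # ~2.97 improved per worsened
control_ratio = control_better / control_worse  # ~5.10 improved per worsened

# The untreated group improved at a higher rate, so, properly read,
# these data suggest the treatment made things worse.
correct_says_works = treated_ratio > control_ratio  # False
```

The heuristic reader is seduced by the large raw numbers in the treatment row; the correct reader conditions on treatment status and reaches the opposite conclusion.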

The design also had an experimental component. Half the subjects were told that the 2x2 summarized results—better or worse complexions—for a new skin-rash treatment.  The other half were told that it reflected the results—violent crime up versus violent crime down—of a law that permitted citizens to carry concealed weapons in public.

In the skin-rash condition, the likelihood of getting the answer right turned only on the Numeracy (quantitative-reasoning proficiency) of the subjects, regardless of whether they were right-leaning or left-leaning.

But in the gun-control condition, high-Numeracy subjects were likely to get the answer right only when the data, properly interpreted, supported the position that was dominant in their ideological group. When the data, properly interpreted, supported their ideological rival’s position, the subjects highest in Numeracy were no more likely to get the answer correct than those who were low in Numeracy. Essentially they used their reasoning proficiencies to pry open a confabulatory escape hatch from the logical trap in which they found themselves.

As a result, the highest Numeracy subjects were the most divided on what the data signified.

This is a result consistent with MPT.  If it captures the way people reason outside the lab, then we should expect to see not only that members of opposing affinity groups are polarized on contentious empirical issues, but also that the degree of polarization between their members increases in lockstep with diverse citizens’ science-comprehension capacities.

And indeed, that is what we see (Kahan 2016).

b. Now consider the significance of this for fake news.  

From this simple model, we can see how identity-protective reasoning can profoundly divide opposing cultural groups.  Yet no one was being misled about the relevant information. Instead, the subjects were misleading themselves—to avoid the dissonance of reaching a conclusion contrary to their political identities.

Nor was the effect a result of credulity or any like weakness in critical reasoning. 

On the contrary, the very best reasoners—the ones best situated to make sense of the evidence—were the ones who displayed the strongest tendency toward identity-protective reasoning.

Because biased information-search is also a consequence of identity-protective cognition, we should expect that people who reason this way will be much more likely to encounter information that reinforces rather than undermines their predispositions.

Of course, people might now and again stumble across “fake news” that goes against their predispositions, too.  But because we know such people are already disposed to bend even non-misleading information into a shape that affirms rather than threatens their identities, there is little reason to expect them to credit “fake news” when the gist of it defies their political preconceptions.

These are inferences that support MPT over PAT.

5. As I stated at the outset, we shouldn’t equate the Trump Administration’s persistent propagation of misinformation with the misinformation of the cartoonish “fake news” providers.  The latter, I’ve just explained, are likely to have only a small effect, or none at all, on the science communication environment; the former, however, fills that environment with toxins that enervate human reason.

Return to the “motivated public theory.” We shouldn’t be satisfied to treat a “motivated public” as exogenous. How do people become motivated, identity-protective reasoners?

They aren’t, after all, on myriad issues (e.g., GM foods) on which we could easily imagine conflict—and on which conflict actually exists in other places (e.g., GM foods in Europe).

A likely answer, my collaborators and I concluded in a recently published study (Kahan et al. 2017), is the advent of culturally toxic memes.

Memes are self-propagating ideas or practices that enjoy wide circulation by virtue of their salience.

Culturally toxic memes are ones that fuse positions on risks or similar policy-relevant facts to individual identities. They operate primarily by stigmatizing those who hold such positions as stupid and evil.

When that happens, people gravitate toward habits of mind that reinforce their commitment to their groups’ positions. They do that because holding a position consistent with others in their group is more important to them—more consequential for their well-being—than is holding a position that is correct.

What an ordinary member of the public thinks about climate change, e.g., will not affect the risk that it poses to her or to anyone she cares about. The impact she has as an individual consumer or an individual voter is too small to make any real difference.

But given what holding such a position has come to signify about who one is—whose side one is on in a vicious struggle between competing groups for cultural ascendency—forming a belief (an attitude, really) that estranges her from her peers could have devastating psychic and material consequences.

Of course, when everyone resorts to this form of reasoning simultaneously, we’re screwed.  Under these conditions, citizens of pluralistic democratic society will fail to converge, or converge as quickly as they should, on valid empirical evidence about the dangers they face and how to avert them (Kahan et al. 2012).

The study we conducted modeled how exposure to toxic memes (ones linking the spread of Zika to global warming or to illegal immigrants) could rapidly polarize cultural groups that are now largely in agreement about the dangers posed by the Zika virus.

This is why we should worry about Trump: his form of misinformation, combined with the office that he holds, makes him a toxic-meme propagator of unparalleled influence.

When Trump spews forth with lies, the media can’t simply ignore him, as they would a run-of-the-mill crank. What the President of the United States says always compels coverage.

Such coverage, in turn, impels those who want to defend the truth to attack Trump in order to try to undo the influence his lies could have on public opinion.

But because the ascendency of Trump is itself a symbol of the status of the cultural groups that propelled him to the White House, any attack on him for lying is likely to invest his position with the form of symbolic significance that generates identity-protective cognition: the fight communicates a social meaning—this is what our group believes, and that is what our enemies believe—that drowns out the facts (Nyhan & Reifler 2010; Nyhan, Reifler & Ubel 2013).

We aren’t polarized today on the safety of universal childhood immunization (Kahan 2013; CCP 2014). But we could easily become so if Trump continues to lie about the connection between vaccinations and autism.

We aren’t polarized today on the means appropriate to counteract the threat of the Zika virus (Kahan et al. 2017).  But if Trump tries to leverage public fear of Zika into support for tightening immigration laws, we could become politically polarized—and cognitively impeded from recognizing the best scientific evidence on spread of this disease.

Trump is uniquely situated, and apparently emotionally or strategically driven, to enlarge the domain of issues on which this reason-effacing dynamic degrades our society’s capacity to recognize and give proper effect to decision-relevant science.

6.  Trump, in sum, is our nation’s science-communication-environment polluter-in-chief. We shouldn’t let concern over “fake news” on Facebook distract us from the threat he uniquely poses to enlightened self-government or from identifying the means by which the threat posed by his style of political discourse can be repelled.


CCP, Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Experimental Investigation (Jan. 27, 2014).

Flynn, D.J., Nyhan, B. & Reifler, J. The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Political Psychology 38, 127-150 (2017).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahan, D.M. Culturally antagonistic memes and the Zika virus: an experimental test. J Risk Res 20, 1-40 (2017).

Kahan, D.M. The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning Is and How to Measure It. in Emerging Trends in the Social and Behavioral Sciences (John Wiley & Sons, Inc., 2016).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Behavioural Public Policy  (in press).

Nyhan, B. & Reifler, J. When corrections fail: The persistence of political misperceptions. Polit Behav 32, 303-330 (2010).

Nyhan, B., Reifler, J. & Ubel, P.A. The Hazards of Correcting Myths About Health Care Reform. Medical Care 51, 127-132 (2013).







Politically biased information processing & the conjunction fallacy

So everyone probably is familiar with the “conjunction fallacy.”  It figures in Tversky & Kahneman’s famous “Linda problem”:

 Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

1. Linda is a bank teller.

2. Linda is a bank teller and is active in the feminist movement.

According to T&K (1983), about 85% of people select 2. This is a mistake, in their view, because “Linda is a bank teller” subsumes all the cases in which she is a bank teller—both those in which she is a “bank teller active in the feminist movement” and those in which she is a “bank teller not active in the feminist movement.” On this reading, belonging to class 2 cannot logically be more probable than belonging to class 1.

Nevertheless, people make the mistake because 2 is more concrete and conveys a picture that is more vivid than 1.  Those who over-rely on heuristic, “System 1” information processing are thus likely to seize on it as the “right answer.”  Individuals who score higher in conscious, effortful, “System 2” processing tend to be more likely to supply the correct answer (Toplak, West & Stanovich 2011).
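The containment logic can be verified mechanically. The simulation below uses arbitrary base rates, which are assumptions of this sketch; the point, that a conjunction can never be more frequent than either conjunct alone, doesn't depend on them.

```python
import random

random.seed(1)

# Simulate a population; each person independently is or isn't a bank
# teller and is or isn't active in the feminist movement.  The base
# rates (0.05, 0.30) are arbitrary assumptions of this sketch.
people = [
    {"teller": random.random() < 0.05, "feminist": random.random() < 0.30}
    for _ in range(100_000)
]

tellers = sum(p["teller"] for p in people)
feminist_tellers = sum(p["teller"] and p["feminist"] for p in people)

# Class 2 (feminist bank tellers) is a subset of class 1 (bank
# tellers), so its count can never be larger.
print(feminist_tellers <= tellers)  # True, for any seed or base rates
```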

What happens, though, when the individual actor featured in the problem behaves in a manner that evinces bad character, and the more vivid “choice 2” includes information that he possesses certain political outlooks?  People tend to attribute bad character to those who disagree with them politically. So will the likelihood of their picking choice 2 be higher if the actor’s political outlooks differ from their own?

We wanted to figure this out. So in our variant of the “Linda problem,” we informed our subjects, approximately 1,200 ordinary people, that

Richard is 31 years old. On his way to work one day, he accidentally backed his car into a parked van. Because pedestrians were watching, he got out of his car. He pretended to write down his insurance information. He then tucked the blank note into the van’s window before getting back into his car and driving away.

Later the same day, Richard found a wallet on the sidewalk. Nobody was looking, so he took all of the money out of the wallet. He then threw the wallet in a trash can.

We then assigned them to one of three conditions:


1. ex-felon condition

Which of these two possibilities do you think is more likely?

(a) Richard is self-employed ____

(b) Richard is self-employed and a convicted felon ___

2. procontrol.  

Which of these two possibilities do you think is more likely?

(a) Richard is self-employed ____

(b) Richard is self-employed and a very strong supporter of strict gun control laws? ___

3. anticontrol

Which of these two possibilities do you think is more likely?

(a) Richard is self-employed ____

(b) Richard is self-employed and a very strong opponent of strict gun control laws? ___

The motivation to test this proposition originated in a cool article by Will Gervais (et al. 2011), who found that people are more likely to display the “conjunction fallacy” when “Richard” is described as an “atheist” than when he is described as a “rapist”; we adapted the “Richard” vignette from their study.

What did we find?

Well, first of all, the probability of the conjunction fallacy was highest, regardless of political outlooks, when Richard was described as a convicted felon.  Moreover, this bias grew in magnitude as subjects became more right-leaning in their politics.

But when Richard was described as either a “strong opponent” or a “strong supporter” of gun control laws, left-leaning subjects were slightly more likely to display a bias congenial to their political outlooks. Right-leaning ones displayed no meaningful bias in their appraisals.

So there you go. Make of this what you will!


Gervais, W.M., Shariff, A.F. & Norenzayan, A. Do you believe in atheists? Distrust is central to anti-atheist prejudice. Journal of Personality and Social Psychology 101, 1189 (2011).

Toplak, M., West, R. & Stanovich, K. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition 39, 1275-1289 (2011).

Tversky, A. & Kahneman, D. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review 90, 293-315 (1983).


Those of you waiting for Science of Science Communication Session 4 reading list & questions -- your wait is over!


To make real progress, the science of science communication must leave the lab (at least now and again)

Gave a talk last week at Pew Charitable Trusts, which is keenly interested in how its various projects can benefit from evidence-based science communication.  Slides here.

Main points:

1. Group conflict over policy-relevant science is not due to limitations on individual rationality. Rather, it reflects the consequences of a polluted science-communication environment, in which the entanglement of group identity in contested factual positions forces people to choose between being who they are and knowing what’s known by science.  In such an environment it is perfectly rational for an ordinary member of the public to choose the former: his or her personal actions cannot meaningfully contribute to mitigating (or aggravating) societal risks (e.g., climate change); yet because of what positions on such issues have come to signify about who one is and whose side one is on in acrimonious cultural status conflict, he or she can pay a steep reputational cost for forming beliefs contrary to the ones that prevail in that person’s cultural group.

Fixing the science communication environment requires communication strategies that dissolve the conflict between the two things people do with their reason -- be who they are culturally speaking, and know what is known by science.

2. The two-channel model of science communication is one strategy for disentangling identity and positions on societal risks.  According to the model, individuals process scientific information along both a content channel, which concerns the apparent validity of the information, and a social-meaning channel, which addresses whether accepting such information is consistent with one’s identity. The CCP study reported in Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C. & Braman, D. Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication. Annals of the American Academy of Political and Social Science 658, 192-222 (2015), illustrates this point: after reading a news story that stressed the need for greater carbon-emission limits, individuals culturally disposed to climate skepticism reacted closed-mindedly to evidence of climate change; those who first read a story on the call for greater research on geoengineering, in contrast, responded more open-mindedly to the same climate-change research. The difference can plausibly be linked to the stories’ impact in threatening and affirming, respectively, the group identity of those who are culturally disposed to climate skepticism.

3. It’s time to get out of the lab and get into the field. The two-channel model of science communication is just that—a model of how science communication dynamics work.  It doesn’t by itself tell anyone exactly what he or she should do to promote better public engagement with controversial forms of decision-relevant science in particular circumstances.  To figure that out, social scientists and field communicators must collaborate to determine, through additional empirical study, how positive results in the lab can be reproduced in the field.  

There are more plausible accounts of how to apply such study in real-world circumstances than can plausibly be true—just as there were (and still are) more plausible accounts of why public conflict over science exists in the first place than can all be true.  Just as valid empirical testing was needed to extract the true mechanisms from the sea of the merely plausible in the lab, so valid empirical testing is needed to extract the true accounts of how to make science communication work in the real world.

CCP’s local-government and science filmmaking initiatives are guided by that philosophy. The great work that is being done by Pew-supported scientists and science advocates deserves the same sort of evidence-based science communication support.


Science of Science Communication seminar: Session 3 reading list

Okay okay-- here it is!


America's "alternative facts" on climate change

Okay, I think I get this "alternative facts" business:

Panels (A) and (B) show what it looks like when culturally diverse citizens use their knowledge of facts to do the best they can on a test of their “climate science literacy.”

In contrast, panels (C) and (D) show what it looks like when diverse citizens use their knowledge of facts to be a competent member of a cultural tribe.

Sadly, politics puts to citizens the question—who are you, whose side are you on?—posed by panels (C) and (D).

Fixing that is the greatest challenge that confronts the Liberal Republic of Science.


Aren't you curious to see the published version of "Science Curiosity and Political Information Processing"?!

Here it is-- & it's free for all 14 billion subscribers to this blog!


WSMD? JA! Political outlooks & Ordinary Science Intelligence

This is approximately the 2,92nd episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

Tom De Herdt formed an interesting conjecture, which he posed as follows:

[I]t may well be possible that the increased polarisation (visible in the left-hand graph [from Science Curiosity & Political Information Processing]) is a result not so much of OSI [Ordinary Science Intelligence], but rather of a selection effect: as OSI increases, many people are convinced of higher risk and hence “switch” camp towards the liberal/democrat voters. Only the “stubborn” republicans remain and, by implication, the perceived risk by highly scientifically intelligent republicans decreases.

In other words: in the “high” OSI group, there would be much more democrats than republicans compared to the “low” OSI group?

It must be easy for you to prove this hypothesis wrong (or to confirm it) but i don’t seem to find these data very explicitly mentioned in your paper(s).

My response:

That's an interesting surmise; for sure it is worth considering whether this kind of endogeneity could be creeping in when one assesses how ideological or cultural values influence risk perception.

But here I'd say that the evidence we have on hand makes it unlikely that the results reflect the dynamic you are curious (science curious, in fact) about: that "reflection flight" drives Republicans to the Democratic party, thereby causing the high end of the Ordinary Science Intelligence (OSI) scale to become top-heavy with left-leaning Americans.

Maybe first I should explain something you obviously already know: why that possibility wouldn't show up in the figure you are looking at. The two graphs compare concern about climate change among left- and right-leaning subjects conditional on their having the same OSI scores. So even if there were a disparity in the proportion of right-leaning respondents who score high on OSI, the figures would look exactly the same.

But we can easily look & see if there is such a disparity lurking in the data. Here's what we'd see on the relationship of OSI to partisanship:

As reflected in these probability density distributions, those on the left & those on the right don't differ to any meaningful degree in their OSI scores. The correlation between OSI and scores on the "Left_right" political disposition scale (which is formed by aggregating responses to liberal-conservative & party-identification items) is -0.06 -- it's hard to get much closer to zero than that! (Indeed, people can look pretty foolish if they think a "statistically significant" difference that paltry matters.)
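For readers who want to try this kind of check on their own data, here is a minimal sketch of the calculation described above. The data are simulated stand-ins, not the CCP dataset, and the variable names (`left_right`, `osi`) are assumptions for illustration; the point is simply that with a large sample, an essentially zero correlation can still clear conventional "significance" thresholds.

```python
# Hypothetical sketch (simulated data, NOT the CCP dataset): checking whether
# a partisanship scale and a science-intelligence score are meaningfully
# correlated.
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # roughly the scale of a large national survey sample

# Simulated stand-ins for the variables described in the post:
# "left_right" aggregates liberal-conservative & party-id items;
# "osi" is an Ordinary Science Intelligence score. Both are modeled
# here as standardized and independent.
left_right = rng.standard_normal(n)
osi = rng.standard_normal(n)

# Pearson correlation between the two scales
r = np.corrcoef(left_right, osi)[0, 1]
print(f"Pearson r = {r:.2f}")

# With n this large, even |r| ~ 0.06 can be "statistically significant";
# the substantive question is whether a correlation that small matters.
```

The same two lines (`np.corrcoef` on the two score vectors) apply directly to real survey columns once they are loaded into arrays.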

Or at least that's how it looks to me.


Presentation jeopardy: here's the answer; what's the question?

It's obviously a problem if one's research strategy involves aimlessly collecting a shitload of data and then fitting a story to whatever one finds.

But for a presentation, it can be a fun change of pace to start with the data and then ask the audience what the research question was. I'll call this the "Research Presentation 'Jeopardy' Opening."

I tried this strategy at the Society for Personality and Social Psychology meeting panel I was on last Saturday. If I hadn't been on a 15-min clock -- if, say, the talk had been a longer one for a paper workshop or seminar -- I'd have actually called on members of the audience to offer and explain their guesses. Instead I went forward indicating what questions I, as the Alex Trebek of the proceedings, would count as "correct."

But there's no such constraint here on the CCP Blog.  So consider these slides & then tell me what question you think the data are the answer to! For my answers/questions, check out the entire slide show.

Slide 1:


Slide 2:


Slide 3:


Slide 4:


Slide 5:

Slide 6:



Science of Science Communication seminar: Session 2 reading list

Ready ... set ... go!



Synopses of upcoming talks

I usually don't post these until the day before or the day of, but it occurs to me that that's wasting the opportunity to solicit feedback from the 14 billion subscribers to this site, who might well suggest something that improves my actual presentation.


For presentation this Saturday at the Society for Personality and Social Psychology meeting in San Antonio:

Cognitive Dualism and Science Comprehension

I will present evidence of cognitive dualism: the use of one set of information-processing strategies to form beliefs (e.g., in divine creation; the nonexistence of climate change) essential to a cultural identity and another to form alternative beliefs (in evolution; or climate change) essential to instrumental ends (medical practice; adaptation).

Then these at the American Association for the Advancement of Science in Boston on Feb. 17 & 18:

America's Two Climate Changes

There are two climate changes in America: the one people “believe” or “disbelieve” in order to express their cultural identities; and the one about which people acquire and use scientific knowledge in order to make decisions of consequence, individual and collective. I will present various forms of empirical evidence—including standardized science literacy tests, lab experiments, and real-world field studies in Southeast Florida—to support the “two climate changes” thesis.

Does "fake news" matter?

The advent of “fake news” disseminated by social media is a relatively novel phenomenon, the impact of which has not been extensively studied. Rather than purporting to give an authoritative account, then, I will describe two competing models that can be used to structure empirical investigation of the effect of “fake news” on public opinion.   The information aggregator account (IA) sees individuals’ beliefs as a register of the sum total of information sources to which they’ve been exposed.  The motivated processor account (MP), in contrast, treats individuals’ predispositions as driving both their search for information and the weight they assign any information they are exposed to. These theories generate different predictions about “fake news”: that it will significantly distort public opinion, in the view of IA;  or that it will be near irrelevant, in the view of MP.  In addition to discussing the provenance of these theories in the science of science communication, I will identify some of the key measurement challenges they pose for researchers and how those challenges can be surmounted.



What's on tap for spring semester? "Science of Science Communication" seminar!

First session, on HPV vaccine, is tomorrow.

I've posted excerpts from this "general information" document before, but having consulted the rulebook on blogs, I found there is no provision that bars repeating oneself (over & over & over, in fact).

I don't think I'll post summaries for every session this yr. Thanks to Tamar Wilner (e.g., here), that worked incredibly well the last time I taught this seminar.  But precisely b/c it did, the utility of a "virtual" companion for this yr's run strikes me as low.

Of course, if anyone wants to argue that I'm wrong, I could change my mind. Especially if they agree to be this yr's Tamar Wilner (Tamar Wilner is prohibited from doing so, in fact!)

 From the course "general information" document:

          1. Overview. The most effective way to communicate the nature of this course is to identify its motivation.  We live in a place and at a time in which we have ready access to information—scientific information—of unprecedented value to our individual and collective welfare. But the proportion of this information that is effectively used—by individuals and by society—is shockingly small. The evidence for this conclusion is reflected in the manifestly awful decisions people make, and outcomes they suffer as a result, in their personal health and financial planning. It is reflected too not only in the failure of governmental institutions to utilize the best available scientific evidence that bears on the safety, security, and prosperity of their members, but in the inability of citizens and their representatives even to agree on what that evidence is or what it signifies for the policy tradeoffs that acting on it necessarily entails. 

            This course is about remedying this state of affairs. Its premise is that the effective transmission of consequential scientific knowledge to deliberating individuals and groups is itself a matter that admits of, and indeed demands, scientific study.  The use of empirical methods is necessary to generate an understanding of the social and psychological dynamics that govern how people (members of the public, but experts too) come to know what is known to science. Such methods are also necessary to comprehend the social and political dynamics that determine whether the best evidence we have on how to communicate science becomes integrated into how we do science and how we make decisions, individual and collective, that are or should be informed by science. 

            Likely you get this already: but this course is not simply about how scientists can avoid speaking in jargony language when addressing the public or how journalists can communicate technical matters in comprehensible ways without mangling the facts.  Those are only two of many "science communication" problems, and as important as they are, they are likely not the ones in most urgent need of study (I myself think science journalists have their craft well in hand, but we’ll get to this in time).  Indeed, in addition to dispelling (assaulting) the fallacy that science communication is not a matter that requires its own science, this course will self-consciously attack the notion that the sort of scientific insight necessary to guide science communication is unitary, or uniform across contexts—as if the same techniques that might help a modestly numerate individual understand the probabilistic elements of a decision to undergo a risky medical procedure were exactly the same ones needed to dispel polarization over climate science! We will try to individuate the separate domains in which a science of science communication is needed, and take stock of what is known, and what isn’t but needs to be, in each. 

            The primary aim of the course comprises these matters; a secondary aim is to acquire a facility with the empirical methods on which the science of science communication depends.  You will not have to do empirical analyses of any particular sort in this class. But you will have to make sense of many kinds.  No matter what your primary area of study is—even if it is one that doesn’t involve empirical methods—you can do this.  If you don’t yet understand that, then perhaps that is the most important thing you will learn in the course. Accordingly, while we will not approach study of empirical methods in a methodical way, we will always engage critically the sorts of methods that are being used in the studies we examine, and I from time to time will supplement readings with more general ones relating to methods.  Mainly, though, I will try to enable you to see (by seeing yourself and others doing it) that apprehending the significance of empirical work depends on recognizing when and how inferences can be drawn from observation: if you know that, you can learn whatever more is necessary to appreciate how particular empirical methods contribute to insight; if you don’t know that, nothing you understand about methods will furnish you with reliable guidance (just watch how much foolishness empirical methods separated from reflective, grounded inference can involve).

