
Thursday
Mar232017

Should I update my priors on partisanship & trust in industry vs. university scientists? By how much & in what direction?!

I'm still stuck in the GSS can. Actually, it's more like a bag of potato chips; you can't stop munching until you've emptied the thing.

But anyway, the 2006 GSS had an item that solicited respondents' attitudes toward "industry" vs. "university scientists." 

Well, "we all know" that conservatives hold university scientists in contempt for their effemenate, elitist ways & that liberals regard industry scientists as shills.

But here's what GSS says about partisanship & industry vs. university scientists . . . .

WEKS strikes again? Or is this just more survey artifact?

Maybe this ...

is more informative?  Or will people w/ different priors just disagree about the practical significance of this difference in the probability of finding industry scientists less reliable than university ones?...

Wednesday
Mar222017

What can we *really* conclude from the GSS's 2010 item on the risk of GM/GE crops? An expert weighs in

Never fails! My posts from “yesterday”™ and “the day before yesterday”™ have lured a real, bona fide expert to come forward. The expert in this case is William Hallman, the Chair of the Department of Human Ecology and faculty member of the Department of Nutritional Sciences and of the Bloustein School of Planning and Public Policy at Rutgers University. He is also currently a Resident Scholar in the Science of Science Communication initiative at the Annenberg Public Policy Center.

William Hallman:

As you probably suspect, I am sympathetic to your argument that because so few Americans really know anything about them, asking people about the safety of GM crops is problematic in general.  So, starting with the premise that most Americans are unlikely to have a pre-formed opinion about the safety of GM crops before being asked to think about the issue in the survey, I think that we should assume that most of the answers given to the question are impressionistic, and likely influenced by the wording of the question itself. Which is:

"Do you think that modifying the genes of certain crops is: 'Extremely dangerous for the environment . . . Not dangerous at all for the environment'?"

I agree with the idea suggested by @Joshua, that because the risk targeted is “danger to the environment,” it is plausible that the differences seen are because conservative Republicans may be less likely to endorse the idea that anything is dangerous for the environment.  If you were to ask about risks to human health, you might get a different pattern of responses.

But that's not all. The root of the question refers to crops. That is, to plants/agriculture, and not to food. So, are conservative Republicans also less likely to view crops/agriculture as a threat to the environment in general? My guess is 'probably,' but I don't have good data to back up that assertion.

But wait, there's more . . . The question doesn't actually refer to GMOs. It asks whether modifying the genes of crops is dangerous. I don't know where the specific question falls in the overall line of questioning. Were there questions about GMOs preceding this? If not, participants may not have grasped that the question was really about genetic engineering. Technically, you can "modify the genes of certain crops" through standard crossbreeding/hybridization methods. It is, in part, why the FDA has never liked the broad term "Genetic Modification." If the question had asked, "Do you think the genetic engineering of crops is dangerous for the environment," I think you would get a different pattern of responses. As a side note, I have ancient data showing that more than a decade ago, Americans were as likely to approve of foods produced through crossbreeding as of foods produced through genetic engineering.

Sunday
Mar192017

More GM food risk data to chew on--compliments of GSS

Okay, then. 

Here are some simple data analyses that reflect how a wider range of GSS science-attitude variables relate to perceptions that GM crops harm the environment, and how that relationship is affected by partisanship.

I'd say they tell basically the same story as my initial analysis of CONSCI, the item that measures "confidence" in "those running" the "scientific community": higher, pro-science scores on these measures are associated with less concern over GM crops. This is so particularly among right-leaning respondents; indeed, left-leaning ones don't really move at all when one looks at risk perceptions in relation to the composite "proscience" scale.

There is also a small zero-order correlation (r(1189) = -0.12, p < 0.01) between GENEGEN—the GSS's 2010 GM risk perception item—and the composite left-right scale that I constructed, which is coded so that higher scores denote greater conservatism.
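For readers who want to poke at the raw file themselves, here is a minimal sketch of how a composite scale and zero-order correlation of this sort could be computed. GENEGEN, POLVIEWS & PARTYID are the actual GSS mnemonics, but the file name, recodings, and coding directions are my assumptions, not necessarily the ones behind the reported figure:

```python
import pandas as pd
from scipy import stats

# Hypothetical extract of the GSS 2010 cross-section.
gss = pd.read_csv("gss2010_extract.csv")

# Composite left-right scale: standardize the 7-point ideology (POLVIEWS)
# and party-identification (PARTYID) items and average them, so that
# higher scores denote greater conservatism.
z = lambda s: (s - s.mean()) / s.std()
gss["left_right"] = (z(gss["POLVIEWS"]) + z(gss["PARTYID"])) / 2

# Reverse-code GENEGEN (1 = "extremely dangerous" ... 5 = "not dangerous
# at all") into a danger score, so that higher = more perceived danger.
gss["gm_danger_score"] = 6 - gss["GENEGEN"]

valid = gss[["gm_danger_score", "left_right"]].dropna()
r, p = stats.pearsonr(valid["gm_danger_score"], valid["left_right"])
print(f"r({len(valid) - 2}) = {r:.2f}, p = {p:.3f}")
```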

All of this is out of keeping with the usual finding of a lack of partisan influence on GM food risks. I have reported many times that there is no partisan effect when GM food risks are measured with the Industrial Strength Risk Perception measure.  Surveys conducted by other opinion analysts using different measures have shown the same thing.

So what’s going on?

One possibility, suggested by loyal listener @Joshua, is that the GSS’s GM-concern item looks at people’s anxiety about the impact of GM crops on “the environment” as opposed to the safety of consuming GM foods.  The “environmental risk” cue is enough information for the public—which is otherwise pretty clueless (“cueless”?) about GM risks—to recognize how the issue ought to cohere with their political outlooks.

Seems persuasive to me . . . but what do you—the 14 billion daily readers of this blog—think?!

Oh, one more thing: I did a quick search and found only one paper that addresses partisanship and the GSS’s “GENEGEN” item. If others know of additional ones, please let me & all the readers know.

Oh, one more "one more" thing. Here are the raw data:

 

Friday
Mar172017

More GSS "science attitudes" measures & their effect on perception of GM crop risks

So on reflection, what I posted here now seems less cogent than I had thought.  I'm sending it back to the shop so that it can be replaced with something more enlightening on the various additional (that is, in addition to "CONSCI") GSS "science attitude"  items and concern over GM crop risks.

Thursday
Mar162017

Trust in science & perceptions of GM food risks -- does the GSS have something to say on this?

So here’s something fun.

I found it while scraping the bottom of the barrel of a can of GSS data that I had consulted to prepare remarks on the role of “trust in science/of scientists” that I gave at a National Academy of Sciences event a couple of weeks ago.

The GSS has a variety of measures that could be construed as measuring “trust” of this sort. The most famous one is its “Confidence in Institutions” query. It solicits how much “confidence” respondents have in  “those running” various  institutions, including the “science community.” The item permits one of three responses: “hardly any,” “only some,” and “a great deal.”

The wording is kind of odd, but the item is a classic, having been included in every GSS study conducted since 1974. One can find dozens of studies that use it for one thing or another, including to demonstrate partisan differences in trust of science.

Well, it turns out that in 2010, the GSS asked this question:

Do you think that modifying the genes of certain crops is:

1. Extremely dangerous for the environment

2. Very dangerous

3. Somewhat dangerous

4. Not very dangerous

5. Not dangerous at all for the environment.

So I decided to see what would happen when one uses trust in science, as measured by the institutional confidence item, to predict responses to the genetically modified crops item. I also included a political orientation measure formed by aggregating responses to the GSS's 7-point liberal-conservative ideology item and its 7-point party-identification item.

In my analysis, I measured the probability  that a respondent would select a response from 1-3—the ones that evince a perception of danger.
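Here is a hedged sketch of the kind of model that implies, assuming a logit specification with a confidence-by-partisanship interaction; the recodings and the exact specification are mine, and the real analysis may differ:

```python
import pandas as pd
import statsmodels.formula.api as smf

gss = pd.read_csv("gss2010_extract.csv")  # hypothetical extract, as above

# Composite left-right scale, as in the earlier snippet.
z = lambda s: (s - s.mean()) / s.std()
gss["left_right"] = (z(gss["POLVIEWS"]) + z(gss["PARTYID"])) / 2

# Outcome: 1 if the respondent chose a response from 1-3 on GENEGEN
# (the "dangerous" side of the scale), 0 otherwise.
gss["gm_danger"] = (gss["GENEGEN"] <= 3).astype(int)

# Reverse-code CONSCI (raw coding assumed: 1 = "a great deal" ...
# 3 = "hardly any") so that higher = more confidence in science.
gss["conf_sci"] = 4 - gss["CONSCI"]

model = smf.logit("gm_danger ~ conf_sci * left_right", data=gss).fit()

# Predicted probability of a "danger" response for a conservative
# Republican (here, +1 SD on left_right) at "hardly any" (conf_sci = 1)
# vs. "a great deal" (conf_sci = 3) of confidence.
newdata = pd.DataFrame({"conf_sci": [1, 3], "left_right": [1.0, 1.0]})
p_lo, p_hi = model.predict(newdata)
print(f"estimated difference: {p_hi - p_lo:+.0%}")
```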

Here’s what I found:

 

Interesting!

I hadn't expected partisan identity to matter at all, given that surveys now typically find no meaningful correlation between attitudes toward GM foods and party identity. You can see, though, that there is a bit of a partisan effect here, with right-leaning respondents inclined to find less danger in GM crops as their "confidence" in the "scientific community" increases. For a "conservative Republican," the estimated difference in the probability of finding GM crops to be environmentally dangerous at the "a great deal" vs. "hardly any" response levels is -18% (+/- 15% at 0.95 LOC).

Left-leaning respondents, in contrast, don't budge a centimeter as their science-community “confidence” increases (-3%, +/- 12%). 

What should we make of this, if anything? 

I’m not sure, actually.  I still am inclined to see responses to GM food questions as meaningless—a survey artifact—given how few people are actually aware of what GM foods are. Obviously, here, the level of concern expressed is way out of line with people’s behavior in consuming prodigious amounts of GM food.

We also don’t have any decent validation of the “confidence in science” measure: I’ve never encountered it being used to explain other attitudes in a way that would give one confidence that it really measures trust in science. The same goes, moreover, for all the other “trust” measures in the GSS, which consistently find high levels of trust in science among politically diverse citizens.

But maybe this finding should nudge me in the other direction?

You  tell me what you think & maybe I’ll revise my view!

 

 

Wednesday
Mar152017

Check it out-- a matchmaking site for scholarly collaborators!

This is pretty cool. I'm going to go out on a limb & predict it will eventually be bought out by one of the on-line dating services, which will then offer one-stop shopping for scholars looking for professional & personal matches.

Monday
Mar132017

Potential Zika polarization in pictures

Maybe you can get the gist of the experiment in pictures? If not, you can always read the (open-access) paper (Kahan, D.M., Jamieson, K.H., Landrum, A. & Winneg, K., Culturally antagonistic memes and the Zika virus: an experimental test, J. Risk Res. 20, 1-40 (2017)).

A model

Toxic memes ...


Affective impacts ...

Information-processing degradation


 

 

Wednesday
Mar082017

The trust-in-science *particularity thesis* ... a fragment

From something I'm working on . . . .



It is almost surely a mistake to think that highly divisive conflicts over science are attributable to general distrust of science or scientists.  Most Americans—regardless of their cultural identities—hold scientists in high regard, and don’t give a second’s thought to whether they should rely on what science knows when making important decisions.  The sorts of  disagreements we see over climate change and a small number of additional factual issues stem from considerations particular to those issues (National Research Council 2016). The most consequential of these considerations are toxic memes, which have transformed positions on these issues into badges of membership in and loyalty to competing cultural groups (Kahan et al 2017; Stanovich & West 2008).

We will call this position the “particularity thesis.”  We will distinguish it from competing accounts of how “attitudes toward science” relate to controversy on policy-relevant facts. We’ve already adverted to two related ones: the “public ambivalence” thesis, which posits a widespread public unease toward science or scientists; and the “right-wing anti-science” thesis, which asserts that distrust of science is a consequence of holding a conservative political orientation or like cultural disposition. . . .

Refs

Kahan, D.M., K.H. Jamieson, A. Landrum & K. Winneg, 2017. Culturally antagonistic memes and the Zika virus: an experimental test. Journal of Risk Research, 20(1), 1-40.

National Research Council 2016. Science Literacy: Concepts, Contexts and Consequences. A Report of the National Academies of Science, Engineering and Medicine. Washington DC: National Academies Press.

Stanovich, K. & R. West, 2008. On the failure of intelligence to predict myside bias and one-sided bias. Thinking & Reasoning, 14, 129-67.

Monday
Mar062017

Some more canned data on religiosity & science attitudes

As I mentioned, in putting together a show for the National Academy of Sciences, I took a look at the 2014 GSS data.  

Here's a bit more of what's in there:

Actually, the left-hand panel is based on GSS 2010 data. But I hadn't looked at that particular item before.

The right-hand panel is based on GSS 2008, 2010, 2012, & 2014.  It is an update of a data display I created before the 2014 data (the most recent that has been released by the GSS) were available.

If, as is reasonable, you want confirmation that the underlying scales I've constructed reliably measure the disposition we independently have good reason to associate with religiosity, here is how these survey respondents respond to the GSS's "evolution" item:

I still find it astonishing that there isn't a more meaningful difference in the attitudes of religious & non-religious respondents on the "science attitude" measures.  Guess I had a case of WEKS on this.  

But these data do reinforce my view that religion is not the enemy of the Liberal Republic of Science.

There are  much more serious destructive forces to worry about . . . .

Friday
Mar032017

Nice LRs! Communicating climate change "causation" 

The use of likelihood ratios here --"climate change made maximum temperatures like those seen in January and February at least 10 times more likely than a century ago"-- makes this pretty good #scicomm, in my view.

Climate-science communicators typically get tied in knots when they address the issue of whether a particular event was "caused" by global warming. The most conspicuous, & conspicuously unenlightening, instance of this occurred in the aftermath of Hurricane Sandy.

Likelihood ratios (LRs) are a productive alternative to this linguistic entanglement, because the former invite and enable critical judgment while the latter attempts to evade it.

Obviously, LRs are only as good as the models that generated them.

But if those models reflect the best available evidence, then a practical person or group can make informed decisions based on how LRs quantify the risk involved (Lempert et al. 2013). That's what is effaced by linguistic tests that purport to treat causation as binary rather than probabilistic (Nordgaard & Rasmusson 2012; Dollaghan 2004).

LRs also spare communicators from coming off as confabulators when an independent-minded person asks "what does it mean to say 'indirectly/proximately/systematically caused'?"

The statement “this event was 10x more consistent with the hypothesis that mean global temperatures have increased by this amount rather than having remained constant” in relation to a specified period conveys exactly what the communicator means and in terms that ordinarily intelligent people can understand (Hansen et al. 2012). 
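The arithmetic behind such a statement is simple enough to do by hand. The probabilities below are invented for illustration and come from no attribution study:

```python
# Invented numbers for illustration only.
p_event_given_warming = 0.50     # P(event | mean temperatures increased)
p_event_given_stationary = 0.05  # P(event | temperatures remained constant)

# Likelihood ratio: how much more consistent the event is with the
# warming hypothesis than with the stationary one.
lr = p_event_given_warming / p_event_given_stationary  # 10.0

# A Bayesian reader then updates: posterior odds = prior odds x LR.
prior_odds = 1.0  # indifferent between the two hypotheses beforehand
posterior_odds = prior_odds * lr
print(f"LR = {lr:.0f}; posterior odds = {posterior_odds:.0f}:1")
```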

Or in any case, that is my hypothesis. While science communicators are doing the best they can to enlighten people in real time, science-of-science-communication researchers can help by empirically assessing the methods they are using.

Refs

Dollaghan, C.A., 2004. Evidence-based practice in communication disorders: what do we know, and when do we know it? Journal of Communication Disorders, 37(5), 391-400.

Hansen, J., M. Sato & R. Ruedy, 2012. Perception of climate change. Proceedings of the National Academy of Sciences, 109(37), E2415-E23.

Lempert, R.J., D.G. Groves & J.R. Fischbach, 2013. Is it Ethical to Use a Single Probability Density Function?, Santa Monica, CA: RAND Corporation.

Nordgaard, A. & B. Rasmusson, 2012. The likelihood ratio as value of evidence—more than a question of numbers. Law, Probability and Risk, 11(4), 303-15.

 

Wednesday
Mar012017

Mistrust or motivated misperception of scientific consensus? Talk today at NAS

For today’s lecture at Nat’l Acad. of Sci.

We’ll see how far I can get in 30 mins... (slides here).


Tuesday
Feb282017

Only in the Liberal Republic of Science . . . religious individuals trust science more than organized religion!

So I popped open a can of data—General Social Survey 2014 (the latest available)—a couple of days ago in anticipation of the talk I’m doing on Wednesday & I found out something pretty cool.

The thing had to do with responses to the GSS's "confidence in institutions" module. The module, which has now been part of the Survey for over 40 years, asks respondents to indicate "how much confidence"—"hardly any," "only some," or "a great deal"—they have in the "people running" 13 institutions:

a. Banks and Financial Institutions

b. Major Companies

c. Organized Religion

d. Education

e. Executive Branch of the Federal Government

f. Organized Labor

g. Press

h. Medicine

i. TV

j. U.S. Supreme Court

k. Scientific Community

l. Congress

Over the life of the measure, ratings for nearly every one of these institutions have declined, "with one exception" (Smith 2013). "The exception is . . . the Scientific Community," in whom confidence "has varied little and shown no decline." So much for Americans' "growing distrust" of science.

In fact, over that entire period, “the people running” the “Scientific community” have ranked second, initially to those “running” medicine, but in more recent years to the “people running” the “military.” One can see that in this graphic, which I generated with the 1972-2014 dataset:

But what about those supposedly “antiscience” groups like conservatives and religious folks?

Turns out that they have displayed a remarkably high and consistent degree of confidence in those "running" the "Scientific community," too. Across the life of the measure, they both have consistently ranked the "Scientific community" as second or (in the case of religious folks for one time interval) third in confidence-worthiness.

Indeed, conservatives ranked the “people running” the “Scientific community” higher than the “people running” the “Executive branch” of the federal government during the presidency of Ronald Reagan.

 

Citizens who are above average in religiosity have consistently ranked the “people running” the “Scientific community”  ahead of the “people running” the “institution” of “Organized religion.”

So cheer up: there is no shortage of trust in and respect for science in our pluralistic liberal democracy.

Probably the only Americans who today don’t share this high regard for science are the “people" now "running” the “Executive branch.” 

They are the true "enemy of the people"--all of them--in the Liberal Republic of Science.

Reference

Smith, T.W. Trends in Public Attitudes About Confidence in Institutions (NORC, Chicago, IL, 2013).

Thursday
Feb232017

Next week's talks

Will send postcards.

 

Wednesday
Feb222017

Here you go -- Science of Science Communication session 6 reading list

Monday
Feb202017

"Fake news"--enh. "Alternative Facts presidency"--watch out! (Talk summary & slides)

My remarks, rationally reconstructed, at the AAAS Panel on “Fake News and Social Media: Impacts on Science Communication and Education” (slides here).

1. Putting the bottom line on top.  If one is trying to assess the current health of science communication in our society, then he or she should likely regard the case of “fake news” as akin to a bad head cold.

The systematic propagation of false information that President Trump is engaged in, on the other hand, is a cancer on the body politic of enlightened self-government.

2. Conjectures inviting refutation. I’ll tell you why I see the “alternative facts presidency” as so much more serious than “fake news.” But before I continue, I want to issue a proviso: namely, that everything I think on these matters is in the nature of informed conjecture. 

I will be drawing on the dynamic of identity-protective reasoning to advance my claims (Flynn et al. 2017; Kahan 2010). Because we have learned so much about mass opinion from studies featuring this dynamic, it makes perfect sense to suspect this form of information processing will determine how people react to fake news and to the stream of falsehoods that flow continuously from the Trump administration.

But we should recognize that these phenomena are different from the ones that have supplied the focus for the study of identity-protective reasoning.

Other dynamics—including ones that also reflect well-established mechanisms of cognition—might support competing hypotheses.

Accordingly, it’s not appropriate to stand up in front of you and say “here is what social science tells us about fake news and presidential misinformation . . . .”  Social science hasn’t spoken yet. Unless he or she has data that directly address these phenomena, anyone who tells you that “social science says” this or that about “fake news” is engaged in story-telling, a practice that can itself mislead the public and distort scholarly inquiry.

I will, for purposes of exposition, speak with a tone of conviction.  But I’m willing to do that only because I can now be confident that you’ll understand my position to be a provisional one, reflecting how things look to me at the Bayesian periphery of a frontier that warrants (demands) empirical exploration. Once valid studies start to accumulate, I am prepared to pull up stakes and move in the direction they prescribe, should it turn out that the ground I’m standing on now is insecure.

3.  Models.  I’m going to use two simple models to guide my exposition.  I’ll call one the “passive aggregator theory” (PAT).  PAT envisions a credulous public that is pushed around by misinformation emanating from powerful economic and political interest groups.

That model, I will contend, is simply wrong.

The truth is something closer to the second model I want you to consider.  This one can be called the “motivated public theory” (MPT).  According to MPT, members of the public are unconsciously impelled to seek out information that supports the view of the identity-defining group they belong to and to dismiss as non-credible any information that challenges that position. 

Where the public is motivated to see things in an identity-reinforcing way, it will be very profitable to create misinformation that gives members of the public what they want—namely, corroboration that their group’s positions are right, and those of their benighted rival wrong.

In my view, that's what the fake news we saw during the election was all about. Some smart people in Macedonia or wherever set up sites with scandalous—in fact, outright incredible—headlines to direct traffic to websites that had agreed to pay them to do exactly that. Indeed, every fake news story was ringed with classic clickbait features on overcoming baldness, restoring wrinkled skin, curing erectile dysfunction, and the like.

On the MPT account, the only people who’d be enticed to read such material would be people already predisposed to believe (or maybe fantasize) that the subjects of the stories (Hillary Clinton and Donald Trump, for the most part) were evil or stupid enough to engage in the behavior the stories describe. The incremental effect of these stories in shaping their opinions would be nil.

Same for those predisposed not to believe the stories.  They’d be unlikely to see most of them because of the insularity of political-news networks in social media. But even if they saw them, they’d dismiss them out of hand as noncredible.

On net, no one’s view of the world would change in any meaningful way.

4. Empirics. Consider some data that makes a conjecture like this plausible.

a. In the study (Kahan et al., in press), ordinary members of the public were instructed to determine the results of an experiment by looking at a two-by-two contingency table.  The right way to interpret information presented in this form (a common one for presenting experimental research) is to look at the ratios of positive to negative impacts conditional on the treatment.  The subjects who did this would get the correct answer.

But most people don't correctly interpret 2x2 contingency tables or alternative formulations that convey the same information. Instead they simply compare the number of positive and negative results in the cells for the treatment condition. Or if they are a little smarter, they do that and look at the number of positive results in both the treatment and the untreated control.

Anyone following that strategy would get the “wrong” answer.
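To see why, consider a toy version of the task; the cell counts below mimic the pattern the study's stimulus used (treatment looks better on raw counts but worse on conditional ratios) and should be treated as illustrative:

```python
# 2x2 contingency table for the skin-treatment version of the task.
#                    rash got better   rash got worse
# used skin cream          223               75
# did not use              107               21
better_treated, worse_treated = 223, 75
better_control, worse_control = 107, 21

# Heuristic strategy: compare raw counts in the treatment row.
# 223 > 75, so the cream "works" -- the wrong answer.

# Correct strategy: compare improvement ratios conditional on treatment.
treated_ratio = better_treated / worse_treated  # ~2.97 better per worse
control_ratio = better_control / worse_control  # ~5.10 better per worse

# The untreated group improved at the higher rate, so the cream is in
# fact associated with worse outcomes.
print(f"treated: {treated_ratio:.2f}, control: {control_ratio:.2f}")
```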

The design also had an experimental component. Half the subjects were told that the 2x2 summarized results—better or worse complexions—for a new skin-rash treatment. The other half were told that it reflected the results—violent crime up versus violent crime down—of a law that permitted citizens to carry concealed weapons in public.

In the skin-rash condition, the likelihood of getting the answer right turned only on the Numeracy (quantitative-reasoning proficiency) of the subjects, regardless of whether they were right-leaning or left-.

But in the gun-control condition, high-numeracy subjects were likely to get the answer right only when the data, properly interpreted, supported the position that was dominant in their ideological group. When the data, properly interpreted, supported their ideological rival's position, the subjects highest in Numeracy were no more likely to get the answer correct than those who were low in Numeracy. Essentially they used their reasoning proficiencies to pry open a confabulatory escape hatch from the logic trap they found themselves in.

As a result, the highest Numeracy subjects were the most divided on what the data signified.

This is a result consistent with MPT.  If it captures the way that people reason outside the lab, then we should expect to see not only that members of opposing affinity groups are polarized on contentious empirical issues. We should expect to see the degree of polarization between their members increasing in lockstep with diverse citizens’ science comprehension capacities.

And indeed, that is what we see (Kahan 2016).

b. Now consider the significance of this for fake news.  

From this simple model, we can see how identity-protective reasoning can profoundly divide opposing cultural groups. Yet no one was being misled about the relevant information. Instead, the subjects were misleading themselves—to avoid the dissonance of reaching a conclusion contrary to their political identities.

Nor was the effect a result of credulity or any like weakness in critical reasoning. 

On the contrary, the very best reasoners—the ones best situated to make sense of the evidence—were the ones who displayed the strongest tendency toward identity-protective reasoning.

Because biased information-search is also a consequence of identity-protective cognition, we should expect that people who reason this way will be much more likely to encounter information that reinforces rather than undermines their predispositions.

Of course, people might now and again stumble across “fake news” that goes against their predispositions, too.  But because we know such people are already disposed to bend even non-misleading information into a shape that affirms rather than threatens their identities, there is little reason to expect them to credit “fake news” when the gist of it defies their political preconceptions.

These are inferences that support MPT over PAT.

5. As I stated at the outset, we shouldn't equate the Trump Administration's persistent propagation of misinformation with the misinformation of the cartoonish "fake news" providers. The latter, I've just explained, are likely to have only a small or no effect on the science communication environment; the former, however, fills that environment with toxins that enervate human reason.

Return to the “motivated public theory.” We shouldn’t be satisfied to treat a “motivated public” as exogenous. How do people become motivated, identity-protective reasoners?

They aren't, after all, on myriad issues (e.g., GM foods) on which we could easily imagine conflict—indeed, on which there actually is conflict in other places (e.g., GM foods in Europe).

A likely answer, my collaborators and I concluded in a recently published study (Kahan et al. 2017), is the advent of culturally toxic memes.

Memes are self-propagating ideas or practices that enjoy wide circulation by virtue of their salience.

Culturally toxic memes are ones that fuse positions on risks or similar policy-relevant facts to individual identities. They operate primarily by stigmatizing those who hold such positions as stupid and evil.

When that happens, people gravitate toward habits of mind that reinforce their commitment to their groups' positions. They do that because holding a position consistent with others in their groups is more important to them—more consequential for their well-being—than is holding a position that is correct.

What an ordinary member of the public thinks about climate change, e.g., will not affect the risk that it poses to her or to anyone she cares about. The impact she has as an individual consumer or an individual voter will be too small to make any real difference.

But given what holding such a position has come to signify about who one is—whose side one is on in a vicious struggle between competing groups for cultural ascendency—forming a belief (an attitude, really) that estranges her from her peers could have devastating psychic and material consequences.

Of course, when everyone resorts to this form of reasoning simultaneously, we’re screwed.  Under these conditions, citizens of pluralistic democratic society will fail to converge, or converge as quickly as they should, on valid empirical evidence about the dangers they face and how to avert them (Kahan et al. 2012).

The study we conducted modeled how exposure to toxic memes (ones linking the spread of Zika to global warming or to illegal immigrants) could rapidly polarize cultural groups that are now largely in agreement about the dangers posed by the Zika virus.

This is why we should worry about Trump: his form of misinformation, combined with the office that he holds, makes him a toxic-meme propagator of unparalleled influence.

When Trump spews forth with lies, the media can’t simply ignore him, as they would a run-of-the-mill crank. What the President of the United States says always compels coverage.

Such coverage, in turn, impels those who want to defend the truth to attack Trump in order to try to undo the influence his lies could have on public opinion.

But because the ascendency of Trump is itself a symbol of the status of the cultural groups that propelled him to the White House, any attack on him for lying is likely to invest his position with the form of symbolic significance that generates identity-protective cognition: the fight communicates a social meaning—this is what our group believes, and that what our enemies believe—that drowns out the facts (Nyhan et al 2010, 2013).

We aren’t polarized today on the safety of universal childhood immunization (Kahan 2013; CCP 2014). But we could easily become so if Trump continues to lie about the connection between vaccinations and autism.

We aren’t polarized today on the means appropriate to counteract the threat of the Zika virus (Kahan et al. 2017).  But if Trump tries to leverage public fear of Zika into support for tightening immigration laws, we could become politically polarized—and cognitively impeded from recognizing the best scientific evidence on spread of this disease.

Trump is uniquely situated, and apparently emotionally or strategically driven, to enlarge the domain of issues on which this reason-effacing dynamic degrades our society’s capacity to recognize and give proper effect to decision-relevant science.

6. Trump, in sum, is our nation's science-communication environment polluter-in-chief. We shouldn't let concern over "fake news" on Facebook distract us from the threat he uniquely poses to enlightened self-government or from identifying the means by which the threat posed by his style of political discourse can be repelled.

Refs

CCP, Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Experimental Investigation (Jan. 27, 2014).

Flynn, D.J., Nyhan, B. & Reifler, J. The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Political Psychology 38, 127-150 (2017).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahan, D.M. Culturally antagonistic memes and the Zika virus: an experimental test. J Risk Res 20, 1-40 (2017).

Kahan, D.M. The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning Is and How to Measure It. in Emerging Trends in the Social and Behavioral Sciences (John Wiley & Sons, Inc., 2016).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Behavioural Public Policy  (in press).

Nyhan, B. & Reifler, J. When corrections fail: The persistence of political misperceptions. Polit Behav 32, 303-330 (2010).

Nyhan, B., Reifler, J. & Ubel, P.A. The Hazards of Correcting Myths About Health Care Reform. Medical Care 51, 127-132 (2013).


 

Friday
Feb172017

Roadtrip

Will send postcards

Thursday
Feb162017

Politically biased information processing & the conjunction fallacy

So everyone probably is familiar with the “conjunction fallacy.”  It figures in Tversky & Kahneman’s famous “Linda  problem”:

 Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

1. Linda is a bank teller.

2. Linda is a bank teller and is active in the feminist movement.

According to T&K (1983), about 85% of people select 2. This is a mistake, in their view, because "Linda is a bank teller" subsumes all the cases in which she is a bank teller and thus logically includes both the cases in which she is a "bank teller active in the feminist movement" and all the cases in which she is a "bank teller not active in the feminist movement." On this reading, belonging to class 2 cannot logically be more probable than belonging to class 1.
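The logic compresses into one line of probability algebra:

```latex
P(\text{teller} \land \text{feminist})
  = P(\text{teller}) \, P(\text{feminist} \mid \text{teller})
  \leq P(\text{teller})
```

Because P(feminist | teller) can be at most 1, the conjunction can never be more probable than either of its conjuncts.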

Nevertheless, people make the mistake because 2 is more concrete and conveys a picture that is more vivid than 1.  Those who over-rely on heuristic, “System 1” information processing are thus likely to seize on it as the “right answer.”  Individuals who score higher in conscious, effortful, “System 2” processing tend to be more likely to supply the correct answer (Toplak, West & Stanovich 2011).

What happens, though, when the individual actor featured in the problem behaves in a manner that evinces bad character, and the more vivid “choice 2” includes information that he possesses certain political outlooks?  People tend to attribute bad character to those who disagree with them politically. So will the likelihood of their picking choice 2 be higher if the actor’s political outlooks differ from their own?

We wanted to figure this out. So in our variant of the “Linda problem,” we informed our subjects, approximately 1200  ordinary people, that

Richard is 31 years old. On his way to work one day, he accidentally backed his car into a parked van. Because pedestrians were watching, he got out of his car. He pretended to write down his insurance information. He then tucked the blank note into the van’s window before getting back into his car and driving away.

Later the same day, Richard found a wallet on the sidewalk. Nobody was looking, so he took all of the money out of the wallet. He then threw the wallet in a trash can.

We then assigned them to one of three conditions:

1. ex-felon condition

Which of these two possibilities do you think is more likely?

(a) Richard is self-employed ____

(b) Richard is self-employed and a convicted felon ___

2. procontrol condition

Which of these two possibilities do you think is more likely?

(a) Richard is self-employed ____

(b) Richard is self-employed and a very strong supporter of strict gun control laws? ___

3. anticontrol condition

Which of these two possibilities do you think is more likely?

(a) Richard is self-employed ____

(b) Richard is self-employed and a very strong opponent of strict gun control laws? ___

The motivation to test this proposition originated in a cool article by Will Gervais (et al. 2011), who found that people are more likely to display the "conjunction fallacy" when "Richard" is described as an atheist than when he is described as, say, a "rapist"; we adapted the "Richard" vignette from their study.

What did we find?

Well, first of all, the probability of the conjunction fallacy was highest, regardless of political outlooks, when Richard was described as a convicted felon.  Moreover, this bias grew in magnitude as subjects became more right-leaning in their politics.

But when Richard was described as either a "strong opponent" or a "strong supporter" of gun control laws, left-leaning subjects were slightly more likely to display a bias congenial to their political outlooks. Right-leaning ones displayed no meaningful bias in their appraisals.
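For those curious how effects like these might be estimated, here is a minimal sketch; the file, variable names, and model specification are all my assumptions rather than the study's actual analysis:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per subject, with
#   fallacy:   1 if the subject picked the conjunction, option (b)
#   condition: "felon", "procontrol", or "anticontrol"
#   conserv:   continuous left-right score, higher = more conservative
df = pd.read_csv("richard_experiment.csv")

# Logit of committing the conjunction fallacy on experimental condition,
# political outlook, and their interaction.
model = smf.logit("fallacy ~ C(condition) * conserv", data=df).fit()
print(model.summary())
```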

So there you go. Make of this what you will!

References

Gervais, W.M., Shariff, A.F. & Norenzayan, A. Do you believe in atheists? Distrust is central to anti-atheist prejudice. Journal of Personality and Social Psychology 101, 1189 (2011).

Toplak, M., West, R. & Stanovich, K. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition 39, 1275-1289 (2011).

Tversky, A. & Kahneman, D. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review 90, 293-315 (1983).

Wednesday
Feb152017

Those of you waiting for Science of Science Communication Session 4 reading list & questions -- your wait is over!

Tuesday
Feb142017

To make real progress, the science of science communication must leave the lab (at least now and again)

Gave a talk last week at Pew Charitable Trusts, which is keenly interested in how their various projects can benefit from evidence-based science communication. Slides here.

Main points:

1. Group conflict over policy-relevant science is not due to limitations on individual rationality. Rather, it reflects the consequences of a polluted science-communication environment, in which the entanglement of group identity in contested factual positions forces people to choose between being who they are and knowing what's known by science. In such an environment it is perfectly rational for an ordinary member of the public to choose the former: his or her personal actions cannot meaningfully contribute to mitigating (or aggravating) societal risks (e.g., climate change); yet because of what positions on such issues have come to signify about who one is and whose side one is on in acrimonious cultural status conflict, he or she can pay a steep reputational cost for forming beliefs contrary to the ones that prevail in that person's cultural group.

Fixing the science communication environment requires communication strategies that dissolve the conflict between the two things people do with their reason -- be who they are culturally speaking, and know what is known by science.

2. The two-channel model of science communication is one strategy for disentangling identity and positions on societal risks. According to the model, individuals process scientific information along both a content channel, where the issue is the apparent validity of the information, and a social-meaning channel, which addresses whether accepting such information is consistent with one's identity. The CCP study reported in Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C. & Braman, D. Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication. Annals of the American Academy of Political and Social Science 658, 192-222 (2015), illustrates this point: after reading a news story that stressed the need for greater carbon emission limits, individuals culturally disposed to climate skepticism reacted closed-mindedly to evidence of climate change; those who first read a story on the call for greater research on geoengineering, in contrast, responded more open-mindedly to the same climate-change research. The difference can plausibly be linked to the stories' impact in threatening and affirming the group identity, respectively, of those who are culturally disposed to climate skepticism.

3. It’s time to get out of the lab and get into the field. The two-channel model of science communication is just that—a model of how science communication dynamics work.  It doesn’t by itself tell anyone exactly what he or she should do to promote better public engagement with controversial forms of decision-relevant science in particular circumstances.  To figure that out, social scientists, working with field communicators, must collaborate to determine through additional empirical study how positive results in the lab can be reproduced in the field.  

There are more plausible accounts of how to apply such study in real-world circumstances than can possibly be true—just as there were (and still are) more accounts of why public conflict over science exists in the first place. Just as valid empirical testing was needed to extract the true mechanisms from the sea of the merely plausible in the lab, so valid empirical testing is needed to extract the true accounts of how to make science communication work in the real world.

CCP’s local-government and science filmmaking initiatives are guided by that philosophy. The great work that is being done by Pew-supported scientists and science advocates deserves the same sort of evidence-based science communication support.

Tuesday
Feb072017

Science of Science Communication seminar: Session 3 reading list

Okay okay-- here it is!