
Sunday
Jun 18, 2017

Weekend update: bias & unreliability in peer review

Here's a classic on biases of peer reviewers:

The experiment's results are pretty cool. But one conclusion I view as even more alarming than the reviewers' confirmation bias is Mahoney's finding that there's very little "inter-rater reliability" among peer reviewers, something that people who regularly review for & publish in decision-science journals get to observe firsthand. Another instance of the self-measurement paradox, this finding ought to make those in the business pretty nervous about the power of pre-publication peer review to winnow the wheat from the chaff.

One thing I hadn't had occasion to notice or think about previously is the paper's citation history:

[Citation-history chart: a recent upsurge in citations of the paper]

Two possibilities occur to me.  One is that electronic search has made it easier for authors to find this paper.  

The other is that the topic of confirmation bias among reviewers has become much more topical as challenges to the integrity & effectiveness of pre-publication peer review have become more common.  

One way to sort those possibilities out would be to see whether articles of other sorts--ones unrelated to cognitive bias in scientists as reviewers--display this same pattern, in which case the first possibility would seem the stronger explanation. One could also look to see whether other older articles on reviewer confirmatory bias (e.g., Koehler 1993) show a recent-citation upsurge that cuts against the usual steady decline in citations of aging papers.

For what it is worth, with virtually no investigation on my part, I'm willing to bet, oh, $10,000 that papers inquiring into reviewer bias have increased a lot in the last few years in response to anxiety that there are group-conformity influences in the study of politically charged topics (e.g., climate change).

But if anyone else wants to share some back-of-envelope data analysis here, go for it.
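In that spirit, here's a minimal back-of-envelope sketch (in Python) of what such a comparison could look like. All of the per-year counts are made-up placeholders, and the "control" label is hypothetical; you'd substitute real counts gathered by hand from a citation index.

```python
# Back-of-envelope check: does the reviewer-bias paper show a recent
# citation upsurge that a same-vintage control paper doesn't?
# NOTE: all counts are fabricated placeholders for illustration only.

cites = {
    "Mahoney 1977 (reviewer bias)":
        [3, 2, 4, 3, 5, 4, 6, 7, 9, 11, 14, 18, 22, 27, 31, 36, 40],
    "hypothetical same-vintage control":
        [9, 8, 8, 7, 7, 6, 6, 5, 5, 5, 4, 4, 4, 3, 3, 3, 2],
}  # yearly citation counts over a 17-year window

def trend(counts):
    """Crude least-squares slope: citations gained (or lost) per year."""
    n = len(counts)
    xbar = (n - 1) / 2
    ybar = sum(counts) / n
    num = sum((i - xbar) * (c - ybar) for i, c in enumerate(counts))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

for label, counts in cites.items():
    print(f"{label}: {trend(counts):+.2f} cites/yr")
```

A positive slope for the target paper alongside the usual negative slope for its contemporaries would be the pattern to look for; running the same comparison on Koehler (1993) would help separate the two possibilities above.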

It would be interesting, too, to get some color on what sort of thinker Mahoney, who died at age 60 in 2006, was.

BTW, one of the funniest/scariest things in Mahoney's paper is his quotation of this bit of feedback on the study method, which in part involved having some reviewers read papers w/ just methods & w/o results.

"Personally, I don't see how anyone can write the Introduction and Method without first having the Results" (p. 172), the subject stated. 

It's less surprising that an empirical researcher would feel that way than that he or she would so unselfconsciously admit to a style of study presentation that is based on post-hoc storytelling.

References

Koehler, J.J. The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality. Organizational Behavior & Human Decision Processes 56, 28-55 (1993).


Mahoney, M. & DeMonbreun, B. Psychology of the scientist: An analysis of problem-solving bias. Cogn Ther Res 1, 229-238 (1977).  

Thursday
Jun 15, 2017

Travel journal entries

Last week & a half . . .

1. American Academy of Arts & Sciences, Cambridge, MA: Talk on misperceptions & misinformation, based on paper related to the same. Slides here.

2. World Science Festival, NYC. No slides for this panel discussion, but video here.

3. Metcalf Institute, Rhode Island. Ran through “good” and “not so good” explanations of public polarization over (certain) forms of decision-relevant science.  Great program. Slides here.

4. Judges Conference, Second Circuit, US Court of Appeals, Mohonk Mtn House, New Paltz, NY. Discussed hazards of video evidence, which not only can excite motivated reasoning but also distinctively stifles decisionmakers’ perception of the same (some [not all] slides here). Great presentations by co-panelists Jessica Silbey & Hany Farid.

5. Yale bioethics program lecture: Communicating science in a polluted science communication environment.

On deck . . .

1. New perspectives on science & religion, Manchester, England. Will likely highlight findings from a study on the relationship between religiosity, cognitive reflection & belief in human evolution.

2. Symposium: Gene Drive Modified Organisms and Practical Considerations for Environmental Risk Assessments.  My talk is tentatively titled, "Forecasting Emerging Technology Risk Perceptions: Are We There Yet?"

Tuesday
Jun 13, 2017

Science comprehension without curiosity is no virtue, and curiosity without comprehension no vice

For a conference talk in Stockholm in Sept. The "without comprehension" part might be a slight exaggeration, but the point is that people normally recognize valid decision-relevant science w/o understanding it. 25 CCP points for the first person to correctly identify the allusion in the title; hopefully it won't perplex Swedes too much.

It has been assumed (very reasonably) for many years that the quality of enlightened self-government demands a science-literate citizenry (e.g., Miller 1998). Recent research, however, has shown that all manner of reasoning proficiency—from cognitive reflection to numeracy to actively open-minded thinking—magnifies politically motivated reasoning and hence political polarization on policy-relevant science (e.g., Kahan, Peters et al. 2012, 2017; Kahan 2013; Kahan & Corbin 2016). The one science-comprehension-related disposition that defies this pattern is science curiosity, which has been shown to make citizens more amenable to engaging with evidence that challenges their political predispositions (Kahan, Landrum et al. 2017). The presentation will review the relevant research and offer conjectures on its significance, both theoretical and practical.

References

Kahan, D.M. & Corbin, J.C. A note on the perverse effects of actively open-minded thinking on climate-change polarization. Research & Politics 3 (2016).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M., Landrum, A., Carpenter, K., Helft, L. & Hall Jamieson, K. Science Curiosity and Political Information Processing. Political Psychology 38, 179-199 (2017).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Miller, J.D. The measurement of civic scientific literacy. Public Understanding of Science 7, 203-223 (1998).

Wednesday
Jun 7, 2017

Are misconceptions of science & misinformation the *problem* in the Science Communication Problem?...

From Misconceptions, Misinformation, and the Logic of Identity-protective Cognition . . .

This paper investigates the role that “misinformation” and “misconceptions of science” play in political controversies over decision-relevant science (DRS). The surmise that their contribution is large is eminently plausible. Ordinary members of the public, we are regularly reminded (e.g., National Science Foundation 2014, 2016), display only modest familiarity with fundamental scientific findings, and lack proficiency in the forms of critical reasoning essential to science comprehension (Marx et al. 2007; Weber 2006). As a result, they are easily misled by special interest groups, who flood public discourse with scientifically unfounded claims on global warming, genetically modified foods, and other issues (e.g., Hmielowski et al. 2013). I will call this perspective the “public irrationality thesis” (PIT).

The unifying theme of this paper is that PIT itself reflects a misconception of a particular form of science: namely, the science of science communication. One of the major tenets of this emerging body of work is that public controversy over DRS typically originates in identity-protective cognition—a tendency to selectively credit and discredit evidence in patterns that reflect people’s commitments to competing cultural groups (Sherman & Cohen 2002, 2006). Far from evincing irrationality, this pattern of reasoning promotes the interests of individual members of the public, who have a bigger personal stake in fitting in with important affinity groups than in forming correct perceptions of scientific evidence. Indeed, the members of the public who are most polarized over DRS are the ones who have the highest degree of science comprehension, a capacity that they actively employ to form and persist in identity-protective beliefs (Kahan 2015a).

The problem, in short, is not a gullible, manipulated public; it is a polluted science communication environment. The pollution consists of antagonistic social meanings that put individuals in the position of having to choose between using their reason to discern what science knows or using it instead to express their group commitments. Safeguarding the science communication environment from such meanings, and repairing it where protective measures fail, should be the principal aim of those committed to assuring that society makes full use of the vast stock of DRS at its disposal (Kahan 2015b)....

Monday
Jun 5, 2017

Scicomm-centerism: Another “selecting on the dependent variable” saga

Okay, this is an argument that I tried to make on my Saturday panel at the World Science Festival & that went over like a fourth-dimension lead balloon. Surely, this is b/c of my own limitations as a science communicator (studying and doing are very different things!).

Cultural polarization affects science but is not about science!

This point—which I invite the 14 billion readers of this blog to help reduce to a better, more descriptive, more evocative sentence—is of vital importance to science communication because it forecloses many explanations of public science controversies and prescriptions for how they should be addressed.

As is usually the case with concepts like these, the best way to make the point is to demonstrate it.

So consider the CCP study reported in They Saw a Protest (2012).

There, subjects, instructed to play the role of mock jurors in a civil trial, watched a film of a political protest that the police had broken up. The protesters claimed they were involved in peaceful if vigorous debate protected by the First Amendment; the police, in contrast, asserted that the demonstrators had crossed the line into intimidation & coercion, which are not “free speech” for purposes of the U.S. Constitution.

There was an experimental component. We told half the subjects that the protest occurred at an abortion clinic, and that the demonstrators opposed the rights established by Roe v. Wade and its progeny. The other half were told that the protest occurred outside a military recruitment center, and that the demonstrators were criticizing the policy of excluding openly gay and lesbian citizens from the military.

We saw big effects.

Subjects with different cultural values reported seeing different things (protesters blocking pedestrians and screaming in their faces vs. vigorous but peaceful exhortations) if they were assigned to the same condition and thus thought they were watching the same sort of protest.

At the same time, subjects with the same values disagreed with one another on those same facts, and on the proper disposition of the case (an order enjoining the police from interfering with future protests), if they were assigned to different experimental conditions and thus thought they were watching different kinds of protests (anti-abortion vs. anti-"don't ask, don't tell").

These are the same groups that are divided over issues like climate change, nuclear power, fracking etc.

This is a powerful demonstration of how cultural cognition can generate culturally polarized reactions to facts. But obviously the trigger of such conflict had nothing to do with science—nothing to do, that is, with specific science issues or with one or another group’s position on any such issue.
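To put the structure of that result in statistical terms: the signature finding is a crossover interaction between subjects' cultural values and the experimental condition, with the video itself held constant. Here is a minimal sketch of how one would test for such an interaction, run on simulated data of my own devising rather than the study's actual data.

```python
# Simulated illustration of the values x condition crossover in a
# "They Saw a Protest"-style design. NOT the study's data or effect sizes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

df = pd.DataFrame({
    "values": rng.choice([-1, 1], size=n),     # two cultural groups
    "condition": rng.choice([-1, 1], size=n),  # abortion clinic vs. recruitment center
})

# Perceived coercion depends on the *match* between group identity and
# the cause being protested, not on the (identical) video itself.
df["coercion"] = 2.0 * df["values"] * df["condition"] + rng.normal(0, 1, n)

model = smf.ols("coercion ~ values * condition", data=df).fit()
print(model.params)  # the values:condition interaction term dominates
```

The point of the exercise: neither "values" nor "condition" alone predicts the perception; only their product does, which is exactly what "same video, different protests, opposite factual perceptions" amounts to.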

Because the mechanisms at work in the study (identity-protective cognition, in particular) are the same ones at work in debates over what is known about nuclear power, climate change, fracking, the HPV vaccine etc., the study strongly suggests that scholars and activists who are centering their attention exclusively on comprehension of science are making a grievous error.

That is, if what is clearly not a disputed-science conflict provokes exactly the species of motivated reasoning that divides cultural groups on science, then it is implausible to believe that anything intrinsic to science—e.g., the “uncertainty” of science, “trust in science,” “acceptance of the authority of science” etc.—drives cultural polarization on science issues, or that remedies designed specifically to address those kinds of barriers to comprehension of science will have any effect.

This is another instance of the errors of inference one can make if one selects on the dependent variable—that is, populates his or her set of observations with ones that presuppose the truth of the hypothesis being tested.  Here, science communication scholars and practitioners are formulating their explanations of, and prescriptions for, science conflicts without reference to whether the cognitive and social dynamics in question apply in any non-science setting.
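Here is a toy simulation, entirely of my own construction with made-up quantities, of why that sampling strategy misleads: if identity stakes drive polarization in all conflicts, science and non-science alike, an analyst who examines only science conflicts has no variation on the "science" variable and thus no way to discover that science-ness contributes nothing.

```python
# Toy illustration of selecting on the dependent variable: every conflict
# is polarized by identity stakes; "being about science" adds nothing.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

is_science = rng.random(n) < 0.3        # which conflicts involve science
identity_stake = rng.random(n)          # the real driver of polarization
polarization = identity_stake + rng.normal(0, 0.1, n)

# Full sample: science-ness is (correctly) seen to explain nothing.
r = np.corrcoef(is_science, polarization)[0, 1]
print(f"all conflicts: corr(science, polarization) = {r:+.3f}")

# Science-only sample: the non-science baseline has been designed out,
# so a science-specific explanation can never be refuted by these data.
print(f"science-only: mean polarization = {polarization[is_science].mean():.2f}")
```

Nothing in the restricted sample contradicts a "trust in science" story, even though, by construction, science plays no causal role at all.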

What I’m saying here does not imply there aren't solutions to public conflicts over science—only that the solutions that treat science conflicts as unique or as uniquely focused on mechanisms of comprehension of science are bound to be mistaken.

Indeed, once one recognizes that many non-science cultural conflicts exhibit exactly the same sorts of biases in factual perceptions, the range of potential explanations and remedies one has access to is likely to widen, and in any case to become much more accurate and effective, because one will then have access to the empirical work of scholars who’ve been studying pertinent dynamics of cultural polarization outside the setting of science controversies.

What are those researchers discovering?

That’s open to debate, of course. But in my view, the most telling of their findings concern the “law of social proof”—the principle that individuals will generally conform their behavior to that of others with whom they identify and whom they understand to be informed, socially competent actors.

In any case, I’m less concerned with identifying exactly what casting aside the science-controversy blinders will teach us than I am with helping science communicators to understand why they should cast the blinders aside. 

If they did that--if they jettisoned the “scicomm centerism” that is now shaping their work—what they’d be enabled to see would vastly enrich their craft.

Sunday
Jun 4, 2017

Next stop: Metcalf Institute

Next stop . . .


Saturday
Jun 3, 2017

Let's talk about science polarization . . . NY World Science Festival

Looking for something to do on Sat. evening?