Who "falls for" fake news? Apparently no one.
Thursday, October 25, 2018 at 1:17AM
Dan Kahan

A few people have asked me what I think of Pennycook, G. & Rand, D.G., “Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning,” Cognition (2018), https://doi.org/10.1016/j.cognition.2018.06.011.

In general, I like it. The topic is important, and the claims and analyses are interesting.

But here are a couple of problems.

First, the sample is not valid. 

The study was administered to Amazon Mechanical Turk (MTurk) workers, who for well-known reasons are not suitable for studies of the interaction of political identity and information processing (e.g., Krupnikov & Levine 2014).

But the real problem with the sample is something even more fundamental: the subjects in the study do not represent the individuals whose behavior the paper purports to be modeling.

Exposure to “fake news” is not something that occurred with equal probability to everyone in the general population, or even to everyone on Facebook or Twitter. Indeed, it was concentrated in a relatively small group of highly conservative individuals (Guess, Nyhan & Reifler 2018).

If one wants to draw inferences, then, about “who falls for fake news” in the real world, one needs to sample from that segment of the population. Its members necessarily share some distinctive disposition to consume an unusual form of political communication (or miscommunication).  It is conceivable that motivated reasoning figures in the propensity of this class’s members to “fall for” fake news even if it doesn’t in a convenience sample whose members have been recruited without regard for this distinctive disposition.
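To make the point concrete, here is a toy simulation (the numbers are invented for illustration; they come neither from P&R nor from Guess, Nyhan & Reifler). It assumes motivated reasoning boosts the perceived accuracy of identity-congruent fake news only within the small, distinctively disposed group that actually consumes such material; in a convenience sample recruited without regard to that disposition, the effect is diluted nearly to nothing.

```python
# Illustrative sketch only: an effect confined to the fake-news-consuming
# subpopulation can all but disappear in an unselected convenience sample.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Assumption for illustration: ~10% of people have the disposition that
# leads to fake-news exposure (exposure is concentrated, not universal).
exposed_type = rng.random(N) < 0.10

# Assumed effect structure: perceived accuracy of a congenial fake headline
# (roughly a 1-4 scale) rises with identity congruence, but only among the
# exposed type; everyone else hovers near "not accurate."
congruence = rng.random(N)                      # 0 = identity-hostile, 1 = congenial
accuracy = 1.5 + rng.normal(0, 0.4, N)          # baseline rating
accuracy += np.where(exposed_type, 1.5 * congruence, 0.0)

def slope(x, y):
    """OLS slope of y on x (np.polyfit returns [slope, intercept])."""
    return np.polyfit(x, y, 1)[0]

print("convenience sample  :", round(slope(congruence, accuracy), 2))
print("exposed-type sample :", round(slope(congruence[exposed_type],
                                           accuracy[exposed_type]), 2))
# The motivated-reasoning slope is large within the relevant subpopulation
# but shrinks to roughly a tenth of its size in the unselected sample.
```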

Second, the data do not support the key inference that P&R draw.

P&R conclude that people who score low on a variant of the Cognitive Reflection Test (CRT) are more likely to “fall for” fake news. But in fact, their own evidence shows that no one was falling for fake news:

[Figure omitted: accuracy ratings of fake news headlines by low-CRT (“intuitive”) vs. high-CRT (“deliberative”) subjects]

As the Figure demonstrates, subjects scoring low on the CRT (“intuitive”) and those scoring high (“deliberative”) both rated the fake news headlines, on average, as lacking accuracy; the difference between them lay only in how emphatically they rejected the headlines’ accuracy.

This underscores the lesson that a “significant” correlation can be insufficient to justify an inference from the data when the variance explained occurs over a range inconsistent with the study hypothesis (Dixon & Jones 2015).
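A toy example (again with invented numbers, not P&R’s data) shows how this can happen: CRT scores can correlate “significantly” with accuracy ratings even though every group’s mean rating stays on the “not accurate” side of the scale, so the variance lies entirely in how strongly subjects reject the headlines.

```python
# Illustrative sketch only: a reliable CRT-accuracy correlation can coexist
# with *no one* rating fake headlines as accurate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 800

crt = rng.integers(0, 8, n)                       # 7-item CRT-style score (0-7)
# Accuracy rating on a 1-4 scale (1 = "not at all accurate", 4 = "very
# accurate"). Assume higher CRT -> somewhat lower ratings, but the whole
# distribution stays below the scale midpoint of 2.5.
rating = np.clip(1.9 - 0.08 * crt + rng.normal(0, 0.35, n), 1, 4)

r, p = stats.pearsonr(crt, rating)
print(f"correlation r = {r:.2f}, p = {p:.1e}")
print(f"mean rating, low CRT (<=2):  {rating[crt <= 2].mean():.2f}")
print(f"mean rating, high CRT (>=5): {rating[crt >= 5].mean():.2f}")
print(f"share rating fake news on the 'accurate' side (>2.5): "
      f"{(rating > 2.5).mean():.0%}")
# A highly 'significant' negative correlation, yet both groups sit well
# below the scale midpoint: low-CRT subjects are not "falling for" the
# headlines; they are merely rejecting them less emphatically.
```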

Refs

Dixon, R.M. & Jones, J.A. Conspiracist Ideation as a Predictor of Climate-Science Rejection: An Alternative Analysis. Psychol Sci 26, 664-666 (2015).

Guess, A., Nyhan, B. & Reifler, J. Selective Exposure to Misinformation: Evidence from the consumption of fake news during the 2016 U.S. presidential campaign. Working paper (2018), at http://www.dartmouth.edu/~nyhan/fake-news-2016.pdf.

Krupnikov, Y. & Levine, A.S. Cross-Sample Comparisons and External Validity. Journal of Experimental Political Science 1, 59-80 (2014).

 
