Key Insight
Political psychologist Brendan Nyhan and his collaborators Jason Reifler & Peter Ubel just published a really cool paper in Medical Care entitled “The Hazards of Correcting Myths About Health Care Reform.” It shows just how astonishingly resistant the disease of ideologically motivated reasoning is to treatment with accurate information. And like all really good studies, it raises some really interesting questions.
NRU conducted an experiment on the effect of corrections of factually erroneous information originating from a partisan source. Two groups of subjects got a news article that reported on false assertions by Sarah Palin relating to the role of “death panels” in the Obamacare national health plan. One group received in addition a news story that reported that “nonpartisan health care experts have concluded that Palin was wrong.” NRU then compared the perceptions of the two groups.
Well, one thing they found is that the more subjects liked Palin, the more likely they were to believe Palin’s bogus “death panel” claims. Sure, not a big surprise.
They also found that the impact of being shown the “correction” was conditional on how much subjects liked Palin: the more they liked her, the less they credited the correction. Cool, but again not startling.
What was mind-blowing, however, was the interaction of these effects with political knowledge. As subjects became more pro-Palin in their feelings, high political knowledge subjects did not merely discount the “correction” by a larger amount than low political knowledge ones. Being exposed to the “nonpartisan experts say Palin wrong” message actually made high-knowledge subjects with pro-Palin sentiments credit her initially false statements even more strongly than their counterparts in the “uncorrected” or control condition!
The most straightforward interpretation is that for people who have the sort of disposition that “high political knowledge” measures, the “fact check”-style correction itself operated as a cue that the truth of Palin’s statements was a matter of partisan significance, thereby generating unconscious motivation in them to view her statements as true.
There was already plenty of reason to believe that just bombarding people with more and more “sound information” doesn’t neutralize polarization on culturally charged issues like climate change, gun control, nuclear power, etc.
There was also plenty of reason to think that individuals who are high in political knowledge are especially likely to display motivated reasoning and thus to be especially resistant to a simple “sound information” bombardment strategy.
But what NRU show is that things have become so bad in our polarized society that trying to correct partisan-motivated misperceptions of facts can actually make things worse! Responding to partisan misinformation with truth is akin to trying to douse a grease fire with water!
But really, I’d say that the experiment shows only how bad things can potentially get.
First, the NRU experimental design, like all experimental designs, is a model of real-world dynamics. I’d say the real-world setting it is modeling is one in which an issue is exquisitely fraught; Palin & Obamacare are each flammable enough on their own, so when you mix them together you’ve created an atmosphere just a match strike away from an immense combustion of ideologically motivated reasoning.
Still, there is plenty of reason to believe that there are conditions, issues, etc. like that in the world. So the NRU model gives us reason to be very wary of rushing around trying to expose “lies” as a strategy for correcting misinformation. At least sometimes, the study cautions, you could be playing right into the misinformer’s hands.
Actually, I think that this is the scenario on the mind of those who’ve reacted negatively to the proposed use of climate change “truth squads”—SWAT teams of expert scientists who would be deployed to slap down every misrepresentation of climate science made by individuals or groups. The NRU study gives more reason to think those who didn’t like this proposal were right to suspect this device would only amplify the signal on which polarization feeds.
Second, interpreting NRU depends in part on what is being measured by “political knowledge.”
Measured, essentially, with a civics quiz, “political knowledge” is well known to amplify partisanship.
The usual explanation is that people who are “high” in political knowledge literally just know more and hence assign political significance to information in a more accurate and reliable way. This by itself doesn’t sound so bad. People’s political views should reflect their values, and if getting the right fit requires information, then the “high” political knowledge individuals are engaged in better reasoning. Low-knowledge people bumble along and thus form incoherent views.
But that doesn’t seem satisfying when one examines how political knowledge can amplify motivated reasoning. When people engage in ideologically motivated reasoning, they give information the effect that gratifies their values independently of whether doing so generates accurate beliefs. Why would knowing more about political issues make people reason in this biased way?
Another explanation would be that “political knowledge” is actually measuring the disposition to define oneself in partisan terms. In that case, it would make sense to think of high knowledge as diagnostic or predictive of vulnerability to ideologically motivated reasoning. People with strong partisan identities are the ones who experience strong unconscious motivation to use what they know in a way that reinforces conclusions that are ideologically congenial.
Moreover, in that case, being low in “political knowledge” arguably makes one a better civic reasoner. Because one doesn’t define oneself so centrally with respect to one’s ideology or party membership, one gives information weight that more reliably tracks its connection to truth. Indeed, in NRU the “low knowledge” subjects seemed to be responding to “corrections” of misinformation in a normatively more desirable way—assuming what we desire is the reliable recognition and open-minded consideration of valid evidence.
I would say that the “partisan identity” interpretation of political knowledge is almost certainly correct, but that the “knows more, reasons better” interpretation is likely correct too. The theoretical framework that informs cultural cognition asserts that it is rational for people to regard politically charged information in a manner that reliably connects their beliefs to those that predominate in their group, because the cost of being “out of synch” on a contentious matter is likely to be much higher than the cost of being “wrong”—something that on most political issues is costless to individuals, given how little impact their personal beliefs have on policymaking. If so, then we should expect people who “know more” and “reason better” to be more reliable in “figuring out” what the political significance of information is—and thus more likely to display motivated reasoning.