Just when I thought I had finally gotten the infernal “asymmetry thesis” (AT) out of my system once and for all, this hobgoblin of the science communication problem has re-emerged with all the subtlety and charm of a bad case of shingles.
AT, of course, refers to the claim that ideologically motivated reasoning (of which cultural cognition is one species or conception) is not “symmetric” across the ideological spectrum (or cultural spectra) but rather is concentrated in individuals of a right-leaning or conservative (or, in cultural cognition terms, “hierarchical”) disposition.
It is most conspicuously associated with the work of the accomplished political psychologist John Jost, who finds support for it in the correlation between conservatism and various self-report measures of “dogmatic” thinking. It is also the animating theme of Chris Mooney’s The Republican Brain, which presents an elegant and sophisticated synthesis of the social science evidence that supports it.
I don’t buy AT. I’ve explained why 1,312 times in previous blogs, but basically AT doesn’t cohere with the best theory of politically motivated reasoning and is not supported by–indeed, is at odds with–the best evidence of how this dynamic operates.
The best theory treats politically motivated reasoning as a form of identity-protective cognition.
People have a big stake–emotionally and materially–in their standing in affinity groups consisting of individuals with like-minded goals and outlooks. When positions on risks or other policy-relevant facts become symbolically identified with membership in and loyalty to those groups, individuals can thus be expected to engage all manner of information–from empirical data to the credibility of advocates to brute sense impressions–in a manner that aligns their beliefs with the ones that predominate in their group.
The kinds of affinity groups that have this sort of significance in people’s lives, however, are not confined to “political parties.” People will engage information in a manner that reflects a “myside” bias in connection with their status as students of a particular university, and in connection with myriad other groups important to their identities.
Because these groups aren’t either “liberal” or “conservative”–indeed, aren’t particularly political at all–it would be odd if this dynamic manifested itself in an ideologically skewed way in settings in which the relevant groups are ones defined in part by commitment to common political or cultural outlooks.
The proof offered for AT, moreover, is not convincing. Jost’s evidence, for example, doesn’t consist of motivated-reasoning experiments, any number of which (like the excellent ones of Jarret Crawford and his collaborators) have reported findings that display ideological symmetry.
Rather, it is based on correlations between political outlooks and self-report measures of “open-mindedness,” “dogmatism” & the like.
These measures–ones that consist, literally, in people’s willingness to agree or disagree with statements like “thinking is not my idea of fun” & “the notion of thinking abstractly is appealing to me”–are less predictive of the disposition to critically interrogate one’s impressions based on available information than objective or performance-based measures like the Cognitive Reflection Test and Numeracy. And these performance-based measures don’t meaningfully correlate with political outlooks.
In addition, while there is plenty of evidence that the disposition to engage in reflective, critical reasoning predicts resistance to a wide array of cognitive biases, there is no evidence that these dispositions predict less vulnerability to politically motivated reasoning.
On the contrary, there is mounting evidence that such dispositions magnify politically motivated reasoning. If the source of this dynamic is the stake people have in forming beliefs that are protective of their status in groups, then we might expect people who know more and are more adept at making sense of complex evidence to use these capacities to promote the goal of forming identity-protective beliefs.
CCP studies showing that cultural polarization on climate change and other contested risk issues is greater among individuals who are higher in science comprehension, and that individuals who score higher on the Cognitive Reflection Test are more likely to construe evidence in an ideologically biased pattern, support this view.
The Motivated Numeracy experiment furnishes additional support for this hypothesis. In it, we instructed subjects to perform a reasoning task–covariance detection–that is known to be a highly discerning measure of the ability and disposition of individuals to draw valid causal inferences from data.
We found that when the problem was styled as one involving the results of an experimental test of the efficacy of a new skin-rash treatment, individuals who score highest in Numeracy–a measure of the ability to engage in critical reasoning on matters involving quantitative information–were much more likely to correctly interpret the data than those who had low or modest Numeracy scores.
But when the problem was styled as one involving the results of a gun-control ban, those subjects highest in Numeracy did better only when the data presented supported the result (“decreases crime” or “increases crime”) that prevails among persons with their political outlooks (liberal Democrats and conservative Republicans, respectively). When the data, properly construed, threatened to trap them in a conclusion at odds with their political outlooks, the high-Numeracy subjects either succumbed to a tempting but logically specious response to the problem or worked extra hard to pry open some ad hoc, confabulatory escape hatch.
As a result, higher-Numeracy subjects ended up even more polarized when considering the same data–data that in fact objectively supported one position more strongly than the other–than subjects who were less adept at making sense of empirical information.
But … did this result show an ideological asymmetry?!
Lots of people have been telling me they see this in the results. Indeed, one place where they are likely to do so is in workshops (vettings of the paper, essentially, with scholars, students and other curious people), where someone will almost always say, “Hey, wait! Aren’t conservative Republicans displaying a greater ‘motivated numeracy’ effect than liberal Democrats? Isn’t that contrary to what you said you found in x paper? Have you called Chris Mooney and admitted you were wrong?”
At this point, I feel like I’m talking to a roomful of people with my fly open whenever I present the paper!
In fact, I did ask Mooney what he thought — as soon as we finished our working paper. I could see how people might view the data as displaying an asymmetry and wondered what he’d say.
His response was “enh.”
He saw the asymmetry, he said, but told me he didn’t think it was all that interesting in relation to what the study suggested was the vulnerability of all the subjects, regardless of their political outlooks, to a substantial degradation in reasoning when confronted with data that disappointed their political predispositions–a point he then developed in an interesting Mother Jones commentary.
That’s actually something I’ve said in the past, too–that even if there were an “asymmetry” in politically motivated reasoning, the problem is clearly more than big enough, for everyone, to be a serious practical concern.
Well, the balanced, reflective person that he is, Mooney is apparently able to move on, but I, in my typical OCD-fashion, can’t…
Is the asymmetry really there? Do others see it? And how would they propose that we test what they think they see so that they can be confident their eyes are not deceiving them?
The location of the most plausible sighting–and the one most people point to–is Figure 6, which presents a lowess plot of the raw data from the gun-control condition of the experiment:
What this shows, essentially, is that the proportion of the subjects (about 800 of them total) who correctly interpreted the data was a function of both Numeracy and political outlook. As Numeracy increases, the proportion of subjects selecting the correct answer increases dramatically but only when the correct answer is politically congenial (“decreases crime” for liberal Democrats, and “increases crime” for conservative Republicans; subjects’ political outlooks here are determined based on the location of their score in relation to the mean on a continuous measure that combined “liberal-conservative” ideology & party identification).
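For readers who want to see how a figure of this kind is built, here is a minimal sketch of the lowess approach on simulated data. Everything below–the variable names, the sample size, and the built-in relationship between Numeracy and correct answers–is my assumption for illustration; none of it is the actual CCP dataset.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Simulated stand-in for the raw experimental data (NOT the CCP data):
# correct answers rise with numeracy only when the displayed result is
# politically congenial, mimicking the pattern described in the text.
rng = np.random.default_rng(0)
n = 800
numeracy = rng.uniform(0, 9, n)
conservative = rng.integers(0, 2, n)   # 0 = liberal Dem, 1 = conservative Rep
congenial = rng.integers(0, 2, n)      # does the shown result fit the outlook?
p_correct = np.where(congenial == 1, 0.2 + 0.07 * numeracy, 0.25)
correct = rng.binomial(1, p_correct)

# One lowess curve per (outlook, condition) cell, as in a Figure-6-style plot
for cons in (0, 1):
    for cong in (0, 1):
        mask = (conservative == cons) & (congenial == cong)
        fit = lowess(correct[mask], numeracy[mask], frac=0.6)
        # fit[:, 0] holds sorted numeracy values,
        # fit[:, 1] the locally smoothed proportion correct
```

Plotting each `fit` array then yields one curve per group-condition cell, which is what makes the visual comparison of “congenial” and “uncongenial” slopes possible.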
But is there a difference in the pattern for liberal Democrats, on the one hand, and conservative Republicans, on the other?
Those who see the asymmetry tend to point to the solid black circle. There, in the middling range of Numeracy, conservative Republicans display a difference in their likelihood of getting the correct answer based on which experimental condition (“crime increases” vs. “crime decreases”) they were assigned to, but liberal Democrats don’t.
A ha! Conservative Republicans are displaying more motivated reasoning!
But consider the dashed circle to the right. Now we can see that conservative Republicans are becoming slightly more likely to interpret the data correctly in their ideologically uncongenial condition (“crime decreases”) — whereas liberal Democrats aren’t budging in theirs (“crime increases”).
A ha^2! Liberal Democrats are showing more motivated Numeracy–the disposition to use quantitative reasoning skills in an ideologically selective way!
Or we are just looking at noise. The effects of an experimental treatment will inevitably be spread out unevenly across the subjects exposed to it. If we split the sample up into parts & scrutinize the effect separately in each, we are likely to mistake random fluctuations in the effect for real differences in effect among the groups so specified.
For that reason, one fits to the entire dataset a statistical model that assumes the treatment has a particular effect–the one that informed the experimental hypothesis. If the model fits the real data well enough (as reflected in conventional standards like p < 0.05), then one can treat what one sees–if it looks like what one expected–as a corroboration of the study prediction.
We fit a multivariate regression model to the data that assumed the impact of politically motivated reasoning (reflected in the difference in likelihood of getting the answer correct conditional on its ideological congeniality) would increase as subjects’ Numeracy increases. The model fit the data quite well, and thus, for us, corroborated the pattern we saw in Figure 6, which is one in which politically motivated reasoning and Numeracy interact in the manner hypothesized.
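In conceptual terms, such a model is a logistic regression in which experimental condition and Numeracy interact. The sketch below fits that kind of specification to simulated data; the column names, the data-generating process, and the simplified two-way form (congeniality × Numeracy, rather than the paper’s full specification) are all my assumptions, not a reproduction of the paper’s Table 1.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data (NOT the CCP dataset): a "motivated numeracy" effect is
# built in so the interaction term has something to recover.
rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "numeracy": rng.uniform(0, 9, n),
    "congenial": rng.integers(0, 2, n),  # result shown fits subject's outlook
})
logit_p = -1.5 + 0.15 * df.numeracy + 0.35 * df.congenial * df.numeracy
df["correct"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Symmetric model: congeniality's effect grows with numeracy, and that
# growth is assumed to be uniform across the political spectrum.
m = smf.logit("correct ~ numeracy * congenial", data=df).fit(disp=0)
print(m.params["numeracy:congenial"])  # the hypothesized interaction term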
The significance of the model is hard to extract from the face of the regression table that reports it, but here is a graphical representation of what the model predicts we should see among subjects of different political outlooks and varying levels of Numeracy in the various experimental conditions:
The “peaks” of the density distributions are, essentially, the point estimates of the model, and the slopes of the curves (their relative surface area, really) are a measure of the precision of those estimates.
The results display Motivated Numeracy: assignment to the “gun control” conditions creates political differences in the likelihood of getting the right answer relative to the assignment to the “skin treatment” conditions; and the size of those differences increases as Numeracy increases.
Now you might think you see asymmetry here too! As was so for the figure depicting the raw data, this Figure suggests that low-Numeracy conservative Republicans’ performance is more sensitive to the experimental assignment. But unlike the raw-data lowess plot, the plotted regression estimates suggest that the congeniality of the data had a bigger impact on the performance of higher-Numeracy conservative Republicans, too!
But this is not a secure basis for inferring asymmetry in the data.
As I indicated, the model that generated these predicted probabilities included parameters that corresponded to the prediction that political outlooks, Numeracy, and experimental condition would all interact in determining the probability of a correct response. The form of the model assumed that the interaction of Numeracy and political outlooks would be uniform or symmetric.
The model did generate predictions in which the impact of politically motivated reasoning differed for conservative Republicans and liberal Democrats at low and high levels of Numeracy.
But that difference is attributable — necessarily — to other parameters in the model, including the point along the Numeracy scale at which the probability of the correct answer changes dramatically (the shape of the “sigmoid” function in a logit model), and the tendency of all subjects, controlling for ideology, to get the right answer more often in the “crime increases” condition.
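The sigmoid point is easy to see in isolation: a single, perfectly symmetric coefficient on the log-odds scale translates into different probability changes depending on where a group’s baseline sits on the logistic curve. A toy illustration (the numbers are arbitrary, chosen only to make the point):

```python
import math

def logistic(x: float) -> float:
    """Standard logistic (sigmoid) function."""
    return 1 / (1 + math.exp(-x))

# One shared "congeniality" effect on the log-odds scale...
effect = 1.0

# ...applied to two groups with different baseline log-odds of a
# correct answer (arbitrary values for illustration).
for base in (-2.0, 0.0):
    delta = logistic(base + effect) - logistic(base)
    print(f"baseline log-odds {base:+.1f}: probability shift {delta:.2f}")
    # prints shifts of 0.15 and 0.23 respectively
```

The same one-unit log-odds effect moves one group’s probability by about 0.15 and the other’s by about 0.23, with no asymmetry anywhere in the model itself.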
I’m not saying that the data from the experiment don’t support AT.
I’m just saying that to support the inference that they do, one would have to specify a statistical model that reflected the hypothesized asymmetry and see whether it fits the data better than the one that we used, which assumes a uniform or symmetric effect.
I’m willing to fit such a model to the data and report the results. But first, someone has to tell me what that model is! That is, they have to say, in conceptual terms, what sort of asymmetry they “see” or “predict” in this experiment, and what sort of statistical model reflects that sort of pattern.
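One generic way to operationalize that comparison is a nested-model likelihood-ratio test: fit the symmetric model, then a second model that adds terms letting the effect differ by side of the spectrum, and ask whether the extra terms earn their keep. Here is a sketch on simulated data; the particular asymmetry term shown (letting the Numeracy × congeniality interaction differ for conservatives) is just one of the possible forms, and nothing below uses the actual study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Simulated data with a SYMMETRIC motivated-numeracy effect built in,
# so the asymmetry terms below should not improve fit much.
rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({
    "numeracy": rng.uniform(0, 9, n),
    "conservative": rng.integers(0, 2, n),  # crude left/right split
    "congenial": rng.integers(0, 2, n),
})
logit_p = -1.5 + 0.15 * df.numeracy + 0.35 * df.congenial * df.numeracy
df["correct"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Symmetric model vs. a variant that lets every effect differ by outlook
sym = smf.logit("correct ~ numeracy * congenial", data=df).fit(disp=0)
asym = smf.logit("correct ~ numeracy * congenial * conservative",
                 data=df).fit(disp=0)

lr = 2 * (asym.llf - sym.llf)           # likelihood-ratio statistic
df_diff = asym.df_model - sym.df_model  # number of extra parameters
p_value = stats.chi2.sf(lr, df_diff)    # small p => asymmetry terms help
```

A small `p_value` would say the asymmetric specification fits better than the symmetric one; a large one would say the apparent asymmetry is consistent with noise.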
Then I’ll apply it, and announce the answer!
If it turns out there is asymmetry here, the pleasure of discovering that the world is different from what I thought will more than offset any embarrassment associated with my previously having announced a strong conviction that AT is not right.
So– have at it!
To help you out, I’ve attached a slide show that sketches out seven distinct possible forms of asymmetry. So pick one of those or if you think there is another, describe it. Then tell me what sort of adjustment to the regression model we used in Table 1 would capture an asymmetry of that sort (if you want to say exactly how the model should be specified, great, but also fine to give me a conceptual account of what you think the model would have to do to capture the specified relationship between Numeracy, political outlooks, and the experimental conditions).
Of course, the winner(s) will get a great prize! Winning, moreover, doesn’t consist in confirming or refuting AT; it consists only in figuring out a way to examine this data that will deepen our insight.
In empirical inquiry, it’s not whether your hypothesis is right or wrong that matters; it’s how you extract a valid inference from observation that makes it possible to learn something.