What does & doesn't count as valid evidence of "ideologically motivated reasoning" & "symmetry" of the same
A friend wrote to me posing a question, implicit in which are a bunch of issues about what counts as valid evidence of motivated reasoning & the symmetry of it across the left-right "spectrum" -- or cultural worldview spectra.
I've addressed many of these issues, most more than once, in various posts.
Indeed, I have on many occasions noted that almost everything I say is something I've said before, including that I've already noted on many occasions that everything I have to say is something I seem already to have said before. I think this is just sort of normal, actually, when one is engaged in a sort of "rolling conversation" w/ a fuzzy set of participants whose aim is reciprocal exchange of ideas for mutual enlightenment, & I should just stop talking about this.
That's the first time I've said that, but I'm sure not the last....
But in any case, I thought I'd share this particular exchange. Maybe it will be clearer or more accessible than some of the others or simply increase the likelihood that someone who can get value from my views (very possibly by being able to see more clearly why he or she thinks I've made an error!) will find & get value out of these reflections on the nature of what sorts of study designs support inferences on "ideologically motivated reasoning" and asymmetry.
I want to say that your research has found that more numerate people are more biased on both ends of the political spectrum, but my recollection is that what you find is actually that more numerate people do not believe more in the reality of climate change. My question is: Have you looked at the interaction – e.g., done a median split on numeracy and then compared the polarization graphs between the numerate and innumerate?
I don't think what you've asked me to show you can support the inference that any particular form of reasoning proficiency ("science literacy," "numeracy," "cognitive reflection" etc.) magnifies ideologically motivated reasoning "symmetrically" (let's say) with respect to ideology.
But I'll show you what you asked to see first, before explaining why.
A. "Looking at" the magnification of polarization conditional on science comprehension
There are a variety of ways to graphically display what you are asking for--a view, essentially, of the differential in polarization at different levels of reasoning proficiency.
I think what I've done below -- splitting the "ideology" predictor at its median & running a lowess for each half of the sample separately against science comprehension -- is the best approach, or in any case better than splitting both continuous measures and giving you two pairs of percentages (for left-leaning & right-leaning at "below" & "above" avg); this way you retain the information benefit of the continuous science-comprehension measure.
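The split-and-smooth idea just described can be sketched in a few lines. This is a crude stand-in -- binned proportions on simulated data rather than a true lowess, and all variable names and parameter values here are illustrative, not the study's actual ones:

```python
# Split the sample at the ideology median, then compute the proportion
# accepting human-caused climate change within science-comprehension
# bins for each half. (A lowess smoother does this more gracefully;
# the data below are simulated to show growing polarization.)
import math
import random

random.seed(0)
n = 4000
data = []
for _ in range(n):
    osi = random.gauss(0, 1)       # science comprehension (SD units)
    conserv = random.gauss(0, 1)   # ideology (higher = more right-leaning)
    # Let ideology's effect on acceptance grow with comprehension:
    slope = 0.1 + 0.45 * (min(max(osi, -1.0), 1.0) + 1.0)
    p = 1 / (1 + math.exp(conserv * slope))
    data.append((osi, conserv, 1 if random.random() < p else 0))

median_c = sorted(d[1] for d in data)[n // 2]
left = [d for d in data if d[1] <= median_c]
right = [d for d in data if d[1] > median_c]

def binned_props(rows, edges=(-3, -1, 0, 1, 3)):
    """Proportion accepting within each science-comprehension bin."""
    out = []
    for lo, hi in zip(edges, edges[1:]):
        cell = [r[2] for r in rows if lo <= r[0] < hi]
        out.append(round(sum(cell) / len(cell), 2) if cell else None)
    return out

print("left: ", binned_props(left))
print("right:", binned_props(right))
```

The gap between the two printed rows widens from the lowest comprehension bin to the highest, which is the pattern the lowess plot displays.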
These are the same data from your slide 7.
With the lowess, one can see pretty readily that the gap between "left-" & "right-leaning" respondents gets progressively larger from -1 SD (16th percentile) to +1 SD (84th) on OSI_2.0 & then pretty much levels out.
(As you know, "OSI_2.0" is a 1-dimensional science-comprehension measure that consists of "science literacy," numeracy & cognitive reflection items. It was formed using a two-parameter logistic (2PL) item response theory model. For details, see 'Ordinary Science Intelligence': A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication, with Notes on 'Belief in' Evolution and Climate Change.)
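For readers unfamiliar with it, the 2PL model mentioned in that parenthetical has a simple functional form; here is a minimal sketch (the item parameters are made up, not OSI_2.0's actual estimates):

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) item response function:
    probability that a respondent with latent ability theta answers
    correctly an item with discrimination a and difficulty b."""
    return 1 / (1 + math.exp(-a * (theta - b)))

# Illustrative item: moderately discriminating, average difficulty.
# A respondent 1 SD above average has about an 82% chance of a
# correct response; one at exactly the item's difficulty has 50%.
print(round(p_correct(theta=1.0, a=1.5, b=0.0), 2))  # 0.82
print(p_correct(theta=0.0, a=1.5, b=0.0))            # 0.5
```

Fitting such a model across all the items yields the single latent "science comprehension" score used on the x-axis of the plots above.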
B. But what are we looking at?
So if one is trying to get the practical point across, this justifies saying things like, "polarization tends to increase, not abate, as individuals with opposing political or cultural outlooks become more proficient in making sense of scientific information," etc.
But one can't on this basis infer that motivated reasoning is being magnified conditional on reasoning proficiency-- or as you put it, that "more numerate people are more biased on both ends of the political spectrum."
The question-- is human-caused climate change occurring?-- is a factual one that presumably has a correct answer. Thus, one "side" -- "liberals" or "conservatives" -- is presumably becoming more likely to get the correct answer as reasoning proficiency increases.
It is thus possible that motivated reasoning is increasing conditional on reasoning proficiency for the side that is becoming more likely to get the "wrong answer" but dissipating conditional on reasoning proficiency for the side that is becoming more likely to get the right answer!
That inference isn't logically compelled, of course. If one is predisposed to believe something that happens to be true, then motivated reasoning will increase the likelihood of "getting the right answer."
But in that case, your getting the right answer won't prove you are smart; it will show only that you were lucky, at least on that particular issue.
The point, though, is that the evidence we are looking at above is equally consistent with the inference that motivated reasoning is being magnified by enhanced reasoning proficiency and the inference that ideologically motivated reasoning is "asymmetric" with respect to ideology.
C. Observing what we really are trying to figure out
There is, I think, only one way to determine whether greater polarization conditional on greater reasoning proficiency is being caused by an ideologically symmetric (more or less) magnification of motivated reasoning: by looking at how people reason independently of whether they are getting the right answer.
What we need to see is how biased or unbiased the reasoning of those on both "sides" is as each side's members display greater reasoning proficiency.
I'll show you results from two studies that bear on this.
1. Motivated system 2 reasoning
In the first (Kahan 2013), subjects evaluated "evidence" of the validity of the cognitive reflection test as a measure of "reflective & open-minded" reasoning. The experimental manipulation was the representation that those who score higher are more likely or instead less likely to accept evidence of human-caused global warming.
One might like to use a valid test of reflective reasoning (particularly an objective, performance-based one like CRT, say, as opposed to the self-report ones like "need for cognition" that a fair number of researchers persist in using despite their dubious validity) to test the oft-asserted claim that "right-leaning" individuals are more "dogmatic" and "closed-minded" etc. than "left-leaning" ones.
But if one is moved to selectively credit or discredit evidence of the validity of an open-mindedness test based on whether it supports the conclusion that those who share one's own ideology are "more open-minded and reflective" than are one's ideological opposites, one's own ideologically motivated reasoning will prevent one from being able to carry out such a test.
Indeed, if both "left-leaning" and "right-leaning" individuals display this effect, then we can conclude that both sides are disposed to be unreflective & closed-minded in their thinking on this very issue.
That's what we see, of course:
How likely subjects were to credit the evidence that the CRT was "valid" in this study was conditional on the interaction of experimental treatment and ideology: the more conservative subjects were, the more likely they were to conclude the CRT was valid in the "believers score lower" condition and invalid in the "skeptics score lower" one; and vice versa as subjects became more liberal.
This inference doesn't depend on the CRT being an "open-mindedness" measure: the design would have worked just as well if the measure whose validity was being assessed had been a fictional "open-mindedness" test.
Nevertheless, the effect observed is conditional on the subjects' CRT scores: the higher their score, the more likely they were to display this bias.
Accordingly, we can see that the form of critical-reasoning proficiency measured by CRT is indeed magnifying ideologically biased information processing.
2. Motivated numeracy
In the second study (Kahan et al. 2013), subjects performed a standard "covariance detection" problem.
The problem is diagnostic of a tendency to over-rely on heuristic reasoning -- here, trying to infer causation by examining the differential between the number who "got better" & the number who "got worse" in the fictional study's "treatment" condition alone. That approach generates the wrong answer; the correct one is to compare the ratio of "got better" to "got worse" across both the treatment and control conditions.
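A worked version of the covariance-detection logic makes the trap concrete. The counts below are made up for illustration, not the study's actual stimulus numbers:

```python
# 2x2 outcome table for a fictional skin-treatment study (made-up counts)
better_treated, worse_treated = 223, 75
better_untreated, worse_untreated = 107, 21

# Heuristic shortcut: look only at the treated group.
# 223 > 75, so the treatment "works" -- the tempting but wrong inference.
heuristic_says_works = better_treated > worse_treated

# Correct approach: compare improvement *rates* across both groups.
rate_treated = better_treated / (better_treated + worse_treated)          # ~0.75
rate_untreated = better_untreated / (better_untreated + worse_untreated)  # ~0.84
actually_works = rate_treated > rate_untreated

print(heuristic_says_works, actually_works)  # True False
```

With these counts the untreated group improved at a higher rate, so the treatment did not work even though "got better" outnumbers "got worse" among the treated.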
As expected, in the control condition of our study (effectiveness of a new skin-rash treatment), there was a very strong correlation between correctly answering the problem and numeracy (indeed, one has to be at about the 90th percentile before there is even a 70% chance of getting it right).
But in the treatment condition of our study -- in which the outcome of a "gun control ban" experiment was manipulated-- subjects higher in numeracy were more likely to get the correct answer only if the data, properly interpreted, supported the position consistent with their own ideology.
So again, we see that critical reasoning is being used opportunistically: evidence is being assessed correctly depending on whether it supports an ideologically congenial outcome.
We see, too, that critical reasoning is magnifying this effect; subjects who are higher in numeracy are reliably summoning their greater proficiency to resist heuristic reasoning only when heuristic reasoning generates an uncongenial answer.
And finally, we are seeing that the effect is symmetric -- or in any case, we see it on both the left and the right. (If you want to discuss whether it is "bigger" on the right, I'm happy to go into that; the answer is actually "no," although many people perceive that it is!)
3. Lesson: Manipulate motivating stake, measure "likelihood ratio"
Note that in both of these experiments, the "high proficiency" subjects are "more polarized" than their "low proficiency" counterparts. They are "better" -- more reliable -- at fitting the evidence to their ideological predispositions.
If this is how people process information in the world, then we will see, in an observational study, that those higher in proficiency are more polarized. We'll be able to infer, moreover, that this is the result of the magnification of biased information processing & not a result of one "side" getting the "right answer."
Indeed, the whole point of the experimental design was to unconfound quality of reasoning from getting the "right answer." This was done, in effect, by manipulating the motivational stake of the subjects to credit one and the same piece of evidence.
In Bayesian terms, we are demonstrating that subjects opportunistically adjust the likelihood ratio in response to an identity-protective motivation unrelated to assessing the evidentiary weight of the new information (Kahan 2015).
This is a more general point: studies that purport to show "motivated reasoning" or "biased assimilation" by looking at the equivalent of "posterior probabilities" are almost always invalid. Their results are consistent with the inference that "one side is biased," or even that "neither side is," because differences in opinion after review of evidence are consistent with differences in priors or with pre-treatment effects (consideration of the evidence in question, or its equivalent, before the study). One should instead manipulate the stake the subjects have in the outcome & assess how that affects the likelihood ratio assigned to one and the same piece of evidence -- conceptually speaking (Kahan 2013).
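A toy calculation shows why posteriors alone can't settle the question. Two groups can end up with different posteriors after seeing identical evidence either because their priors differ (no bias in processing) or because they opportunistically assign the evidence different likelihood ratios (biased processing). The numbers here are illustrative only:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds x LR."""
    return prior_odds * likelihood_ratio

# Case 1: different priors, identical (unbiased) likelihood ratio
a = posterior_odds(prior_odds=4.0, likelihood_ratio=2.0)   # 8.0
b = posterior_odds(prior_odds=0.25, likelihood_ratio=2.0)  # 0.5

# Case 2: identical priors, opportunistically adjusted likelihood ratios
c = posterior_odds(prior_odds=1.0, likelihood_ratio=8.0)   # 8.0
d = posterior_odds(prior_odds=1.0, likelihood_ratio=0.5)   # 0.5

# Posteriors alone cannot tell the two cases apart:
print((a, b) == (c, d))  # True
```

Only by fixing the evidence and manipulating the subjects' stake in the outcome -- and thereby isolating the likelihood ratio they assign -- can a design distinguish the two.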
Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication, with Notes on “Belief in” Evolution and Climate Change. Cultural Cognition Project Working Paper No. 112 (2014).