*See* "cognitive reflection" *magnify* (ideologically symmetric) motivated reasoning ... (not for the faint of heart)
So this is in the category of "show me the data, please!"
I'm all for statistical models to test, discipline, and extend inference from experimental (or observational) data.
But I'm definitely against the use of models in lieu of displaying raw data in a manner that shows that there really is a prospective inference to test, discipline, and extend.
Statistics are a tool to help probe and convey information about effects captured in data; they are not a device to conjure effects that aren't there.
They are also a device to promote rather than stifle critical engagement with evidence. But that's another story--one that goes to effective statistical modeling and graphic presentation.
The point I'm making now, and have before, is that researchers who either present a completely perfunctory summary of the raw data (say, a summary of means for an arbitrarily selected number of points for continuous data) or simply skip right over summarizing the raw data and proceed to multivariate modeling are not furnishing readers with enough information to appraise the results.
The validity of the modeling choice in the statistical analysis--and of the inferences that the model supports--can't be determined unless one can *see* the data!
Like I said, I've made that point before.
And all of this is a wind-up for a simple "animated" presentation of the raw data from one CCP study, Kahan, D.M., Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).
That study featured an experiment to determine how the critical reasoning proficiency measured by the Cognitive Reflection Test (CRT) interacts with identity-protective reasoning--the species of motivated reasoning that consists in the tendency of individuals to selectively credit or discredit data in a manner that protects their status within an identity-defining affinity group.
The experiment involved, first, having the subjects take the CRT, a short (3-item) performance-based measure of their capacity and disposition to interrogate their intuitions and preconceptions when engaging information.
It's basically considered the "gold standard" for assessing vulnerability to the sorts of biases that reflect overreliance on heuristic information processing. With some justification, many researchers also think of it as a measure of how willing people are to open-mindedly revise their beliefs in light of empirical evidence, a finding that is at least modestly supported by several studies of how CRT and religiosity interact.
I've actually commented a bit on what I regard as the major shortcoming of CRT: it's too hard, and thus fails to capture individual differences in the underlying critical reasoning disposition among those who likely are in the bottom 50th percentile with respect to it. But that's nitpicking; it's a really really cool & important measure, and vastly superior to self-report measures like "Need for Cognition," "Need for Closure" and the like.
After taking the test, subjects were divided into three treatment groups. One was a control, which got information explaining that social psychologists had collected data and concluded that the CRT is a valid measure of how "open-minded and reflective" a person is.
Another was the "believer scores higher" condition: in that one, subjects were told in addition that individuals who believe in climate change have been determined to score higher on the CRT.
Finally there was the "skeptic scores higher" condition: in that one, subjects were told that individuals who are skeptical of climate change have been found to score higher.
Subjects in all three conditions then registered what they thought of the validity of the CRT by indicating how strongly they agreed or disagreed with the statement "I believe the word-problem test that I just took supplies good evidence of how reflective and open-minded a person is."
Because belief in climate change is associated with membership in identity-defining cultural groups that are indicated by political outlooks (and of course even more strongly by cultural worldviews), one would expect identity-protective reasoning to unconsciously motivate individuals to selectively credit or dismiss the information on the validity of the CRT conditional on whether they had been advised that it showed that individuals who subscribed to their group's position on climate change were more or less "reflective" and "open-minded" than those who subscribed to the rival group's position.
The study tested that proposition, then.
But it also was designed to pit a number of different theories of motivated reasoning against each other, including what I called the "bounded rationality thesis" (BRT) and the "ideological asymmetry thesis" (IAT).
BRT sees motivated reasoning as just another one of the cognitive biases associated with over-reliance on heuristic rather than effortful, conscious information-processing. It thus predicts that identity-protective reasoning, as measured in this experiment, will be lower in individuals who score higher on the CRT.
IAT, in contrast, attributes politically motivated reasoning to a supposedly dogmatic reasoning style (one supposedly manifested by self-report measures of the sort that are vastly inferior to CRT) on the part of individuals who are politically conservative. Because CRT has been used as a measure of open-minded engagement with evidence (particularly in studies of religiosity), IAT would predict that motivated reasoning ought to be more pronounced among conservatives than among liberals.
The third position was the "expressive rationality thesis" (ERT). ERT posits that it is individually rational, once positions on disputed risks and comparable facts have acquired a social meaning as badges of membership in and loyalty to a self-defining affinity group, to process information about societal risks (ones their individual behavior can't affect meaningfully anyway) in a manner that promotes beliefs consistent with the ones that predominate in their group. That kind of reasoning style will tend to make the individuals who engage in it fare better in their everyday interactions with peers--notwithstanding its undesirable social impact in inhibiting diverse democratic citizens from converging on the best available evidence.
Contrary to IAT, ERT predicts that identity-protective reasoning will be ideologically symmetric. Being "liberal" is an indicator of being a member of an identity-defining affinity group just as much as being "conservative" is, and thus furnishes the same incentive in individual group members to process information in a manner that promotes status-protecting beliefs in line with those of other group members.
Contrary to BRT and IAT, ERT predicts that this identity-protective reasoning effect will increase as individuals become more proficient in the sort of critical reasoning associated with CRT. Because it is perfectly rational--at an individual level--for individuals to process information relevant to social risks and related issues in a manner that protects their status within their identity-defining affinity groups, those who possess the sort of reasoning proficiency associated with CRT can be expected to use it to do that even more effectively.
The experiment supported ERT more than BRT or IAT.
When I say this, I ought to be able to show you that in the raw data!
By "raw data," I mean the data before it has been modeled statistically. Obviously, to "see" anything in it, one has to arrange the raw data in the manner that makes it admit of visual interpretation.
So for that purpose, I plotted the subjects (N = 1750) on a grid comprising their "right-left" political outlooks (as measured with a composite scale that combined their responses to a conventional 7-point party self-identification measure and a 5-point liberal-conservative ideology measure) on the x-axis and their assessment of the CRT as measured by the 6-point "agree-disagree" outcome variable on the y-axis.
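For readers curious about the mechanics of such a composite scale, here is a minimal sketch of one standard approach--standardize each item and average the z-scores. The responses below are made up for illustration, and the paper should be consulted for the actual scoring; only the general technique is shown.

```python
import statistics

def zscores(xs):
    """Standardize a list of responses (mean 0, SD 1)."""
    mu = statistics.fmean(xs)
    sd = statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

# Hypothetical responses: a 7-point party self-identification item
# (1 = Strong Democrat ... 7 = Strong Republican) and a 5-point
# liberal-conservative ideology item (1 = Very liberal ... 5 = Very conservative).
party_id = [1, 2, 4, 6, 7, 3, 5]
ideology = [1, 2, 3, 4, 5, 2, 4]

# Composite "right-left" outlook score: the average of the two standardized items,
# so that higher values indicate a more conservative, Republican outlook.
conserv_repub = [(p + i) / 2 for p, i in zip(zscores(party_id), zscores(ideology))]
```

Because each item is centered before averaging, the composite is itself centered at zero, with the sample mean as the natural reference point.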
There are, unfortunately, too many subjects to present a scatterplot: the subjects would end up clumped on top of each other in blobs that obscured the density of observations at particular points, a problem called "overplotting."
But "lowess" or "locally weighted regression" is a technique that allows one to plot the relative proportions of the observations in relation to the coordinates on the grid. Lowess is a kind of anti-model modeling of the data; it doesn't impose any particular statistical form on the data but in effect just traces the moving average or proportion along tiny increments of the x-axis.
Plotting a lowess line faithfully reveals the tendency in the data one would be able to see with a scatterplot but for the overplotting.
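To make the "moving average" intuition concrete, here is a deliberately crude stand-in for lowess with toy data. Real lowess fits a weighted local regression at each point rather than a plain local mean, so this is only a sketch of the idea, not the procedure used to make the plots.

```python
def local_mean_smooth(x, y, window=1.0):
    """A crude stand-in for lowess: at each grid point along the x-axis,
    average the y-values of observations whose x falls within +/- window.
    Real lowess fits distance-weighted local regressions instead of
    plain local means, but the moving-average intuition is the same."""
    grid = sorted(set(x))
    smoothed = []
    for g in grid:
        nearby = [yi for xi, yi in zip(x, y) if abs(xi - g) <= window]
        smoothed.append((g, sum(nearby) / len(nearby)))
    return smoothed

# Toy data in which the outcome slopes downward as the
# political-outlook score rises (as in "believer scores higher"):
x = [-2, -1, 0, 1, 2, -2, -1, 0, 1, 2]
y = [ 6,  5, 4, 3, 2,  5,  5, 4, 2, 2]
line = local_mean_smooth(x, y, window=0.5)
```

Plotting `line` would trace the downward tendency in the toy data without the overplotting that a raw scatterplot of many stacked observations would suffer.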
Okay, so here I've created an animation that plots the lowess regression line successively for the control, the "believer scores higher," and the "skeptic scores higher" conditions:
What you can see is that there is essentially no meaningful relationship between the perceived validity of CRT and political outlooks in the "control" condition.
In "believer scores higher," however, the willingness of subjects to credit the data slopes downward: the more "liberal, Democratic" subjects are, the more they credit it, while the more "conservative, Republican" they are the less they do so.
Likewise, in the "skeptic scores higher" condition, the willingness of subjects to credit the data slopes upward: the more "conservative, Republican" subjects are, the more they credit it, while the more "liberal, Democratic" they are the less they do so.
That's consistent with identity-protective reasoning.
All of the theories--BRT, IAT, and ERT--predicted that.
But IAT predicted the effect would be asymmetric with respect to ideology. Doesn't look that way to me...
Now consider the impact of the experimental assignment in relation to scores on the CRT. This animation plots the effect of ideology on the perceived validity of the CRT separately for subjects based on their own CRT scores (information, of course, with which they were not supplied):
What you can see is that the steepness of the slopes intensifies--the relative proportion of subjects moving in the direction associated with identity-protective reasoning gets larger--as CRT goes from 0 (the minimum score), to 0.65 (the sample mean), to 1 (about the 80th percentile), to >1 (approximately the 90th percentile and above).
That result is inconsistent with BRT, which sees motivated reasoning as a product of overreliance on heuristic reasoning, but consistent with ERT, which predicts that individuals will use their cognitive reasoning proficiencies to engage in identity-protective reasoning.
Equivalents of these "raw data" summaries appear in the paper--although they aren't animated, which I think is a shame!
So that's that.
Or not really. That's what the data look like--and the inference that they seem to support.
To discipline and extend those inferences, we can now fit a model.
I applied an ordered logistic regression to the experimental data, the results of which confirmed that the observed effects were "statistically significant." But because raw regression output is not particularly informative to a reflective person trying to understand the practical effect of the data, I also used the model to predict the impact of the experimental assignment on typical partisans (setting the predictors at "liberal Democrat" and "conservative Republican," respectively) at both "low CRT" (CRT = 0) and "high CRT" (CRT = 2).
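For readers unfamiliar with how predicted probabilities come out of an ordered logit, here is a sketch of the arithmetic. The cutpoints and linear predictor below are invented for illustration--they are not the paper's estimates--but the formula is the standard cumulative-logit one.

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities from an ordered-logit model:
    P(Y <= k) = logistic(tau_k - x'beta), and each category's probability
    is the difference between adjacent cumulative probabilities."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(t - xb) for t in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical values (NOT the paper's estimates): five cutpoints for the
# 6-point agree-disagree outcome, and a linear predictor x'beta computed
# for some condition x outlook x CRT profile of interest.
cutpoints = [-2.0, -1.0, 0.0, 1.0, 2.0]
probs = ordered_logit_probs(xb=0.5, cutpoints=cutpoints)
```

Comparing such predicted response distributions across profiles (e.g., "liberal Democrat" vs. "conservative Republican" at low and high CRT) conveys the practical size of an effect far better than a column of log-odds coefficients.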
Not graphically reporting multivariate analyses--leaving readers staring at columns of regression coefficients with multiple asterisks, the practical import of which is indecipherable even to someone who understands what the output means--is another thing researchers shouldn't do.
But even if they do a good job graphically reporting their statistical model results, they must first show the reader that the raw data support the inferences the model is being used to test, discipline, and refine.
Otherwise there's no way to know whether the modeling choice is valid--and no way to assess whether the results support the conclusion the researcher has reached.