A while back I did a couple of posts (here & here) on Nam, H.H., Jost, J.T. & Van Bavel, J.J., "Not for All the Tea in China!" Political Ideology and the Avoidance of Dissonance, PLoS ONE 8(4): e59837, doi:10.1371/journal.pone.0059837 (2013).
NJV-B asked subjects (Mechanical Turk workers; more on that presently) to write "counter-attitudinal essays"--ones that conflicted with the positions associated with subjects' self-reported ideologies--on the relative effectiveness of Democratic and Republican Presidents. They found that Democrats were "significantly" more likely to agree to write an essay comparing Bush II favorably to Obama or Reagan favorably to Clinton than Republicans were to write one comparing Obama favorably to Bush II or Clinton favorably to Reagan.
NJV-B interpreted this result as furnishing support for the "asymmetry thesis," the proposition that ideologically motivated reasoning is disproportionately associated with a right-leaning or conservative ideology. The stronger aversion of Republicans to writing counter-attitudinal essays, they reasoned, implied greater resistance on their part to reflecting on and engaging evidence uncongenial to their ideological predispositions.
I wrote a post explaining why I thought the design was a weak one.
Well, now Mark Brandt & Jarret Crawford have released a neat working paper that reports a replication study.
They failed to replicate the NJV-B result. That is, they found that the subjects' willingness to write a counter-attitudinal essay was not correlated with their ideological dispositions.
That's interesting enough, but the paper also has some great stuff in it on other potential dispositional influences on the subjects' assent to write counter-attitudinal essays.
They found, e.g., that the subjects' score on a "confidence in science" measure did predict their willingness to write counter-attitudinal essays.
They also found that "need for closure"-- a self-report measure of cognitive style that consists of agree-disagree items such as "When thinking about a problem, I consider as many different opinions on the issue as possible" -- did not predict any lesser or greater willingness to advocate for the superiority of the "other side's" Presidents.
These additional findings are relevant to the discussion we've been having about dispositions that might counteract the "conformity" effects associated with cultural cognition & like forms of motivated reasoning.
One shortcoming -- easily remedied -- relates to BC's reporting of their results. There are some cacophonous bar charts that one can inspect to see the impact (or lack thereof) of ideology on the subjects' willingness to write counter-attitudinal essays.
But the magnitudes of the other reported effects are not readily discernible. In the case of the "confidence in science" result, the authors report only a logit coefficient for an interaction term (in a regression model the full output for which is not reported). Even people who know what a logit coefficient is won't be able to gauge the practical significance of a result reported in this fashion (& what a shame to relate one's findings exclusively in a metric only those who "read regression" can understand, for they comprise only a tiny fraction of the world's curious and intelligent people).
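Just to make the complaint concrete, here is the sort of translation I have in mind, sketched in Python with wholly made-up numbers (the intercept and coefficient are invented; nothing here comes from the BC paper): a logit coefficient starts to mean something only once it is converted into predicted probabilities at meaningful values of the predictor.

```python
# Illustration only: translating a logit coefficient into predicted
# probabilities. The intercept and coefficient below are made-up numbers,
# not values taken from the Brandt & Crawford paper.
import numpy as np

def inv_logit(x):
    """Convert log-odds into a probability."""
    return 1.0 / (1.0 + np.exp(-x))

intercept = -0.25      # hypothetical baseline log-odds of agreeing to write the essay
b_conf_sci = 0.40      # hypothetical logit coefficient for confidence in science (per SD)

# Predicted probability of agreeing at low (-1 SD), average (0), and
# high (+1 SD) confidence in science:
for z in (-1, 0, 1):
    p = inv_logit(intercept + b_conf_sci * z)
    print(f"confidence in science z = {z:+d}: Pr(agree) = {p:.2f}")
```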
For the need-for-closure result, the authors don't report anything except that the relevant interaction term in an unreported regression model was non-significant. It is thus not possible to determine whether the effect of "need for closure" might have been meaningfully associated with aversion to engaging dissonant evidence & failed to achieve "statistical significance" only for lack of an adequately large sample.
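Again purely for illustration -- the coefficient and standard error below are invented, not BC's -- a quick confidence-interval calculation shows why a bare "non-significant" verdict leaves that question wide open.

```python
# Illustration only: why "non-significant" by itself is uninformative.
# The coefficient and standard error are invented; given them, the 95% CI
# shows the range of effect sizes the data remain consistent with.
b = 0.30    # hypothetical need-for-closure interaction coefficient (log-odds scale)
se = 0.25   # hypothetical standard error

z = b / se
ci_low, ci_high = b - 1.96 * se, b + 1.96 * se
print(f"z = {z:.2f}  (|z| < 1.96, so 'not significant' at p < .05)")
print(f"95% CI for the effect: [{ci_low:.2f}, {ci_high:.2f}]")
# The interval runs from roughly zero to a sizable effect, so a bare "n.s."
# verdict tells us little without knowing how precise the estimate is.
```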
These sorts of reporting problems are endemic to social psychology, where papers typically obsess over p-values & related test statistics & forgo graphic or other reporting strategies that make transparent the nature and strength of the inferences that the data support. But I've seen worse, and I don't think the reporting here is hiding some flaw in the BC study-- on the contrary, it is concealing the insight that one might derive from it!
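For what it's worth, here is a sketch (in Python, with invented numbers) of the kind of graphic reporting I mean: show the predicted probabilities, with their uncertainty, and let readers see for themselves how big the effect is.

```python
# Sketch of the sort of graphic display meant here: predicted probabilities
# with uncertainty intervals instead of a table of logit coefficients.
# Every number below is invented for illustration.
import matplotlib.pyplot as plt

groups = ["Low confidence\nin science", "High confidence\nin science"]
pr_agree = [0.38, 0.55]        # hypothetical predicted probabilities
ci_half = [0.07, 0.06]         # hypothetical 95% CI half-widths

x = [0, 1]
fig, ax = plt.subplots(figsize=(4, 3))
ax.errorbar(x, pr_agree, yerr=ci_half, fmt="o", capsize=4)
ax.set_xticks(x)
ax.set_xticklabels(groups)
ax.set_xlim(-0.5, 1.5)
ax.set_ylim(0, 1)
ax.set_ylabel("Pr(agree to write counter-attitudinal essay)")
fig.tight_layout()
plt.show()
```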
The last thing I can think of to say--others should chime in-- is that it is super unfortunate that BC, like NJV-B, relied on a Mechanical Turk "workforce" sample.
As I've written previously, selection bias, repeat exposure to cognitive style measures, and misrepresentations of nationality make MT samples an unreliable (invalid, I'd say) basis for testing hypotheses about the interaction of cognition and political predispositions.
So they should definitely not waste their time -- and their ingenuity -- on junky MT samples.