Are people more conservative when “primed for reflection” or when “primed for intuition”? Apparently both . . . . (or CRT & identity-protective reasoning Part 2^8)

1.  “The obvious reason people disagree with me is because they just can’t think clearly! Right? Right??” Well, I don’t think so, but I could be wrong

As the 14 billion readers of this blog know, I’m interested in the relationship between cognition and political outlooks. Is there a connection between critical reasoning dispositions and left-right ideology? Does higher cognitive proficiency of one sort or another counteract the tendency of people to construe empirical data in a politically biased way?

The answer to both these questions, the data I’ve collected persuades me, is no.

But as I explained just the other day, if one gets how empirical proof works, then one understands that any conclusion one comes to is always provisional. What one “believes” about some matter that admits of empirical inquiry is just the position one judges to be most supported by the best available evidence now at hand.

2.  New evidence that liberals are in fact “more reflective” than conservatives?
So I was excited to see the paper “Reflective liberals and intuitive conservatives: A look at the Cognitive Reflection Test and ideology,” Judgment and Decision Making, July 2015, pp. 314–331, by Deppe, Gonzalez, Neiman, Jacobs, Pahlke, Smith & Hibbing.

Deppe et al. report the results from a number of studies on critical reasoning and political ideology.  The one that got my attention was one in which Deppe et al. reported that they had found “moderately sized negative correlations between CRT scores and conservative issue preferences” in a “nationally representative” sample (pp. 316, 320).

As explained 9,233 times on this blog, the CRT is the standard assessment instrument used to measure the disposition of individuals to engage in effortful, conscious “System 2” information processing as opposed to the intuitive, heuristic “System 1” sort associated with myriad cognitive biases (Frederick 2005).
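For concreteness: each of the three CRT items (the famous “bat and ball” problem is the first) has a correct answer and an intuitively compelling lure, and the score is simply a count of correct answers. Here is a minimal sketch in Python; the item answers are from Frederick (2005), but the data structures and names are my own illustration:

```python
# Sketch of CRT scoring (item answers from Frederick 2005; the data
# structures and names here are illustrative).  Each item has a correct
# answer and an intuitive "lure"; the CRT score is simply the number of
# correct answers (0-3).

CRT_ITEMS = {
    "bat_ball":  {"correct": 0.05, "lure": 0.10},  # ball costs $0.05, not $0.10
    "widgets":   {"correct": 5,    "lure": 100},   # 5 minutes, not 100
    "lily_pads": {"correct": 47,   "lure": 24},    # 47 days, not 24
}

def crt_score(responses):
    """Count correct answers across the three CRT items."""
    return sum(
        1 for item, answer in responses.items()
        if answer == CRT_ITEMS[item]["correct"]
    )

# A respondent who falls for two lures but gets the widgets item right:
respondent = {"bat_ball": 0.10, "widgets": 5, "lily_pads": 24}
print(crt_score(respondent))  # 1
```

A respondent who reflexively answers the lures scores 0; a fully reflective one scores 3.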

It was really really important, Deppe et al. recognized, to use a stratified general population sample recruited by valid means to test the relationship between political outlooks and CRT.

Various other studies, they noted, had relied on samples that don’t support valid inferences about the relationship between cognitive style and political outlooks. These included M Turk workers, whose scores on the CRT are unrealistically high (likely b/c they’ve been repeatedly exposed to it); who underrepresent conservatives, and thus necessarily include atypical ones; and who often turn out to be non-Americans disguising their identities (Chandler, Mueller, & Paolacci 2014; Krupnikov & Levine 2014; Shapiro, Chandler, & Mueller 2013).

Other scholars, Deppe et al. noted, have constructed samples from “visitors to a web site” on cognition and moral values who were expressly solicited to participate in studies in exchange for finding out about the relationship between the two in themselves.  As a reflective colleague pointed out, this not particularly reflective sampling method is akin to polling a site’s own visitors to figure out how common “liking football” is among different groups in the general population.

The one study Deppe et al. could find that used a valid general population sample to examine the correlation between CRT scores and right-left political outlooks was one I had done (Kahan 2013).  And mine, they noted, had found no meaningful correlation.

Deppe et al. attributed the likely difference in our results to the way in which they & I measured political orientations.  I used a composite measure that combined responses to standard, multi-point conservative-liberal ideology and party self-identification measures.  But “self-reported ideology,” they observed, “is well-known to be a highly imperfect indicator of individual issue preferences.”

So instead they measured such preferences directly, soliciting their subjects’ responses to a variety of specific policies, including gay marriage, torture of terrorism suspects, government health insurance, and government price controls (an oldie but a goody; “liberal” Richard Nixon was the last US President to resort to this policy).

On the basis of these responses they formed separate “Economic,” “Moral,” and “Punishment” “conservative policy-preference” scales.  The latter two, but not the former, had a negative correlation with CRT, as did a respectably reliable scale (α = 0.69) that aggregated all of these positions.
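That α, by the way, is Cronbach’s alpha, the standard internal-consistency statistic: α = k/(k−1) × (1 − Σ item variances / variance of the summed scale). A quick sketch of the computation (on simulated item responses, not Deppe et al.’s actual data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated responses: 5 policy items all driven by one latent disposition,
# plus item-level noise (the numbers are illustrative only).
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = latent[:, None] + rng.normal(size=(500, 5))

print(round(cronbach_alpha(items), 2))
```

For this setup alpha comes out around 0.8; the more the items hang together relative to their individual noise, the closer it gets to 1.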

Having collected data from a Knowledge Networks sample “to determine if the findings” they obtained with M Turk workers “held up in a more representative sample” (p. 319), they heralded this result as “offer[ing] clear and consistent support to the idea that liberals are more likely to be reflective compared to conservatives.”

That’s pretty interesting!

So I decided I should for sure take the study into account in my own perpetual weighing of the evidence on how critical reasoning relates to political outlooks and comparable indicators of cultural identity.

I downloaded their data from the JDM website with the intention of looking it over and then seeing if I could replicate their findings with nationally representative datasets of my own that included liberal and conservative policy positions and CRT scores.

Well, I was in fact able to replicate the results in the Deppe et al. data.

However, what I ended up replicating were results materially different from what Deppe et al. had actually reported. . . .

3.  Unreported data from a failed “priming” experiment: System 2 reasoners get more conservative when primed to be “reflective” and when primed to be “intuitive”!

Deppe et al. had collected their CRT and political-position data as part of a “priming” experiment.  The idea was to see if subjects’ political outlooks became more or less conservative when induced or “primed” to rely either on “reflection,” of the sort associated with System 2 reasoning, or on “intuition,” of the sort associated with System 1.

[Figure: full results from the TESS/Knowledge Networks sample (study 2). Very strange indeed!]

They thus assigned 2/3 of their subjects randomly to distinct “reflection” and “intuition” conditions. Both were given word-unscrambling puzzles that involved dropping one of five words and using the other four to form a sentence.  The sentences that a person could construct in the “reflection” condition emphasized use of reflective reasoning (e.g., “analyze the numbers carefully”; “I think all day”), while those in the “intuition” condition emphasized the use of “intuitive” reasoning (e.g., “Go with your gut”; “she used her instinct”).

The remaining 1/3 of the sample got a “neutral prime”: a puzzle that consisted of dropping and unscrambling words to form statements having nothing to do with either reflection or intuition (e.g., “the sky is blue”; “he rode the train”).

Deppe et al.’s hypothesis was that “subjects receiving an intuitive prime w[ould] report more conservative attitudes” and those “receiving a reflective prime . . . more liberal attitudes,” relative to those receiving a “neutral prime.”

Well, the experiment didn’t exactly come out as planned.  Statistical analyses, they reported (p. 320),

show[ed] no differences in the number of correct CRT answers provided by the subjects between any group, indicating that the priming protocol manipulation . . . failed to induce any higher or lower amounts of reflection. With no differences in thinking style, again unsurprisingly, there were no statistically significant differences between the groups on self-reported ideology or issue attitudes.

But I discovered that the results were actually way more interesting than that!

There may have been “no differences” in the CRT scores and “conservative issue preferences” of subjects assigned to different conditions, but it’s not true there were no differences in the correlation between these two variables in the various conditions: in both the “reflection” and “intuition” conditions, subjects scoring higher on the CRT adopted “significantly” more conservative policy stances than their counterparts in the “neutral priming” condition! By the same token, subjects scoring lower in CRT necessarily became more liberal in their policy stances in the “reflection” & “intuition” conditions.
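To see how there can be “no differences” between conditions in the variables themselves and yet big differences in the within-condition correlations, here is a simulation sketch. The correlation values below are invented for illustration; they are not Deppe et al.’s estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

def pearson_r(x, y):
    """Pearson correlation between two vectors."""
    return np.corrcoef(x, y)[0, 1]

# Simulate the structure described above: CRT and conservatism have the
# same (zero) means in every condition, but their correlation differs
# by condition.  Target correlations are made up for illustration.
n = 300
data = {}
for cond, r_true in [("neutral", -0.25), ("reflection", 0.15), ("intuition", 0.15)]:
    crt = rng.normal(size=n)
    conserv = r_true * crt + np.sqrt(1 - r_true**2) * rng.normal(size=n)
    data[cond] = (crt, conserv)

for cond, (crt, conserv) in data.items():
    print(f"{cond:10s} mean CRT {crt.mean():+.2f}  r(CRT, conserv) {pearson_r(crt, conserv):+.2f}")

# Pooling all three conditions washes the within-condition correlations out:
all_crt = np.concatenate([d[0] for d in data.values()])
all_con = np.concatenate([d[1] for d in data.values()])
print("pooled r:", round(pearson_r(all_crt, all_con), 2))
```

A test comparing condition means would find nothing here, even though the neutral-condition correlation is negative and the other two are positive.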

Wow!  That’s really weird!

If one took the experimental effect seriously, one would have to conclude that priming individuals for “reflection” makes those who are the most capable and motivated to use System 2 reasoning (the conscious, effortful, analytic type) become more conservative–and that priming these same persons for “intuition” makes them more conservative too!

4.  True result in Deppe et al.: “more representative sample” fails to “replicate” negative correlation between conservative policy positions and CRT!

Deppe et al. don’t report this result.  Likely they concluded, quite reasonably, that this whacky, atheoretical outcome was just noise, and that the only thing that mattered was that the priming experiment just didn’t work (same for the ones they attempted on M Turk workers, and same for a whole bunch of “replications” of classic studies in this genre).

But here’s the rub.

The “moderately sized negative correlation[] between CRT scores and conservative issue preferences overall” that Deppe et al. report finding in their “nationally representative” sample (p. 319) was based only on subjects in the “neutral prime” condition.

As I just explained, relative to the “neutral priming” condition, there was a positive relationship “between CRT scores and conservative issue preferences overall” in both the “reflection” and “intuition priming” conditions.

If Deppe et al. had included the subjects from the latter two conditions in their analysis of the results of study 2, they wouldn’t have detected any meaningful correlation—positive or negative—“between CRT scores and conservative issue preferences overall” in their critical “more representative sample.”

It doesn’t take a ton of reflection to see why, under these circumstances, it is simply wrong to characterize the results in study 2 as furnishing “correlational evidence to support the hypothesis that higher CRT scores are associated with being liberal.”

For purposes of assessing how CRT and conservatism relate to one another, being assigned to the “neutral priming” condition was no more or less a “treatment” than being assigned to the “intuition” and “reflection” conditions.  The subjects in the “neutral prime” condition did a word puzzle—just as the subjects in the other treatments did.  Insofar as the experimental assignment didn’t generate “differences in the number of correct CRT answers” or in “issue attitudes” between the conditions (p. 320), then either no one was treated for practical purposes or everyone was but in the same way: by being assigned to do a word puzzle that had no effect on ideology or CRT scores.

Of course, the correlations between conservative policy positions and CRT did differ between conditions.  As I pointed out, Deppe et al. understandably chose not to report that their “priming” experiment had “caused” individuals high in System 2 reasoning capacity to become more conservative (and those low in System 2 reasoning correspondingly more liberal) both when “primed” for “reflection” and when “primed” for intuition.  The more sensible interpretation of their weird data was that the priming manipulation had no meaningful effect on either conservativism or CRT scores.

But if one takes that very reasonable view, then it is unreasonable to treat the CRT-conservatism relationship in the “neutral priming” condition as if it alone were the “untreated” or “true” one.

If the effects of experimental assignments are viewed simply as noise—as I agree they should be!—then the correct way to assess the relationship between CRT & conservatism in study 2 is to consider the responses of subjects from all three conditions.

An alternative that would be weird but at least fully transparent would be to say that “in 2 out of 3 ‘subsamples,’ ” the “more representative sample” failed to “replicate” the negative conservative-CRT correlation observed in their M Turk samples.

But the one thing that surely isn’t justifiable is to divide the sample into 3 & then report the data from the one subsample that happens to support the authors’ hypothesis — that conservatism & CRT are negatively correlated — while simply ignoring the contrary results in the other two.

I’m 100% sure this wasn’t Deppe et al.’s intent, but by only partially reporting the data from their “nationally representative sample” Deppe et al. have unquestionably created a misimpression.  There’s just no chance any reader would ever have guessed that the data looked like this given their description of the results—and no way a reader apprised of the real results would ever agree that their “more representative sample” had “replicated” their M Turk sample finding of a “negative correlation[] between CRT scores and conservative issue preferences overall” (p. 320).

5. Replicating Deppe et al.

As I said, I was intrigued by Deppe et al.’s claim that they had found a negative correlation between conservative policy positions and CRT scores and wanted to see if I could replicate their finding in my own data set.

It turns out their study didn’t find the negative correlation they reported, though, when one includes responses of the 2/3 of the subjects unjustifiably omitted from their analysis of the relationship between CRT scores and conservative policy positions.

Well, I didn’t find any such correlation either when I performed a comparable data analysis on a large (N = 1600) nationally representative CCP (YouGov) study sample from 2012—one in which subjects hadn’t been assigned to do any sort of word-unscrambling puzzle before taking the CRT.

In my sample, subjects responded to a battery of “issue position” items.

The responses formed two distinct factors, one suggesting a disposition to support or oppose legalization of prostitution and legalization of marijuana, and the other a disposition to support or oppose liberal policy positions on the remaining issues except for resumption of the draft, which loaded on neither factor.

Reversing the signs of the factor scores, I suppose one could characterize these as “social” and “economic_plus” conservativism, respectively.

Both had very very small but “significant” correlations with CRT.

[Figure: bivariate correlations between CRT and “conservative overall” and subdomains in nationally representative CCP/YouGov sample. Z_conservrepub is a composite scale comprising liberal-conservative ideology and partisan self-id (α = 0.82).]

But the signs were in opposing directions: Economic_plus, r = 0.06, p < 0.05; and Social, r = -0.14, p < 0.01.

Not surprisingly, then, these two canceled each other out (r = -0.01, p = 0.80) when one examined “conservative policy positions overall”—i.e., all the policy positions aggregated into a single scale (α = 0.80).

That is exactly what I found, too, when I included the 2/3 of the subjects that Deppe et al. excluded from their report of the correlation between CRT and conservative policy positions in Study 2.  That is, if one takes their conservative subdomain scales as Deppe et al. formed them, there is a small negative correlation between CRT and “Punishment” conservativism (r = -0.13, p < 0.01) but a small positive one (r = 0.17, p < 0.01) between CRT and “Economic” conservativism.

There is another, even smaller negative correlation between CRT and the “Moral” conservative policy position scale (r = -0.08, p = 0.08).

Overall, these tiny correlations all wash out (“conservative issue preferences overall”: r = -0.01, p = 0.76).
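The wash-out arithmetic is easy to check by simulation. Using the subdomain correlations just reported as rough targets (the data-generating process below is, obviously, made up, not the actual survey data), the aggregated scale ends up essentially uncorrelated with CRT:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1600  # same order of magnitude as the CCP/YouGov sample

crt = rng.normal(size=n)

def scale_with_r(r, x):
    """Simulated standardized scale with target correlation r with x."""
    return r * x + np.sqrt(1 - r**2) * rng.normal(size=len(x))

# Rough stand-ins for the three subdomain scales (target correlations
# taken from the text above; the data themselves are simulated):
econ   = scale_with_r( 0.17, crt)  # "Economic": small positive r with CRT
punish = scale_with_r(-0.13, crt)  # "Punishment": small negative r
moral  = scale_with_r(-0.08, crt)  # "Moral": even smaller negative r

# Aggregate "conservative issue preferences overall" scale:
overall = econ + punish + moral
r_overall = np.corrcoef(crt, overall)[0, 1]
print(round(r_overall, 2))  # essentially zero
```

The expected covariance of CRT with the sum is just 0.17 − 0.13 − 0.08 = −0.04, which, once scaled by the aggregate’s standard deviation, leaves a correlation indistinguishable from zero at this sample size.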

That—and not any deficiency in conventional left-right ideology measures (the ones routinely used by the “neo-authoritarian personality” scholars (Jost et al. 2003), whose work Deppe et al. cite their own study as supporting)—also explains why there is zero correlation between CRT and liberal-conservative ideology and partisan self-identification.

In any event, when one simply looks at all the data in a fair-minded way, one is left with nothing—and hence nothing that supplies anyone with any reason to revise his or her views on the relationship between political outlooks and critical reasoning capacities.

6. Yucky NHT–again

One last point, again on the vices of “null hypothesis testing.”

Because they were so focused on their priming experiment non-result, I’m sure it just didn’t occur to Deppe et al. that it made no sense for them to exclude 2/3 of their sample when computing the relationship between conservativism and CRT scores in Study 2.

But here’s something I think they really should have thought a bit more about. . . . Even if the results in their study were exactly as they reported, the correlations were so trivially small that they could not, in my view, reasonably support a conclusion so strong (not to mention so clearly demeaning for 50% of the U.S. population!) as

We find a consistent pattern showing that those more likely to engage in reflection are more likely to have liberal political attitudes while those less likely to do so are more likely to have conservative attitudes….

…The results of the studies reported above offer clear and consistent support to the idea that liberals are more likely to be reflective compared to conservatives….

I’ll say more about that “tomorrow,” when I return to a theme briefly touched on a couple days ago: the common NHT fallacy that statistical “significance” conveys information on the weight of the evidence in relation to a study hypothesis.
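To preview the arithmetic: with N = 1600, a correlation of r = 0.06 clears the conventional p < .05 bar while explaining less than half of one percent of the variance. A standard-library-only sketch, using the Fisher z approximation for the p-value:

```python
import math

def fisher_p(r, n):
    """Two-sided p-value for a Pearson correlation r in a sample of size n,
    via the Fisher z approximation: z = atanh(r) * sqrt(n - 3)."""
    z = math.atanh(r) * math.sqrt(n - 3)
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# "Significant" correlations whose substantive magnitude is trivial:
for r, n in [(0.06, 1600), (0.14, 1600)]:
    print(f"r = {r:+.2f}, N = {n}: p = {fisher_p(r, n):.3f}, "
          f"variance explained = {100 * r**2:.2f}%")
```

The point: at large N, rejecting the null tells you almost nothing about whether the effect is big enough to matter; r = 0.06 is “significant” here yet accounts for 0.36% of the variance.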


Chandler, J., Mueller, P. & Paolacci, G. Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods 46, 112-130 (2014).

Deppe, K.D., Gonzalez, F.J., Neiman, J.L., Jacobs, C., Pahlke, J., Smith, K.B. & Hibbing, J.R. Reflective liberals and intuitive conservatives: A look at the Cognitive Reflection Test and ideology. Judgment and Decision Making 10, 314-331 (2015).

Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).

Jost, J.T., Glaser, J., Kruglanski, A.W. & Sulloway, F.J. Political Conservatism as Motivated Social Cognition. Psych. Bull. 129, 339-375 (2003).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Krupnikov, Y. & Levine, A.S. Cross-Sample Comparisons and External Validity. Journal of Experimental Political Science 1, 59-80 (2014).

Shapiro, D.N., Chandler, J. & Mueller, P.A. Using Mechanical Turk to Study Clinical Populations. Clinical Psychological Science 1, 213-220 (2013).
