Question: Who is more disposed to motivated reasoning on climate change — hierarchical individualists or egalitarian communitarians? Answer: Both!


So it started innocently with a query from a colleague about whether the principal result in CCP’s Nature Climate Change study—which found that increased science comprehension (science literacy & numeracy) magnifies cultural polarization—might be in some way attributable to the “white male effect,” which refers to the tendency of white males to be less concerned with environmental risks than are women and nonwhites.

That seemed unlikely to me, seeing how the “white male effect” is itself very strongly linked to the extreme risk skepticism of white hierarchical individualist males (on certain risks at least).  But I thought the simple thing was just to plot the effect of increasing science comprehension on climate change risk perceptions separately for hierarchical and egalitarian white males, hierarchical and egalitarian females, and hierarchical and egalitarian nonwhites (individualism is uncorrelated with gender and race so I left it out just to make the task simpler).

That exercise generated one expected result and one slightly unexpected one. The expected result was that the effect of science comprehension in magnifying cultural polarization was clearly shown not to be confined to white males.

The less expected one was what looked like a slightly larger impact of science comprehension on hierarchs than egalitarians.

Actually, I’d noticed this before but never really thought about its significance, since it wasn’t relevant to the competing study hypotheses (viz., that science comprehension would reduce cultural polarization or that it would magnify it).

But it sort of fit the “asymmetry thesis” – the idea, which I associate mainly with Chris Mooney, that motivated reasoning is disproportionately concentrated in more “conservative” types (hierarchical individualists are more conservative than egalitarian communitarians—but the differences aren’t as big as you might think).

The pattern only sort of fits because in fact the “asymmetry thesis” isn’t about whether higher-level information processing (of the sort for which science comprehension is a proxy) generates greater bias in conservatives than liberals but only about whether conservatives are more ideologically biased, period.  Indeed, the usual story for the asymmetry thesis (John Jost’s, e.g.) is that conservatives are supposedly disposed to only heuristic rather than systematic information processing and thus to refrain from open-mindedly considering contrary evidence.

But here it seemed like maybe the data could be seen as suggesting that more reflective conservative respondents were more likely to display the fitting of risk perception to values—the signature of motivated reasoning.  That would be a novel variation of the asymmetry thesis but still a version of it.

In fact, I don’t think the asymmetry thesis is right.  I don’t think it makes sense, actually; the mechanisms for culturally or ideologically motivated reasoning involve group affinities generally, and extend to all manner of cognition (even to brute sense impressions), so why expect only “conservatives” to display it in connection with scientific data on risk issues like climate change or the HPV vaccine or gun control or nuclear power etc?

Indeed, I’ve now done one study—an experimental one—that was specifically geared to testing the asymmetry thesis, and it generated findings inconsistent with it: It showed that both “conservatives” and “liberals” are prone to motivated reasoning, and (pace Jost) the effect gets bigger as individuals become more disposed to use conscious, effortful information processing.

But seeing what looked like evidence potentially supportive of the asymmetry thesis, and having tried to avail myself of every opportunity to alert others when I saw what looked like contrary evidence, I thought it was very appropriate that I advise my (billions of) readers of what looked like a potential instance of asymmetry in my data, and also that I investigate it more closely (things I promised I would do at the end of my last blog entry).

So I reanalyzed the Nature Climate Change data in a way that I am convinced is the appropriate way to test for “asymmetry.”

Again, the “asymmetry thesis” asserts, in effect, that motivated reasoning (of which cultural cognition is one subclass) is disproportionately concentrated in more right-leaning individuals. As I’ve explained before, that expectation implies that a nonlinear model—one in which the manifestation of motivated reasoning is posited to be uneven across ideology—ought to fit the data better than a linear one, in which the impact of motivated reasoning is posited to be uniform across ideology.

In fact, neither a linear model nor any analytically tractable nonlinear model can plausibly be understood to be a “true” representation of the dynamics involved.  But the goal of fitting a model to the data, in this context, isn’t to figure out the “true” impact of the mechanisms involved; it is to test competing conjectures about what those mechanisms might be.

The competing hypotheses are that cultural cognition (or any like form of motivated reasoning) is symmetric with respect to cultural predispositions, on the one hand, and that it is asymmetrically concentrated in hierarch individualists, on the other.  If the former hypothesis is correct, a linear model—while almost certainly not “true”—ought to fit better than a nonlinear one; likewise, while any particular nonlinear model we impose on the data will almost certainly not be “true,” a reasonable approximation of a distribution that the asymmetry thesis expects ought to fit better if the asymmetry thesis is correct.

So apply these two models, evaluate the relative fit of the two, and adjust our assessments of the relative likelihood of the competing hypotheses accordingly.  Simple!

Actually, the first step is to try to see if we can simply see the posited patterns in the data. We’ll want to fit statistical models to the data to test whether we aren’t “seeing things”—to corroborate that apparent effects are “really” there and are unlikely to be a consequence of chance.  But we don’t want to engage in statistical “mysticism” of the sort by which effects that are invisible are magically made to appear through the application of complex statistical manipulations (this is a form of witchcraft masquerading as science; sometime in the future I will dedicate a blog post to denouncing it in terms so emphatic that it will raise questions about my sanity—or I should say additional ones).

So consider this:

It’s a simple scatter plot of subjects whose cultural outlooks are on average both “egalitarian” and “communitarian” (green), on the one hand, and ones whose outlooks are on average “hierarchical” and “individualistic” (black), on the other. On top of that, I’ve plotted LOWESS or “locally weighted scatter plot smoothing” lines. This technique, in effect, “fits” regression lines to tiny subsegments of the data rather than to all of it at once.

It can’t be relied on to reveal trustworthy relationships in the data because it is a recipe for “overfitting,” i.e., treating “noisy” observations as if they were informative ones.  But it is a very nice device for enabling us to see what the data look like.  If the impact of motivated reasoning is asymmetric—if it increases as subjects become more hierarchical and individualistic—we ought to be able to see something like that in the raw data, which the LOWESS lines are now affording us an even clearer view of.

I see two intriguing things.  One is evidence that hierarch individualists are indeed more influenced—more inclined to form identity-congruent risk perceptions—as science comprehension increases: the difference between “low” science comprehension HIs and “high” ones is about 4 units on the 11-point risk-perception scale; the difference between “low” and “high” ECs is less than 2.

However, the impact of science comprehension is bigger for ECs than HIs at the highest levels of science comprehension. The curve slopes down but flattens out for HIs near the far right. For ECs, the effect of increased science comprehension is pretty negligible until one gets to the far right—the highest score on science comprehension—at which point it suddenly juts up.

If we can believe our eyes here, we have a sort of mixed verdict.  Overall, HIs are more likely to form identity-congruent risk perceptions as they become more science comprehending; but ECs with the very highest level of science comprehension are considerably more likely to exhibit this feature of motivated reasoning than the most science comprehending HIs.

To see if we should believe what the “raw data” could be seen to be telling us, I fit two regression models to the data. One assumed that the impact of science comprehension on the tendency to form identity-congruent risk perceptions was linear, or uniform, across the range of the hierarchy and individualism worldview dimensions. The other assumed that it was “curvilinear”: essentially, I added terms to the model so that it reflected a quadratic regression equation. Comparing the “fit” of these two models, I expected, would allow me to determine which of the two relationships assumed by the models—linear (symmetric) or curvilinear (asymmetric)—was more likely true.

Result: The more complicated polynomial regression did fit better—had a slightly higher R2—than the linear one. The difference was only “marginally” significant (p = 0.06). But there’s no reason to stack the deck in favor of the hypothesis that the linear model fits better; if I started off with the assumption that the two hypotheses were equally likely, I’d actually be much more likely to be making a mistake if I inferred that the polynomial model doesn’t fit better than if I inferred that it does when p = 0.06!

In addition, the model corroborates the nuanced story told by the LOWESS-enhanced picture of the raw data.  It’s hard to tell this just from scrutinizing the coefficients of the regression output, so I’ve graphed the fitted values of the model (the predicted risk perceptions for study subjects) and fit “smoothed” lines to them (the gray zones around the lines correspond to the 0.95 confidence interval).  You can see that the decrease in risk perception for HIs is more or less uniform as science comprehension increases, whereas for ECs it is flat but starts to bow upward toward the extreme upper bound of science comprehension. In other words, HIs show more “motivated reasoning” conditional on science comprehension overall; but ECs who comprehend science the “best” are the most likely to display this effect.

What to make of this?

Well, not that much in my view!  As I said, it is a “split” verdict: the “asymmetric” relationship between science comprehension and the disposition to form identity-congruent risk perceptions suggests that each side is engaged in “more” motivated reasoning as science comprehension increases in one sense or another.

In addition, one’s interpretation of the output is hard to disentangle from one’s view about what the “truth of the matter” is on climate change.  If one assumes that climate change risk perceptions are lower than they should be at the sample mean, then HIs are unambiguously engaged in motivated reasoning conditional on science comprehension, whereas ECs are simply adjusting toward a more “correct” view at the upper range.  In contrast, if one believed that climate change risks are generally overstated, then one could see the results as corroborating that HIs form a “more accurate” view as they become more science comprehending, whereas ECs do not and in fact become more likely to latch onto the overstated view as they become most science comprehending.

I think I’m not inclined to revise upward or downward my assessment of the (low) likelihood of the asymmetry thesis on the basis of these data. I’m inclined to say we should continue investigating, and focus on designs (experimental ones, in particular) that are more specifically geared to generating clear evidence one way or the other.

But maybe others will have still other things to say.
