

Question: Who is more disposed to motivated reasoning on climate change -- hierarchical individualists or egalitarian communitarians? Answer: Both!


So it started innocently with a query from a colleague about whether the principal result in CCP’s Nature Climate Change study—which found that increased science comprehension (science literacy & numeracy) magnifies cultural polarization—might be in some way attributable to the “white male effect,” which refers to the tendency of white males to be less concerned with environmental risks than are women and nonwhites.

That seemed unlikely to me, seeing how the “white male effect” is itself very strongly linked to the extreme risk skepticism of white hierarchical individualist males (on certain risks at least).  But I thought the simple thing was just to plot the effect of increasing science comprehension on climate change risk perceptions separately for hierarchical and egalitarian white males, hierarchical and egalitarian females, and hierarchical and egalitarian nonwhites (individualism is uncorrelated with gender and race so I left it out just to make the task simpler).

That exercise generated one expected result and one slightly unexpected one. The expected result was that the effect of science comprehension in magnifying cultural polarization was clearly shown not to be confined to white males.

The less expected one was what looked like a slightly larger impact of science comprehension on hierarchs than egalitarians.

Actually, I’d noticed this before but never really thought about its significance, since it wasn’t relevant to the competing study hypotheses (viz., that science comprehension would reduce cultural polarization or that it would magnify it).

But it sort of fit the “asymmetry thesis” – the idea, which I associate mainly with Chris Mooney, that motivated reasoning is disproportionately concentrated in more “conservative” types (hierarchical individualists are more conservative than egalitarian communitarians—but the differences aren’t as big as you might think). 

The pattern only sort of fits because in fact the “asymmetry thesis” isn’t about whether higher-level information processing (of the sort for which science comprehension is a proxy) generates greater bias in conservatives than liberals but only about whether conservatives are more ideologically biased, period.  Indeed, the usual story for the asymmetry thesis (John Jost’s, e.g.) is that conservatives are supposedly disposed to only heuristic rather than systematic information processing and thus to refrain from open-mindedly considering contrary evidence.

But here it seemed like maybe the data could be seen as suggesting that more reflective conservative respondents were more likely to display the fitting of risk perception to values—the signature of motivated reasoning.  That would be a novel variation of the asymmetry thesis but still a version of it.

In fact, I don’t think the asymmetry thesis is right.  I don’t think it makes sense, actually; the mechanisms for culturally or ideologically motivated reasoning involve group affinities generally, and extend to all manner of cognition (even to brute sense impressions), so why expect only “conservatives” to display it in connection with scientific data on risk issues like climate change or the HPV vaccine or gun control or nuclear power etc?

Indeed, I’ve now done one study—an experimental one—that was specifically geared to testing the asymmetry thesis, and it generated findings inconsistent with it: It showed that both “conservatives” and “liberals” are prone to motivated reasoning, and (pace Jost) the effect gets bigger as individuals become more disposed to use conscious, effortful information processing.

But seeing what looked like evidence potentially supportive of the asymmetry thesis, and having made a point of alerting others whenever I saw what looked like contrary evidence, I thought it only appropriate to advise my (billions of) readers of this potential instance of asymmetry in my data, and to investigate it more closely (things I promised I would do at the end of my last blog entry).

So I reanalyzed the Nature Climate Change data in a way that I am convinced is the appropriate way to test for “asymmetry.”

Again, the “asymmetry thesis” asserts, in effect, that motivated reasoning (of which cultural cognition is one subclass) is disproportionately concentrated in more right-leaning individuals. As I’ve explained before, that expectation implies that a nonlinear model—one in which the manifestation of motivated reasoning is posited to be uneven across ideology—ought to fit the data better than a linear one, in which the impact of motivated reasoning is posited to be uniform across ideology.

In fact, neither a linear model nor any analytically tractable nonlinear model can plausibly be understood to be a “true” representation of the dynamics involved.  But the goal of fitting a model to the data, in this context, isn’t to figure out the “true” impact of the mechanisms involved; it is to test competing conjectures about what those mechanisms might be.

The competing hypotheses are that cultural cognition (or any like form of motivated reasoning) is symmetric with respect to cultural predispositions, on the one hand, and that it is asymmetrically concentrated in hierarch individualists, on the other.  If the former hypothesis is correct, a linear model—while almost certainly not “true”—ought to fit better than a nonlinear one; likewise, while any particular nonlinear model we impose on the data will almost certainly not be “true,” a reasonable approximation of a distribution that the asymmetry thesis expects ought to fit better if the asymmetry thesis is correct.

So apply these two models, evaluate the relative fit of the two, and adjust our assessments of the relative likelihood of the competing hypotheses accordingly.  Simple!

Actually, the first step is to try to see if we can simply see the posited patterns in the data. We’ll want to fit statistical models to the data to test whether we aren’t “seeing things”—to corroborate that apparent effects are “really” there and are unlikely to be a consequence of chance.  But we don’t want to engage in statistical “mysticism” of the sort by which effects that are invisible are magically made to appear through the application of complex statistical manipulations (this is a form of witchcraft masquerading as science; sometime in the future I will dedicate a blog post to denouncing it in terms so emphatic that it will raise questions about my sanity—or I should say additional ones).

So consider this:


It’s a simple scatter plot of subjects whose cultural outlooks are on average both “egalitarian” and “communitarian” (green), on the one hand, and ones whose outlooks are on average “hierarchical” and “individualistic” (black), on the other. On top of that, I’ve plotted LOWESS or “locally weighted scatter plot smoothing” lines. This technique, in effect, “fits” regression lines to tiny subsegments of the data rather than to all of it at once.

It can’t be relied on to reveal trustworthy relationships in the data because it is a recipe for “overfitting,” i.e., treating “noisy” observations as if they were informative ones.  But it is a very nice device for enabling us to see what the data look like.  If the impact of motivated reasoning is asymmetric—if it increases as subjects become more hierarchical and individualistic—we ought to be able to see something like that in the raw data, which the LOWESS lines are now affording us an even clearer view of.
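To make the idea concrete, here is a minimal numpy sketch of locally weighted regression with tricube weights. It is not the exact routine used for the figure, just an illustration of the technique: for each point, fit a weighted straight line to its nearest neighbors, with weight falling off with distance (the `frac` parameter controls how much of the data each local fit uses).

```python
import numpy as np

def lowess(x, y, frac=0.5):
    """Minimal LOWESS sketch: a tricube-weighted linear fit in a sliding
    neighborhood around each point. Returns smoothed values at each x[i]."""
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))     # neighborhood size
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]            # k nearest neighbors
        dmax = d[idx].max()
        w = (1.0 - (d[idx] / dmax) ** 3) ** 3   # tricube weights
        # weighted least squares for a local line y = a + b*x
        sw = np.sqrt(w)
        X = np.column_stack([np.ones(k), x[idx]])
        a, b = np.linalg.lstsq(X * sw[:, None], y[idx] * sw, rcond=None)[0]
        fitted[i] = a + b * x[i]
    return fitted

# Sanity check: on perfectly linear data the local fits reproduce the line.
x = np.linspace(0.0, 10.0, 50)
y_line = 2.0 * x + 1.0
smooth = lowess(x, y_line, frac=0.4)
```

Because each fit uses only a small slice of the data, the resulting curve bends wherever the local trend bends—which is exactly why it is good for "seeing" structure and bad for drawing inferences.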

I see two intriguing things.  One is evidence that hierarch individualists are indeed more influenced—more inclined to form identity-congruent risk perceptions—as science comprehension increases: the difference between “low” science comprehension HIs and “high” ones is about 4 units on the 11-point risk-perception scale; the difference between “low” and “high” ECs is less than 2.

However, the impact of science comprehension is bigger for ECs than HIs at the highest levels of science comprehension. The curve slopes down but flattens out for HIs near the far right. For ECs, the effect of increased science comprehension is pretty negligible until one gets to the far right—the highest score on science comprehension—at which point it suddenly juts up.

If we can believe our eyes here, we have a sort of mixed verdict.  Overall, HIs are more likely to form identity-congruent risk perceptions as they become more science comprehending; but ECs with the very highest level of science comprehension are considerably more likely to exhibit this feature of motivated reasoning than the most science comprehending HIs.

To see if we should believe what the “raw data” could be seen to be telling us, I fit two regression models to the data. One assumed the impact of science comprehension on the tendency to form identity-congruent risk perceptions was linear or uniform across the range of the hierarchy and individualist worldview dimensions.  The other assumed that it was “curvilinear”: essentially, I added terms to the model so that it now reflected a quadratic regression equation. Comparing the “fit” of these two models, I expected, would allow me to determine which of the two relationships assumed by the models—linear, or symmetric; or curvilinear, asymmetric—was more likely true.

Result: The more complicated polynomial regression did fit better—had a slightly higher R²—than the linear one. The difference was only “marginally” significant (p = 0.06). But there’s no reason to stack the deck in favor of the hypothesis that the linear model fits better; if I started off with the assumption that the two hypotheses were equally likely, I’d actually be much more likely to be making a mistake to infer that the polynomial model doesn’t fit better than I would be to infer that it does when p = 0.06!
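The logic of the comparison can be illustrated with a toy example (synthetic data, not the study data): fit a linear and a quadratic model to the same observations, compare R², and run the incremental F-test for the added quadratic term. The linear model is nested inside the quadratic one, so the F-test asks whether the extra curvature earns its keep.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200)
# Simulated "curvilinear" truth: linear trend plus a genuine quadratic bend.
y = 0.5 * x + 0.8 * x**2 + rng.normal(0.0, 0.5, x.size)

def r2(y, yhat):
    """Proportion of variance explained by the fitted values."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

lin = np.polyval(np.polyfit(x, y, 1), x)    # linear ("symmetric") model
quad = np.polyval(np.polyfit(x, y, 2), x)   # quadratic ("asymmetric") model

# Incremental F-test: one extra parameter, n - 3 residual df for the quadratic.
rss_lin, rss_quad = np.sum((y - lin) ** 2), np.sum((y - quad) ** 2)
F = (rss_lin - rss_quad) / (rss_quad / (x.size - 3))
```

With curvature built into the simulated data, the quadratic model's R² is higher and F is large; when the truth is linear, the quadratic model's R² improvement is trivial and F hovers near 1. That asymmetric behavior is what makes the comparison informative about the competing hypotheses.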

In addition, the model corroborates the nuanced story of the LOWESS-enhanced picture of the raw data.  It’s hard to tell this just from scrutinizing the coefficients of the regression output, so I’ve graphed the fitted values of the model (the predicted risk perceptions for study subjects) and fit “smoothed” lines to them, along with gray zones corresponding to the 0.95 confidence interval.  You can see that the decrease in risk perception for HIs is more or less uniform as science comprehension increases, whereas for ECs it is flat but starts to bow upward toward the extreme upper bound of science comprehension. In other words, HIs show more “motivated reasoning” conditional on science comprehension overall; but ECs who comprehend science the “best” are most likely to display this effect.

What to make of this? 

Well, not that much in my view!  As I said, it is a “split” verdict: the “asymmetric” relationship between science comprehension and the disposition to form identity-congruent risk perceptions suggests that each side is engaged in “more” motivated reasoning as science comprehension increases in one sense or another.

In addition, one’s interpretation of the output is hard to disentangle from one’s view about what the “truth of the matter” is on climate change.  If one assumes that climate change risk perceptions are lower than they should be at the sample mean, then HIs are unambiguously engaged in motivated reasoning conditional on science comprehension, whereas ECs are simply adjusting toward a more “correct” view at the upper range.  In contrast, if one believed that climate change risks are generally overstated, then one could see the results as corroborating that HIs are forming a “more accurate” view as they become more science comprehending, whereas ECs do not and in fact become more likely to latch onto the overstated view as they become most science comprehending.

I think I’m not inclined to revise upward or downward my assessment of the (low) likelihood of the asymmetry thesis on the basis of these data. I’m inclined to say we should continue investigating, and focus on designs (experimental ones, in particular) that are more specifically geared to generating clear evidence one way or the other.

But maybe others will have still other things to say.



Reader Comments (7)

Dan... went back to review your Nature Climate Change paper again to get a better feel for the underlying data you are using for these last posts. The questions and answers left me scratching my head.

On pg 4, did only 12% get the correct answer for the cost of the bat and ball? Really?

For the science questions on page 5, the questions seemed way too lightweight to get a wide variation. Finishing this test with a passing grade just means you are alive and have had a minimum education.

Combining a lightweight science-literacy measure with a seemingly reasonable Numeracy measure seems to give an exaggerated overall score. Would this matter?

The one item that jumped out at me is that the slope is about the same for the HIs between CC and nuclear, whereas the slope makes a major change for the ECs between these. I find this difference interesting.

Anyway...good post with lots to think about.

April 1, 2013 | Unregistered CommenterEd Forbes

@Ed: Thanks.

Yes, bat & ball fools a lot of people. It is part of the Cognitive Reflection Test (CRT) subcomponent. CRT tries to measure how likely people are to catch themselves when there is an intuitive but incorrect answer to a problem that should be solved w/ logical or analytical thought. It is a good predictor of vulnerability to cognitive biases of various sorts. Fifth-grade algebra is enough for bat & ball, but most people never even pause to think why the intuitive response isn't right. In the Ideology & Cognitive Reflection experiment, only 13% of a national sample got it right.

I agree the science literacy battery isn't great. It comes from the National Science Foundation & is standard for research in this area, but I hope it gets overhauled. We combined that measure w/ the Numeracy scale; the two did cohere (scale) nicely & didn't "like" education level as much as they liked each other, which suggests the resulting "science literacy/numeracy" measure was getting at something distinct from how educated subjects were—something more science- or quantitative-reasoning related. If we hadn't had the Numeracy scale in the study, we'd have gotten the same result, but I myself would have found the result less informative. Likely the Numeracy scale did contribute *more* to the variance measured by the aggregated science literacy/numeracy scale, but science literacy still helped to generate more variance overall, so it was a net positive.
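For readers curious what "cohering (scaling) nicely" means operationally: item coherence is conventionally summarized with a reliability coefficient such as Cronbach's α. The following is a minimal illustrative sketch, not the scaling analysis reported in the paper:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix.

    Alpha near 1 means the items move together (they "scale");
    alpha near 0 means they measure unrelated things.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of summed score
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)
```

When two batteries cohere with each other better than either does with a third variable (here, education level), that is evidence they tap a common underlying disposition rather than just schooling.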

You are right that ECs, as they become more numerate, become less concerned about nuclear -- but not nearly to the same extent as HIs, so w/ nuclear risks too cultural polarization is larger among those w/ the highest degrees of "science comprehension." You are right, too, that the difference between EC & HI that I featured for climate should be evaluated in light of the nuclear power finding.

April 1, 2013 | Registered CommenterDan Kahan

"..seeing how the “white male effect” is ..."

Is there a particular word for this new label? Referring to someone as white is a racist term, referring to them as male is sexist, so this term is 'racist-sexist'. Doesn't really roll off the tongue easily.

Can anyone here come up with a cool term to refer to this form of discriminatory labeling?

Racist-sexism just doesn't flow. Need some help with this one.


April 2, 2013 | Unregistered Commenterklem

@Klem: Likely "bigot" would work, except I have to admit, I don't find the "white male effect" to be offensive. True, I used the term; but I'm also a white male.
But if one decided that, for whatever reason, one shouldn't say "white male effect," what do you think would be a useful alternative label? There is a phenomenon, it warrants study, and therefore it needs the convenience of a common term to facilitate discussion.

April 2, 2013 | Registered CommenterDan Kahan

I don't think you should worry about it. The key in this sort of study is just to make sure you don't find anything that could be construed as superior in such an "effect".

April 2, 2013 | Unregistered CommenterLarry

Thanks for the article. Forgive me if I'm missing something but isn't there a problem that there is an imbalance on the Y axis, in that there may in fact be a genuine risk in Climate Change? Let's say we ran this same study against perceptions of risks for heavy smoking. If we got exactly the same result, would we conclude that both types of people were engaging in near equivalent motivated reasoning?
We know that education is not protective against motivated reasoning and can indeed make it easier to convince oneself of one's conclusions - but when there is good evidence for a proposition, such as the risk of heavy smoking, then one group is engaging in motivated reasoning and the other is understanding the evidence.
So perhaps the highly science literate ECs can engage with the complex climate change literature, and are thus even more convinced than baseline for the EC archetype, whereas the literate HIs use their literacy to better achieve motivated cognitive goals?

November 1, 2013 | Unregistered CommenterMark

I was going to post a new question, but it turns out it's an old question... The same thing struck me as struck Mark (directly above me).

I am fairly scientifically literate (final year in a BSc., Psychology), and I've read a lot of the information, and watched a number of documentaries, both for and against. Not only is the information for Anthropogenic climate change highly credible, it makes more sense to act even if it's not our fault, as we need to be prepared.

Thanks in advance


February 6, 2015 | Unregistered CommenterAlan Duval
