Wednesday, December 12, 2012

Query on climate change, culture, ideology & identity-protective cognition

A thoughtful person asks:

I’ve come across your work while trying to make sense of climate change denial. I find your analysis very interesting (along with Flynn et al. 1994, Finucane et al. 2007, and McCright and Dunlap 2011) because it offers a compelling explanation for what seems like a curious social dynamic.

However, there’s something I don’t quite follow, and with your kind forbearance I hope I may ask you a question. Your 2007 Journal of Empirical Legal Studies paper sketches out a synthesis of cultural risk perception and identity-protective cognition. From that I was expecting to see how group identity (conservative, Republican) and world vision (hierarchical, individualistic) somehow mutually reinforced each other in the climate change arena. In fact, though, that seems not to be the case. Instead it is the hierarchical and individualistic world vision itself that cognition seeks to protect. Indeed, your regression 4 in Table 2 (p. 483) if anything seems inconsistent with my expectation, since both "Conservative" and "Democrat" are highly significant, whereas if these substantially overlapped with the hierarchical white male dummy (so to speak), the coefficients would have been insignificant. Is your view that Conservative and Democrat are independent (from the white male effect) determinants of views on climate change? The narrative in McCright and Dunlap directly linking climate change skepticism to conservatism appeals to me, but I'm not sure if it's consistent with your own perceptions and/or findings.

Thanks very much, and thanks for your very interesting paper.

This is my response. Anyone want to add anything? 

I believe that people have unobservable latent predispositions that they acquire as a result of one or another social influence. The thing to do is find observable indicators that one has good reason to believe correlate with those dispositions, combine them into reliable scales, and use those measures to test hypotheses about who sees what & why, & about what sorts of communication strategies are geared to promoting open-minded engagement with information by people of diverse predispositions.  "Republican, Democrat, liberal, conservative, hierarch, egalitarian, individualist," etc are all candidate indicators. Which ones to combine to form scales depends on which latent-variable measurement strategy most instructively enables explanation, prediction & prescription.  
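
To make the scale-building step concrete, here is a minimal sketch in Python -- with simulated data and invented item names, not any actual CCP items -- of how several observable indicators might be z-scored, checked for reliability with Cronbach's alpha, and averaged into a single measure of a latent disposition:

```python
# A minimal sketch, with simulated data and hypothetical item names, of combining
# survey indicators into a single latent-disposition scale as described above.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)  # the unobserved predisposition

# Three hypothetical 7-point items assumed to be noisy indicators of the disposition
items = pd.DataFrame({
    f"item{i}": np.clip(np.round(4 + 1.5 * latent + rng.normal(size=n)), 1, 7)
    for i in range(1, 4)
})

def cronbach_alpha(df):
    """Cronbach's alpha for a set of items scored in the same direction."""
    k = df.shape[1]
    return (k / (k - 1)) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

print(f"alpha = {cronbach_alpha(items):.2f}")  # check reliability before combining

# If reliability is adequate, average the z-scored items into one scale;
# 'scale' is the measure you would then relate to risk perceptions.
scale = ((items - items.mean()) / items.std(ddof=1)).mean(axis=1)
```

Which items to include, and whether they hang together well enough, is exactly the measurement-strategy question referred to above.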

For more info, click on links below; & let me know if you have additional questions or if you have reactions, comments etc.


Reader Comments (10)

I don't think we've got to the point of understanding mechanisms and explanations. We're still in the stage of looking for interesting correlations to try to narrow down where to look. I agree based on these results that cultural worldview probably has something to do with it, but I'm not yet convinced of the 'identity protective' mechanism - even though I'm reversing focus, and trying to understand why some people come to believe in global warming catastrophe without evidence.

"I’ve come across your work while trying to make sense of climate change denial."

Have you tried talking to sceptics, and asking them?

If you're polite and don't call them 'deniers' they'll generally be happy to explain their reasoning.

"Instead it is the hierarchical and individualistic world vision itself that cognition seeks to protect."

Not really. All we can say is that the cultural axes are a better proxy for the real mechanisms than the political axes, probably because it's to do with the prevalence of different styles of reasoning/mechanisms of belief formation.

Incidentally, don't keep thinking of this as a one-sided effect. The egalitarian-communitarian world vision is also identity-protective, and can just as easily be offered as an explanation for their belief.

"The narrative in McCright and Dunlap directly linking climate change skepticism to conservatism appeals to me"

The narrative in McCright and Dunlap has all sorts of problems, primary among them the lack of any tight reasoning linking the correlations they observed with their interpretations and speculations. It seemed more like an exercise in politically-motivated wishful thinking. I can give a more detailed critique if you like, but it would take me some time.

McCright and Dunlap do however ask a very good set of questions for measuring belief - much better than the generic 'Do you believe in global warming?' one. I'd recommend adopting (and extending) them.

However, the most critical missing piece in all of these studies is asking why people believe as they do, not just what they believe. It's not the whole story - people are not generally aware of their own cognitive biases - but classifying the arguments, justifications, reasoning methods, logical fallacies, and so on may give further clues as to the actual mechanism, and the correlation between mental methods and cultural worldview.

Because setting aside a relatively small number of scientific/technical types on both sides, the vast majority of people holding opinions do not understand climate science well enough to come to an informed scientific judgement. You 'believe' in climate change, but you don't know in detail what the evidence is or how reliable it is, you don't understand the physics of how the greenhouse effect is actually supposed to work let alone how it interacts with feedbacks and other atmospheric dynamics, in many cases you don't even know what the technical argument between sceptical and consensus scientists is about. So how did you come to your conclusion? And how is it you're so certain? And why are you so contemptuous of those out-group people with an identical lack of technical knowledge, who happen to come to a different conclusion? This is clearly the domain of culture, not scientific understanding.

One mechanism that I've already noted, and which may have a connection to the individualist-communitarian axis, is respect for arguments from authority. Individualists do not trust authority, and do not defer to it. And yet a lot of the arguments for belief that I've seen rely on it: the 'argumentum ad verecundiam' respect for scientists, and peer review, and eminent professional bodies, and governments. The very concept of 'following the consensus' is inherently communitarian!

Communitarians simply cannot understand why this argument is not totally convincing, and individualists cannot understand how anybody could be convinced by it. It's one possibility for a mechanism by which cultural worldview can translate into a difference of opinion.

Even if you don't accept that (and I have no more than my own personal experience to support it), finding out whether techniques like appealing to authority (and what sort of authority) do or don't work on various target groups is surely a useful thing to know. Don't you think?

December 13, 2012 | Unregistered CommenterNiV

Prof. Kahan: Thanks very much for your very helpful explanation and the introduction to the CCP blog which I’ll continue to watch.

If I understand correctly, what you’re saying here (and in the links) is that analytically there’s no need to distinguish between latent and observable variables. What matters is explanatory power. That makes sense for a data reduction technique. But it raises another question. Identity-protective cognition seems like a structural concept – a conjecture that people more or less voluntarily process information in such a way as to protect the interests of groups they’re affiliated with. For instance, people who identify with Republicans adopt Rush Limbaugh’s or James Inhofe’s view about climate change because they perceive this to be the Republican worldview. But it’s not clear people are aware of any kinship to others they cluster near in the space of latent variables, in this case those who hold individualistic-hierarchical world views. Yet that seems central to the role you assign to identity-protective cognition in your 2007 paper, e.g., “By supplying a psychological mechanism rooted in individuals' perceptions of their own interests, identity-protective cognition extricates cultural theory from the well-known difficulties that plague functionalist accounts.” (p. 471).

December 13, 2012 | Unregistered CommenterRobert Keyfitz

@Robert & @NiV-- great comments/responses, thanks. I've posted portions of your comments & my responses to them in the "followup" field for the post to try to make it more likely others who would find them of interest will see them & also be motivated to join discussion. By all means feel free to say more.

December 13, 2012 | Registered CommenterDan Kahan

NiV: Many thanks for your observations.

Have I tried talking to skeptics? Indeed I have. What started me on my quest for understanding was a long and increasingly pointless exchange (mostly I was a bystander) between skeptics and believers on a web site I follow. The two sides just shouted past each other, the believers not understanding the psychology of the skeptics any better than the skeptics understood the science of the believers. Here’s what I think they would say… They’d refuse to admit that a scientific consensus exists about global warming; question whether it’s possible to measure global temperature in the first place; cite deliberate falsification of results by the CRU and IPCC to accuse believers of dishonesty and manipulation; point out climate change has occurred since long before the industrial revolution and therefore it has nothing to do with human activity; and argue that the impacts will be relatively small and that technological solutions will be found to mitigate them. Which is to say, they think the whole thing is humbug. It’s a well-crafted view that’s hard to attack in terms they will accept. You’re quite right: the same is true of climate change believers who unite around science, which is equally a social construct. In a sense these are non-nested views of the world which are difficult to discriminate empirically. Indeed, I’m not sure the skeptics’ view has any empirical implications at all in the near term.

Re: McCright and Dunlap. Probably as an economist I’m less bothered by a lack of tight reasoning linking observed correlations with interpretations and speculations. Economists usually start out with a speculation and try to falsify it with correlations. If they fail, voilà....a working hypothesis! I’m pretty sure conservative, white, male Republicans are overrepresented on my web site, and they’re probably encouraged to say what they do by the cover they get from James Inhofe. As I indicated in my post below yours and a question to Prof. Kahan, what confuses me at this point is the role of identity-protective cognition in the story, since individualist-hierarchical doesn’t seem like an identity most people would be aware of let alone want to protect. Other than that, it seems plausible enough that there’s a consensus among skeptics reflecting an inherent acceptance of environmental risks, reinforced by a group (also predominantly white male) of conservative Republicans conveniently supplied with identity-protective talking points by a special interest elite. In any case, that would be consistent with the regression results in the Journal of Empirical Legal Studies article.

December 13, 2012 | Unregistered CommenterRobert Keyfitz

@NiV & @Robert: Likely I am in between you, or off at some angle in relation to both, on correlational studies vs. experiments. I think they are fine but limited. That's what I think of experiments too.

The value of any sort of empirical evidence consists in the strength of the inference it supports. Observational or correlational data will support a causal inference whenever the observed correlation is more consistent with the asserted causal mechanism than w/ some other explanation.

But maybe something else that you didn't observe caused the relationship. So do an experiment.

The experiment supports a causal inference if the manipulation generates the expected outcome; but for it to support an inference that what caused the effect was the mechanism that one hypothesized, the design has to be such that that mechanism is more plausible than alternative ones. But maybe something else actually generated the effect -- no experimental result is ever uniquely attributable to a single mechanism.

So do another experiment; or even another observational study. As results accumulate that are consistent with the hypothesized causal mechanism, the likelihood that "something else" is causing those results gets increasingly small, so long as the designs were good.

What else can you do?

All the studies I cited are of that nature. All of them had in mind "identity-protective cognition would predict this in this experiment -- and it's hard to think of other things that would..."

The usual problem w/ observational studies-- particularly ones that use multivariate regression models with a large number of independent variables all piled onto the right-hand side -- is not that they are correlational. It is that they are undertheorized and thus don't support very strong inferences.

This is particularly likely to happen if one is hypothesizing that a latent variable is "correlated" with and thus likely "causing" some outcome. If multiple indicators are treated as *independent variables,* the coefficients for those variables are *not* a reliable or valid measure of the influence of the latent variable; indeed, the covariance that was *partialed* out was a better measure of the latent variable than whatever is left in the model. If the mechanism involves a latent variable, indicators have to be combined into a valid & reliable latent-variable measure.
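
A toy simulation makes the contrast concrete (this is only an illustration with made-up data, not a re-analysis of any study discussed here): regress a simulated outcome on three noisy indicators of one latent disposition entered separately, and then on their composite.

```python
# Illustrative simulation only: indicators-as-regressors vs. a composite scale.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
latent = rng.normal(size=n)                                   # unobserved disposition
x1, x2, x3 = (latent + rng.normal(size=n) for _ in range(3))  # noisy indicators
y = 0.8 * latent + rng.normal(size=n)                         # outcome driven by the latent variable

def ols_coefs(columns, y):
    """OLS slope coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Entered separately, the indicators' shared variance is partialed out and each
# coefficient understates the latent variable's influence.
print("separate indicators:", ols_coefs([x1, x2, x3], y))

# A composite scale keeps the shared variance, which is the better measure.
print("composite scale:    ", ols_coefs([(x1 + x2 + x3) / 3], y))
```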

In my experience, economists are the social scientists least likely to use "overspecified" regression models.

December 13, 2012 | Registered CommenterDan Kahan

Thanks for the response.

a) This paper fixes one of the issues I had noted, by controlling the information to which the subject is responding. Prior to giving them the balanced information, they didn't know anything about nanotechnology and had similar positions on the risks. (If I'm interpreting the right-hand panel of figure 1 correctly.) Thus, they can only be responding to the information provided, not to anything they may have heard previously (except possibly by analogy or loose association).

That doesn't show that the response in the climate change vs cultural worldview experiment isn't contaminated by previously seen evidence, but it does demonstrate that such an effect exists (bar analogy with similar cases). Given that contamination on such an issue is unavoidable, I think this is probably the best that can be done.

I note that the question asked in this paper asks the subject to weigh risks rather than assess facts. That's something I'd expect to be more affected by cultural values.

I'm also still not sure it really addresses the question of the mechanism. OK, so non-HI people are more technologically risk-averse. Why? How does that relate to their HI-ness? Is it contingent - could the cultural stereotypes be the other way round; it was just a historical accident that they're this way? Or will HI people always be less risk-averse?


b) This paper appeared to be paywalled, and I could only see the first couple of pages. So I won't comment on it further.

c) Again, with the experimental intervention, it seemed unclear to me whether people assessed an expert who agreed with them as more reliable because he confirmed their worldview, because they believe most experts hold views consonant with their own, or because he made statements they 'knew' to be accurate, based on their own beliefs. Or possibly even because they used styles of argument that fitted better with their own ways of thinking.

Although looking at the six book extracts in figure 2, I doubt that last. The first pair put an argument from authority up against citation of data, the second pair put an appeal to uncertainty against a single example. The third pair put a causal argument, a reference to the conclusions of data without the data, and a cost estimate up against one another. The first one, maybe. (Although neither argument seemed very expert to me.) But the other two are either comparably weak arguments or made in the same style.

It's also unfortunate that the questions were ambiguous.
"Global temperatures are increasing" - does that mean over the last 10 years or the last 40 years? Or the last few months, or the last thousand years?
"Human activity is causing global warming" - Does that mean human activity is contributing some global warming, or human activity is causing all (or most) of it? And doesn't this depend on the answer to the first question?

It would also have been interesting to know if subjects really knew how experts were divided (e.g. on climate change, surveys put it at about 85% to 10% for and against the consensus on human causes) or whether they were going by gut feel.

--

We have noted some common themes. The response of the Hierarchical-Individualists to nanotechnology was predictable, even though the subjects didn't initially know anything about nanotechnology, and hence couldn't know what the position of the group with which they identify was on it. That suggests either that it's not a response to a perceived (and arbitrary) cultural identity, or that the shared cultural position is something more abstract, like technological development is beneficial versus technological exploitation is risky. And I'm not convinced of the latter - Egalitarians and Communitarians are as keen on their iPhones as the rest of us! The cultural response is predictable, systematic, and only triggered in some situations and not others. There's something more to it - some underlying mechanism, besides fitting in.

We might know where to start looking, but I'm still not sure we know what we're looking for.

But this conversation has been fascinating. Please do keep trying to convince me.

--

"All of them had in mind "identity-proective cognition would predict this in this experiment -- and it's hard to think of other things that would...""

Indeed. What I was trying to do was think of other things that would. I'd like to think that was a useful thing to do. :-)

December 13, 2012 | Unregistered CommenterNiV

Robert,

That seems a reasonable rendition of the sceptical position. There's one tweak I'd make: climate change occurring long before the industrial revolution doesn't show it has nothing to do with human activity; it only shows it might not. But what I was thinking of was their reasons for thinking all that. Did they cite identifiable evidence for it, or did they cite elements of their cultural identity, like the positions of their political leaders or selected experts, or what their friends said?

Regarding experiment - physicists start out with a speculation, and then try to systematically identify everything else it could be. They then generate predictions from all the alternatives, and identify those for which the hypotheses give different results. They also try to identify 'surprising' or 'unlikely' consequences of the speculation, that could give a good chance of falsifying it. Experiments are performed, and the hypotheses making the wrong prediction are eliminated. If you have eliminated every other contender, in circumstances where one would reasonably expect good contenders to have arisen, you can take it as tentatively confirmed.

On the other hand, if there are a set of alternatives making the same predictions, none of which are eliminated by experiment, all of them will be retained. And if there is reason to think there might be alternatives that nobody has thought of, because the system is difficult and complicated and poorly understood with lots of places for such hypotheses to hide, then again no decision will be made.

There are also more aesthetic considerations. Does the hypothesis have more explanatory power? Is it more widely applicable? Does it unify different phenomena? Is it mathematically elegant? Is it simpler?

We also need to know the limitations and uncertainties. How accurate are its predictions? How widely has it been tested? What did we have to assume in the process? How sure are we of our measurements? If you apply it to extreme situations, are the predictions sensible?

That doesn't always work out in physics, either. For example, in the early 1930s a young Indian student called Subrahmanyan Chandrasekhar used the new relativistic physics of stellar structure to show that sufficiently massive stars must collapse without limit -- what we would now call black holes. This result was regarded as so ridiculous that the astrophysics Establishment, led by Eddington, denounced it as "absurd", Einstein himself wrote a paper purporting to prove such objects impossible, and for a couple of decades the consensus was that Chandrasekhar's "black hole" theory was loopy. Several very prominent physicists (reportedly) privately agreed with Chandrasekhar, but wouldn't speak against the scientific establishment publicly. It was only when Eddington and the rest of the old guard retired that physics was able to move on.

This sort of thing happens every now and then.

December 13, 2012 | Unregistered CommenterNiV

NiV: if behind the paywall was our study of credibility & HPV vaccine, then this is a very near-final version of the working paper. I strongly suspect, too, that if you do a Google Scholar search for this study you’ll discover someone has uploaded the published version – outrageous!

December 13, 2012 | Registered CommenterDan Kahan

Thanks. That's useful.

I haven't read all the way through it yet. I have some other things I need to go do, so I'm not sure when I'll get back to it. But I did have a few thoughts on a partial reading, bearing in mind that I might change my mind when I've read the whole paper. This is all a bit tentative.

I was looking at table 2, comparing the effects for each cultural quadrant with no argument and with unattributed arguments.

The first thing I noticed was that when arguments were introduced all four numbers went up, which I presume means everyone thought vaccination was riskier as a result of being given more information. That suggests the arguments were not 'balanced' in the sense of carrying equal persuasive weight. I'm slightly bothered that an accidental choice of good or bad arguments here could bias the results in unanticipated ways.

The second thing I noticed was that with no argument provided, we had differences on both the IC and HE axes, the HC corner being top, HI second, and the EC and EI corners bottom equal. Only hierarchists care, and only communitarian hierarchists care a lot. (Why?) After introducing the arguments, we only had variation on the HE axis, the difference between HC and HI disappearing, and the lack of difference between EC and EI remaining. So we have raised the perception of risks and aligned them along the HE axis. I wondered if there was anything about the arguments that would emphasise hierarchist concerns.

So going back to figure 1, I had another look at the arguments with this in mind. The argument for vaccination starts off talking about a plague of sexually transmitted disease, gives an argument explaining why vaccination has to be administered early but not actually arguing for vaccination, an appeal to the authority of the FDA asserting they're safe, and an argument countering the claim that it will encourage promiscuity. The first of these will activate hierarchist concerns, the second will be seen as irrelevant, the third depends on the authority of the FDA, and only the fourth addresses the point. All of these arguments are along the HE axis, there is no explicit argument on personal liberty, or cost-effectiveness, or harm to others. (The argument for making vaccination mandatory is that you need a certain proportion of the population vaccinated to prevent epidemics; not vaccinating puts others at risk, not just yourself. This is Mill's Harm Principle.)

The arguments against counter each of the arguments for. The first says HPV is not a major concern, and reveals the first argument for to be a misleading use of statistics. The second notes the limited effect - I'm not sure how that's an effective argument. The third addresses the fourth of the arguments for, and while not entirely logically sound, or a valid counter, has the effect of asserting its conclusion, raising hierarchist concerns. And the fourth addresses the third of those for, reminding the reader that the FDA is not an entirely reliable authority. Given that no statistics are offered in the FDA's support, that's likely to have an effect.

So I can see why the effects are as they are. The arguments against are perceived as better than those for, and all quadrants agree, even those who are naturally not inclined that way. This effect is not necessarily strictly additive. And more importantly here, all the arguments address the hierarchist concerns (or generic ones, like safety), and so people considering these arguments will be thinking about where they are on the hierarchist-egalitarian axis, and will forget about other considerations.

The hypothesis predicts a strong polarisation increase along the HI-EC diagonal, without a corresponding strong polarisation increase along the opposite HC-EI diagonal, and that's what we get. But the latter was already strongly polarised, because of hierarchist concerns about promiscuity not mitigated by individualist concerns about liberty. Polarisation increases along the other axis because the pro/anti hierarchist arguments remind the conflicted HIs that they are hierarchists.

Thus, the observation could be explained by a salience effect rather than a biased assimilation effect. By concentrating attention on one particular aspect of the debate, it removed the influence of counterbalancing aspects.

It might help to discuss explicitly the ranges of alternative hypotheses excluded. Even systematising and setting out the possible alternatives might help - if that's at all possible. I'm not sure that it is, at this stage of the game.

I don't know. I've only skimmed part of the paper and considered it for a couple of hours. But it seems to me there's a risk of 'affirming the consequent' in making predictions about particular statistics when it isn't made clear why only that hypothesis can make that prediction. Did you ever read Feynman's anecdote about Mr Young's experiments on rats in mazes? You ideally need an "A-number-one" experiment in this sense to isolate exactly what you want to measure, in a controllable way. It won't tell you anything about people, but it will tell you what you need to do to find out about people.

It's extremely difficult to do, and I sympathise. Understanding humans scientifically is an immense challenge, and I'm always impressed that people are brave enough to take it on. Physicists like Feynman have it easy.

December 14, 2012 | Unregistered CommenterNiV

@NiV:

1. whenever people start to process information about risk -- even information about lowering risks! -- their concern about risk goes up. This is a pretty well-established dynamic.

2. In our study, we were looking at individual differences: how would people w/ different worldviews react to the same information, or to the same information when the sources were manipulated. The hypotheses about differences are unaffected by the "main effect" of everyone being more concerned when they see information.

3. Yes, I've read Feynman on rats. As always, we formed hypotheses in advance; we didn't make a bunch of measurements and then tell a story about them. The nature of the hypotheses is set forth in the paper.

4. The hypotheses reflect a theory that has informed additional, related hypotheses that we've tested in relation to other issues & w/ other designs. So sure, there are likely to be counter-explanations for the results we got in this study; this is always always always true, of any experiment. That’s why it makes sense to do lots of related studies.

5. As for "one diagonal"-- some risks lie along one diagonal & some another. One needs a theoretical explanation about why this is so. We do supply one -- in HPV paper & elsewhere. We've done studies, too, where we hypothesized and observed polarization primarily along the HC/EI diagonal.

December 15, 2012 | Unregistered Commenterdmk38
