Monday, September 10, 2012

Culturally polarized Australia: Cross-cultural cultural cognition, Part 3 (and a short diatribe about ugly regression outputs)

In a couple of previous posts (here & here), I have discussed the idea of "cross-cultural cultural cognition" (C4) in general and in connection with data collected in the U.K. in particular. In this one, I'll give a glimpse of some cultural cognition data from Australia.

[Figure: Australia CC scales]

The data come from a survey of a large, diverse general population sample. It was administered by a team of social scientists led by Steven Hatfield-Dodds, a researcher at the Australian National University. I consulted with the Hatfield-Dodds team on the adaptation of the cultural cognition measures for use with Australian survey respondents.

It was a pretty easy job! Although we experimented with versions of various items from the "long form" cultural cognition battery, and with a diverse set of items distinct from those, the best performing set consisted of the two six-item sets that make up the "short form" versions of the CC scales. The items were reworded in a couple of minor ways to conform to Australian idioms.

Scale performance was pretty good. The items loaded appropriately on two distinct factors corresponding to "hierarchy-egalitarianism" and "individualism-communitarianism," and the resulting scales had decent reliability scores. I discussed these elements of scale performance in the first couple of posts in the C4 series.
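For readers who want to run this kind of check on their own data, here's a minimal sketch in Python. The data file and item names (he1-he6 for hierarchy-egalitarianism, ic1-ic6 for individualism-communitarianism) are hypothetical stand-ins, and sklearn's FactorAnalysis is just one of several reasonable tools for the job:

```python
# Sketch: check that the 12 short-form cultural cognition items load on
# two distinct factors. File and column names are hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("australia_cc.csv")  # hypothetical data file
items = [f"he{i}" for i in range(1, 7)] + [f"ic{i}" for i in range(1, 7)]

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(df[items])

loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=["factor1", "factor2"])
# hierarchy items should load mainly on one factor,
# individualism items mainly on the other
print(loadings.round(2))
```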

 

The Hatfield-Dodds team included the CC scales in a wide-ranging survey of beliefs about and attitudes toward various aspects of climate change. Based on the results, I think it's fair to say that Australia is at least as culturally polarized as the U.S.

The complexion of the cultural division is the same there as here. People whose values are more egalitarian and communitarian tend to see the risk of climate change as high, while those whose values are more hierarchical and individualistic see it as low. This figure reflects the size of the difference as measured on a "climate change risk" scale that was formed by aggregating five separate survey items (Cronbach’s α = 0.90):

Looking at individual items helps to illustrate the meaning of this sort of division -- its magnitude, the sorts of issues it comprehends, etc.
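For the curious, the reliability statistic reported above, Cronbach's α, is easy to compute yourself. Here's a minimal sketch in Python, with simulated item responses standing in for the actual survey data (which obviously I can't reproduce here):

```python
# Sketch: Cronbach's alpha for an aggregated multi-item scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)"""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# demo on simulated data: five items all driven by one latent attitude
rng = np.random.default_rng(0)
latent = rng.normal(size=1000)
items = latent[:, None] + rng.normal(scale=0.5, size=(1000, 5))
print(f"alpha = {cronbach_alpha(items):.2f}")  # ~0.95 for items this strongly correlated
```

An α of 0.90 for the five-item climate-risk scale says the items covary strongly -- i.e., they behave like indicators of a single underlying attitude.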

Asked whether they "believe in climate change," e.g., about 50% of the sample said "yes." Sounds like Australians are ambivalent, right? Well, in fact, most of them are pretty sure -- they just aren't, culturally speaking, of one mind. There's about an 80% chance that a "typical" egalitarian communitarian, e.g., will say that climate change is definitely happening; the likelihood that a hierarchical individualist will, in contrast, is closer to 20%.


There's about a 25% chance the hierarchical individualist will instead say, "NO!" in response to this same question. There's only a 1% chance that an egalitarian communitarian in Australia will give that response!

BTW, to formulate these estimates, I fit a multinomial logistic regression model to the responses for the entire sample, and then used the parameter estimates (the logit coefficients and their standard errors) to run Monte Carlo simulations for the indicated "culture types." You can think of the simulation as creating 1,000 "hierarch individualists" and 1,000 "egalitarian communitarians" and asking them what they think. By plotting these simulated values, anyone, literally, can see the estimated means and the precision of those estimates associated with the logit model. No one -- not even someone well versed in statistics -- can see such a result in a bare regression output like this:

Yet this sort of table is exactly the kind of uninformative reporting that most social scientists (particularly economists) use, and use exclusively. There's no friggin' excuse for this, either, given that public-spirited stats geniuses like Gary King have not only been lambasting this practice for years but also producing free, high-quality software like Clarify, which is what I used to run the Monte Carlo simulations here. (The graphic reporting technique I used -- plotting the density distributions of the simulated values to illustrate the size and precision of contrasting estimates -- is something I learned from King's work too.)
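If you don't run Stata, the same simulate-and-plot move is easy to replicate. Here's a minimal sketch in Python -- using a simple binary logit for the "climate change is definitely happening" response rather than the multinomial model I actually fit, and with a hypothetical data file and variable names:

```python
# Sketch: King/Clarify-style Monte Carlo simulation of predicted
# probabilities from a fitted logit model. Names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

df = pd.read_csv("australia_cc.csv")        # hypothetical data file
X = sm.add_constant(df[["hier", "indiv"]])  # z-scored culture scales
fit = sm.Logit(df["believe"], X).fit()      # 1 = "definitely happening"

# draw 1,000 coefficient vectors from the estimated sampling distribution
rng = np.random.default_rng(1)
draws = rng.multivariate_normal(fit.params.values,
                                fit.cov_params().values, size=1000)

def simulate_prob(profile):
    """Simulated P(believe=1) for one covariate profile, one value per draw."""
    x = np.asarray(profile)
    return 1.0 / (1.0 + np.exp(-draws @ x))

hi_sims = simulate_prob([1.0,  1.0,  1.0])  # "hierarch individualist": +1 SD on both scales
ec_sims = simulate_prob([1.0, -1.0, -1.0])  # "egalitarian communitarian": -1 SD on both

for sims, label in [(hi_sims, "hierarch individualist"),
                    (ec_sims, "egalitarian communitarian")]:
    plt.hist(sims, bins=40, density=True, alpha=0.5, label=label)
plt.xlabel('simulated P("climate change is definitely happening")')
plt.legend()
plt.show()
```

A 0.95 interval for either estimate is then just the 2.5th and 97.5th percentiles of the simulated values, e.g. np.percentile(hi_sims, [2.5, 97.5]).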

So don't be awed the next time someone puts a mindless table like this in a paper or on a PowerPoint slide; complain!

Oh .... There are tons of cool things in the Hatfield-Dodds et al. survey, and I'm sure we'll write them all up in the near future. But for now here's one more result from the Australia C4 study:

Around 20% of the survey respondents indicated that climate change was caused either "entirely" or "mainly" by "nature" rather than by "human activity." But the likelihood that a typical hierarchical individualist would take that view was around 48% (+/-, oh, 7% at 0.95 confidence, by the looks of the graphic). There's only about a 5% chance that an egalitarian communitarian would treat humans as an unimportant contributor to climate change.

You might wonder how roughly 50% of the hierarchical individualists one might find in Australia would tell you that "nature" is causing climate change when fewer than 25% are likely to say "yes" if you ask them whether climate change is happening at all.

But you really shouldn't. You see, the answers people give to individual questions on a survey about climate change aren't really answers to those questions. They are just expressions of a global pro-con attitude toward the issue. Psychometrically, the answers are observable "indicators" of a "latent" variable. As I've explained before, in these situations it's useful to ask a bunch of different questions and aggregate them: the resulting scale (which will be one or another way of measuring the covariance of the responses) will be a more reliable (i.e., less noisy) measure of the latent attitude than any one item. Although if you are in a pinch -- and don't want to spend a lot of money or time asking questions -- just one item, the "industrial strength risk perception measure," will work pretty well!
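A toy simulation makes the point: generate a latent attitude, derive a handful of noisy indicators from it, and the aggregated scale tracks the latent variable much better than any single item does. Everything below is simulated, purely for illustration:

```python
# Toy simulation: an aggregated scale is a less noisy measure of a
# latent attitude than any one of its constituent items.
import numpy as np

rng = np.random.default_rng(2)
n, k = 2000, 8
latent = rng.normal(size=n)                                    # the "real" pro/con attitude
items = latent[:, None] + rng.normal(scale=1.5, size=(n, k))   # noisy observable indicators

single_r = np.corrcoef(latent, items[:, 0])[0, 1]
scale_r = np.corrcoef(latent, items.mean(axis=1))[0, 1]
print(f"one item vs. latent attitude:     r = {single_r:.2f}")  # ~0.55
print(f"8-item scale vs. latent attitude: r = {scale_r:.2f}")   # ~0.88
```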

The one thing you shouldn't do, though, is get all excited about responses to specific items or differences among them. Pollsters will do that because they don't really have much of a clue about psychometrics.

Hmmm... maybe I'll do another post on "pollster" fallacies -- and how fixation on particular questions, variations in the responses between them, and fluctuations in them over time mislead people about public opinion on climate change.


Reader Comments (2)

There are a lot of studies out now on the correlation of beliefs about climate with ideology. What isn't covered so well (AFAIK) is people's self-reported *reasons* for their beliefs.
(Although I'd be interested to know if the correlation also extends to exaggerated alarm - i.e. not just 'do you believe the world is warming?' but 'do you believe the warming constitutes imminent catastrophe? Nations sinking into the sea, all the crops stop growing, mass extinctions, wars and plagues, end of humanity, Venus-like runaway boiling oceans, etc.')

There's a lot of work going on trying to deduce it indirectly from survey correlations, but the simplest starting point is surely to ask: do you believe what you believe as a result of individual study of the scientific evidence, listening to experts, newspapers and TV, friends and social contacts, faith in science generally, governments and politicians, the climate blogosphere, the history of apocalyptic predictions, surveys, etc.?

Then get more detail. If they trust experts, which experts do they trust, and on what criteria? If they study the scientific evidence, does that mean just reading the literature, or processing the data yourself? How much climate science do they know? If they trust governments and officialdom, does it vary by party? What sort of social contacts do they discuss it with?

What intellectual/scientific principles do people subscribe to? Is science a search for truth, or a search for new interpretations and ever-more accurate models? Do people accept or reject authority and reputation as a factor? Do they demand transparency and replicability? Do they seek consensus and go with the mainstream, or do they particularly admire scientific revolutionaries and loners? Do they count publications and citations, consider seniority, compare qualifications and awards, judge by past work? Do they value challenge, and devil's advocacy? Do they know about the history of scientific frauds and failures of the past? How tolerant are they of cranks?

Do people misreport their beliefs to fit in with what they think you're asking rather than what you actually asked? To what extent? E.g. a lot of sceptics believe the world is warming, but may report otherwise if they think their answer will be interpreted as 'do you believe in global warming alarm?'

It seems to me that until you understand the mechanisms influencing belief, it's very hard to interpret other correlations. I would agree you can't just accept their answers as the real reason - people rationalise their beliefs - but it gives you hypotheses to check.

Tying belief to political orientation is interesting, but only takes you so far. (And is itself used as ammunition in those ideological battles, which confuses the question. People may seek to influence the political balance through your surveys.) OK, so it's correlated to orientation. But why? We've only just scratched the surface of the most interesting questions.

September 10, 2012 | Unregistered Commenter NiV

@NiV: I don't have the same impression, I guess. Social scientists who do survey analyses of public opinion on climate change ask those questions -- do you trust scientists, where do you get information, etc. -- all the time. But correlating the answers people give to such questions & their opinions also involves inferences that are open to question. Well-designed experiments can help to get at the mechanisms that explain correlations between ideologies or like predispositions & perceptions of risk & related facts, but they too involve judgments & inferences the strength of which depends on theories & information beyond the studies in question. What to say? It's true that some scholars draw weak or ill-supported inferences from correlations, but that's just because some scholars -- like some heart surgeons, heads of state, bus drivers, etc. -- do a bad job.

September 10, 2012 | Unregistered Commenter dmk38
