Is teen pregnancy a greater societal risk than climate change?! Cross-cultural cultural cognition part 2
This is the second in a series of posts on cross-cultural cultural cognition (C4).
C4 involves the application of cultural cognition to non-US samples. In the first post, I addressed certain conceptual and theoretical issues relating to C4. Now I’ll present some actual data.
I had thought I’d do both the UK and Australia in one post, but now it seems to me more realistic to break them up. So let’s make this at least a three-part series—with the UK and Australia data presented in sequence.
Maybe we’ll even make it four, since there’s also been some Canadian research. I didn’t participate in it to any significant extent, but it is really cool & of course pertinent to the topic.
Part 2. UK
As I explained last time, C4 hypothesizes that the motivating dispositions associated with Mary Douglas’s group-grid framework—“hierarchy-egalitarianism”(HE) and “individualism-communitarianism” (IC)—generalize across societies but expects the latent-variable indicators of those dispositions to be society specific. C4 also anticipates that the mapping of risk perceptions on to the group-grid dispositions will vary across societies.
Accordingly, for both the UK and Australia, I’ll start with a summary of the data on the indicators and then turn to risk perception findings.
A. Adapting and validating the worldview scales

In cultural cognition research, HE and IC are conceptualized as latent variables. They are measured with scales constructed by aggregating responses to attitudinal items, which thus serve as the observable latent-variable indicators.
Our goal in this work—which I conducted with Hank Jenkins-Smith, Tor Tarantola, & Carol Silva in the spring & summer of 2011—was to adapt to the UK the six-item “short form” versions of the HE and IC scales that we’ve used in studies of US samples. Successful “adaptation” means the construction of reliable scales that we have reason to believe measure the same dispositions in the UK subjects as they do in the US ones.
Reliability refers to those properties of the scale that furnish reason to believe that the items that it comprises are actually measuring some common, latent disposition. A common test of reliability is “Cronbach’s α,” which is based on inter-item correlation. A score of 0.70 or above (the top score is 1.0) is generally considered adequate.
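For concreteness, here's a minimal sketch (in Python, on simulated responses rather than the study's actual data) of how Cronbach's α is computed from a respondents-by-items response matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Toy illustration: six items all driven by one shared disposition.
# (Simulated continuous data; the real items are Likert responses.)
rng = np.random.default_rng(1)
latent = rng.normal(size=500)
responses = latent[:, None] + rng.normal(scale=0.7, size=(500, 6))
print(round(cronbach_alpha(responses), 2))  # comfortably above the 0.70 benchmark
```

Because the six simulated items share a strong common signal, α lands well above the conventional 0.70 threshold; weaken the shared signal and α falls accordingly.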
Factor analysis is another test. There are various forms of factor analysis, but the basic idea is to determine whether the covariance patterns in the response data are consistent with the existence of the hypothesized latent variables. Because the twelve worldview items are hypothesized to measure two discrete latent dispositions, we expect variance in responses to be accounted for by two orthogonal “factors,” onto which the HE and IC item sets appropriately “load” (correlate, essentially; factor “loadings” are typically regression coefficients).
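A bare-bones illustration of the idea (a principal-component sketch on simulated data; the loading strengths are invented, and this is not the study's actual estimation procedure or items):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1500  # matches the per-country sample size

# Simulate two uncorrelated dispositions with six indicator items apiece.
he, ic = rng.normal(size=n), rng.normal(size=n)
items = np.empty((n, 12))
items[:, :6] = 0.85 * he[:, None] + 0.55 * rng.normal(size=(n, 6))  # "HE" items
items[:, 6:] = 0.70 * ic[:, None] + 0.71 * rng.normal(size=(n, 6))  # "IC" items

# Extract the two largest components of the item correlation matrix.
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(items, rowvar=False))
top2 = np.argsort(eigvals)[::-1][:2]
loadings = eigvecs[:, top2] * np.sqrt(eigvals[top2])

# Each item should load predominantly on exactly one component, with the
# HE and IC sets splitting cleanly between the two.
dominant = np.argmax(np.abs(loadings), axis=1)
print(dominant)
```

If the two-disposition hypothesis is right, the first six items share one dominant component and the last six share the other, which is the "loading pattern" the figure below depicts for the real items.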
Following an initial pretesting phase in which Tor did most of the heavy lifting (using his own best judgment to start, then soliciting responses from other researchers, and from pretest subjects—a form of “cognitive testing”), we felt confident enough in our UK versions of HE and IC to conduct a large general population survey. The sample consisted of 3000 individuals—1500 from England and 1500 from the US. The subjects were recruited by YouGov/Polimetrix, a leading public opinion survey firm, which administered the appropriate version (UK or US) of the survey to the subjects via the internet.
The results of these tests for both the US and the UK samples are reflected in this figure:
It shows, in effect, that for both samples the items “loaded” in patterns that suggested the expected relationship between the HE and IC sets and two latent dispositions. The Cronbach’s α’s for each set were also greater than 0.70 for both samples. These results furnish solid ground for concluding that the UK scales, like the US ones, are reliably measuring discrete dispositional tendencies, which manifest themselves in opposing patterns of survey-item responses. (Actually, the UK versions of the scales behave a bit better here than the US versions, which are displaying a bit more attraction to each other than they usually do!)
As I said, we also want to be confident that the dispositional tendencies being measured in the UK subjects by the UK versions of HE and IC are the same as the dispositional tendencies being measured in the US subjects by the corresponding US scales. This is the cross-cultural analog to scale validity, which refers to the correspondence between what a reliable scale is actually measuring and the phenomenon it is supposed to be measuring.
A common strategy for cross-culturally validating scales is to compare the factor or component structures across samples. By design, each HE and IC item in the US set is matched with a corresponding HE and IC item in the UK set. The coefficient of congruence measures the similarity of the loadings of the various items on the extracted factor or component scores; a high coefficient signifies that the “factor structure” is sample “invariant”—i.e., that the relationship between the respective sets of items and the latent variable they are deemed to be measuring does not vary across the samples. The likelihood that they would just happen to exhibit this sort of structural similarity if the corresponding sets of items were not measuring the same latent variable is considered remote.
There is conventionally deemed to be sufficient ground for treating scales as measuring the same dispositions across distinct national samples when the coefficient of congruence is greater than 0.90. The coefficients of congruence for the US and UK versions of HE and IC were 0.99 and 0.94, respectively.
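The coefficient itself is just a normalized dot product of the matched loading vectors. A quick sketch (the loading numbers below are made up for illustration; they are not the study's actual loadings):

```python
import numpy as np

def congruence(x, y):
    """Tucker's coefficient of congruence between two loading vectors."""
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Hypothetical loadings for six matched US and UK items.
us = np.array([0.72, 0.68, 0.75, 0.70, 0.66, 0.74])
uk = np.array([0.69, 0.71, 0.70, 0.65, 0.62, 0.73])
print(round(congruence(us, uk), 2))
```

Unlike a correlation, the coefficient is computed on the raw loadings without centering, so it rewards agreement in sign and magnitude, not just in pattern.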
B. Comparative culture-risk mappings
Now the really fun stuff. What can we learn—if anything!—from comparing risk perceptions in the US & UK samples?
In the study, we solicited responses to 24 putative risk sources using the “industrial strength risk measure.” In this figure, I’ve plotted out the mean ratings for each sample separately:
The respective samples’ rankings are not wildly out of synch, but there are definitely some interesting differences. People in the UK, e.g., are much more concerned about guns than are people in the US. People in the UK also appear more uptight about marijuana (surprising to me, but what do I know?) and more alarmed about immigration (huh! but I actually had an inkling of that). They’re less concerned about “tea party” sorts of risks (let’s call them), ones associated with excessive regulation and government spending, but not by that much.
Similarities are interesting, too. Both countries are terrified of illegal drug trafficking—lame!
Both freaked out about terrorism. Of course.
Neither is very worked up about global warming. Second-hand cigarette smoke is apparently much more of a concern. In the US, climate change is viewed as posing a lesser danger to society than teen pregnancy!
And look at childhood vaccinations: That concerns the members of both national samples the least—by far. One has to wonder whether the “vaccine hesitancy” scare is a bit trumped up….
But much much more interesting is this:
This figure shows how much cultural variance there is in each society, and how it differs across the two.
The graphs are beautifully noisy! That’s the first thing worth noting: it shows that looking at sample-wide means for risks (individual ones of which are arrayed in the same order as in the last figure, in ascending order of overall concern in the US) grossly understates how much systematic division there is within each society!
Climate change generates lots of division in both. Moreover, the character of the division is similar: hierarchical individualists and egalitarian communitarians are the most divided, with hierarchical communitarians and egalitarian individualists in between, divided too but less so.
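The quadrant comparison underlying the figure boils down to a simple computation: split respondents at the scale medians and compare group means. A sketch on invented data (the coupling between worldview scores and the risk rating below is hypothetical, chosen only to mimic the observed pattern):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1500

# Hypothetical worldview scores and one risk rating (say, climate change);
# the negative HE and IC coefficients are invented for illustration.
he = rng.normal(size=n)   # hierarchy (+) vs. egalitarianism (-)
ic = rng.normal(size=n)   # individualism (+) vs. communitarianism (-)
risk = 4.0 - 1.2 * he - 0.8 * ic + rng.normal(size=n)

# Median splits define the four cultural quadrants.
quadrant = np.where(he > np.median(he), "H", "E")
quadrant = np.char.add(quadrant, np.where(ic > np.median(ic), "I", "C"))
for q in ("HI", "HC", "EC", "EI"):
    print(q, round(float(risk[quadrant == q].mean()), 2))
```

With coefficients signed this way, egalitarian communitarians ("EC") come out most concerned and hierarchical individualists ("HI") least, with the other two quadrants in between, which is the shape of the climate-change division described above.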
Once one adds culture to the picture, moreover, it becomes clear how misleading it can be to talk about “societal” perceptions of risk on things like climate change and teen pregnancy: the “societal means” conceal widely divergent assessments across cultural groups.
Immigration risks are also divisive in both societies, and terrorism too. The cultural cleavages look comparable.
There’s also more cultural division here than there on “deviancy risks”: US egalitarian individualists pooh-pooh the dangers of marijuana smoking and teenage pregnancy, as hierarchical communitarians quake.
And look again at childhood vaccines: no meaningful cultural division at all in either society. The “vaccine hesitators” might have a shared cultural view of some sort, but it’s much more specialized and boutiquey than any of the ones that figure in the risk conflicts of real consequence in these societies.
Also not a tremendous amount of variation on risks of illegal street drugs. That’s something to worry about, in my view….
There’s more, including the geoengineering experiment results, which I’ve featured in other posts and which are set out more completely in CCP Working Paper No. 92. Suffice it to say that we got results that were very comparable for both samples, as one might expect given the parallel cultural divisions in the two societies.
Last point: There’s plenty of cultural variance in the UK sample, but definitely less than there is in the US. What to make of that?
One possibility: the UK is just less culturally divided than the US. Maybe.
But another possibility is that our scales just aren’t as good at measuring cultural worldviews in the UK and thus aren’t able to discern cultural division there with the same precision as here.
I actually think that’s more likely—or at least a bigger part of the explanation for the differing levels of cultural conflict. After all, our measures were designed—painstakingly; it took quite a while to get scales that worked, and then to figure out how to condense them from 30 items to 12—for the US general public. I think we did a decent enough job for now in getting them to work in the UK (it wasn’t as hard as I expected!), but it would be shocking if we had managed to achieve the same level of measurement fidelity.
But in any case, there’s definitely more work to be done to figure out what’s going on.
References

Caprara, G.V., Barbaranelli, C., Bermúdez, J., Maslach, C. & Ruch, W. Multivariate Methods for the Comparison of Factor Structures in Cross-Cultural Research. J. Cross-Cultural Psychol. 31, 437-464 (2000).
Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).