
Why cultural predispositions matter & how to measure them: a fragment ...

Here's a piece of something I'm working on--the long-promised & coming-soon "vaccine risk-perception report." This section discusses the "cultural predisposition" measurement strategy that I concluded would be most useful for the study. The method is different from the usual one, which involves identifying subjects' risk predispositions with the two "cultural worldview" scales. I was going to make this scheme the basis of a "MAPKIA!" contest in which players could make predictions relating to characteristics of the 4 risk-disposition groups featured here and their perceptions of risks other than the ones used to identify their members. But I decided to start by seeing what people thought of this framework in general. Indeed, maybe someone will make observations about it that can be used to test and refine the framework -- creating the occasion for the even more exciting CCP game, "WSMD? JA!"

 C.  Cultural Cognition

1.  Why cultural predispositions matter, and how to measure them

Public contestation over societal risks is the exception rather than the norm.  Like the recent controversy over the HPV vaccine and the continuing one over climate change, such disputes can be both spectacular and consequential. But for every risk issue that generates this form of conflict, there are orders of magnitude more—from the safety of medical x-rays to the dangers of consuming raw milk, from the toxicity of exposure to asbestos to the harmlessness of exposure to cell phone radiation—where members of the public, and their democratically accountable representatives, converge on the best available scientific evidence without incident and hence without notice.

By empirical examination of instances in which technologies, public policies, and private behavior do and do not become the focus for conflict over decision-relevant science, it becomes possible to identify the signature attributes of the former. The presence or absence of such attributes can then be used to test whether a putative risk source (say, GM foods or nanotechnology) has become an object of genuine societal conflict or could become one (Finucane 2005; Kahan, Braman, Slovic, Gastil & Cohen 2009).

Such a test will not be perfect. But it will be more reliable than the casual impressions that observers form when exposed either to deliberately organized demonstrations of concern, which predictably generate disproportionate media coverage, or to spontaneous expressions of anxiety on the part of alarmed individuals, whose frequency in the population will appear inflated by virtue of the silence of the great many more who are untroubled. Moreover, because they admit of disciplined and focused testing, empirically grounded protocols permit systematic refinement and calibration of a kind that impressionistic alternatives defy.

One of the signature attributes of genuine risk contestation, empirical study suggests, is the correlation of positions on disputed risks with membership in identity-defining affinity groups—cultural, political, or religious (Finucane 2005). Individuals tend to form their understandings of what is known to science inside close-knit networks of individuals with whom they share experience and on whose support they depend. When diverse groups of this sort disagree about some societal risk, their members will thus be exposed disproportionately to competing sources of information. Even more important, they will experience strong psychic pressure to form and persist in the views associated with the particular groups to which they belong as a means of signaling their membership in and loyalty to them. Such entanglements portend deep and persistent divisions—ones likely to be relatively impervious to public education efforts and indeed likely to be magnified by the use of the very critical reasoning dispositions that are essential to genuine comprehension of scientific information (Kahan, Peters et al. 2012; Kahan 2013b; Kahan, Peters, Dawson & Slovic 2013).

These dynamics are the focus of the study of the cultural cognition of risk.  Research informed by this framework uses empirical methods to identify the characteristics of the affinity groups that orient ordinary members of the public with respect to decision-relevant science, the processes through which such orientation takes place, the conditions that can transform these same processes into sources of deep and persistent public conflict over risk, and measures that can be used to avoid or neutralize these conditions (Kahan 2012b).

Such groups are identified by methods that feature latent-variable measurement (Devellis 2012). The idea is that neither the groups nor the risk-perception dispositions they impart can be observed directly, so it is necessary instead to identify observable indicators that correlate with these phenomena and combine them into valid and reliable scales, which then can be used to measure their impact on particular risk perceptions.
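The basic latent-variable recipe—standardize each observable indicator, average the standardized values into a composite score, and check the internal consistency of the indicators—can be sketched in a few lines. This is an illustrative sketch only, not the study's actual scoring procedure; the function names and toy data are mine.

```python
import statistics

def zscores(xs):
    # standardize one indicator (assumes the indicator actually varies)
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - m) / s for x in xs]

def cronbach_alpha(items):
    # items: k lists, each holding every subject's response to one indicator;
    # alpha near 1 means the indicators cohere as a single scale
    k = len(items)
    item_vars = [statistics.pvariance(it) for it in items]
    totals = [sum(vals) for vals in zip(*items)]
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.pvariance(totals))

def scale_scores(items):
    # average each subject's standardized responses into one composite score
    zs = [zscores(it) for it in items]
    return [statistics.mean(vals) for vals in zip(*zs)]

# toy data: five subjects, three roughly parallel indicators
items = [[1, 2, 3, 4, 5], [2, 2, 3, 4, 4], [1, 3, 3, 3, 5]]
alpha = cronbach_alpha(items)
composite = scale_scores(items)
```

The composite scores, not the raw indicators, are then what gets related to particular risk perceptions.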

One useful latent-variable measurement strategy characterizes individuals’ cultural outlooks with two orthogonal attitudinal scales—“hierarchy-egalitarianism” and “individualism-communitarianism.” Reflecting preferences for how society and other collective endeavors should be structured, the latent dispositions measured by these “cultural worldview” scales, it is posited, can be expected to vary systematically among the sorts of affinity groups in which individuals form their understandings of decision-relevant science. As a result, variance in the outlooks measured by the worldview scales can be used to test hypotheses about the extent and sources of public conflict over various risks, including environmental and public-health ones (Kahan 2012a; Kahan, Braman, Cohen, Gastil & Slovic 2010).

This study used a variant of this “cultural worldview” strategy for measuring the group-based dispositions that generate risk conflicts: the “interpretive community” method (Leiserowitz 2005). Rather than using general attitudinal items, the interpretive community method measures individuals’ perceptions of various contested societal risks and forms latent-dispositions scales from these. The theory of cultural cognition posits—and empirical research corroborates—that conflicts over risk feature entanglement between membership in important affinity groups and competing positions on these issues.  If that is so, then positions on disputed risks can themselves be treated as reliable, observable indicators of membership in these groups—or “interpretive communities”—along with the unobservable, latent risk-perception dispositions that membership in them imparts.

The interpretive-community strategy would obviously be unhelpful for testing hypotheses relating to variation in the very risk perceptions (say, ones toward climate change) that had been used to construct the latent-predisposition scales. In that situation, the interdependence of the disposition measure (“feelings about climate change risks”) and the risk perception under investigation (“concerns about climate change”) would inject a fatal source of endogeneity into any empirical study that seeks to treat the former as an explanation for or cause of the latter.
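A toy simulation makes the endogeneity problem concrete: when the climate-change item is itself part of the disposition scale, the scale's correlation with that item is mechanically inflated relative to a scale built from the remaining items. All names and noise levels here are illustrative assumptions, not the study's data.

```python
import random, statistics

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

random.seed(1)
n = 2000
latent = [random.gauss(0, 1) for _ in range(n)]
# three risk items, each a noisy expression of one latent disposition
items = [[d + random.gauss(0, 1) for d in latent] for _ in range(3)]
climate = items[0]  # the item we later want to "explain"

scale_with = [statistics.mean(t) for t in zip(*items)]         # scale that includes the climate item
scale_without = [statistics.mean(t) for t in zip(*items[1:])]  # scale built from the other items only

r_with = pearson(scale_with, climate)        # inflated: the item partly correlates with itself
r_without = pearson(scale_without, climate)  # honest estimate of the shared disposition
```

The gap between the two correlations is pure artifact: it reflects the climate item's presence in the predictor, not any additional explanatory power.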

But where the risk perception in question is genuinely distinct from those that formed the disposition indicators, there will be no such endogeneity. Moreover, in that situation, interpretive-community scales will offer certain distinct advantages over latent-disposition measures formed from indicators based on general attitude scales (cultural, political, etc.) or other identifying characteristics associated with the relevant affinity groups.

Because it measures an unobserved latent variable, any indicator or set of indicators will reflect measurement error. In assessing variance in public risk perceptions, the relative quality of any alternative latent-variable measurement scheme will thus consist in how faithfully and precisely it captures variance in the group-based dispositions that generate conflict over societal risks. “Political outlooks” might work fairly well, but “cultural worldviews” of the sort typically featured in cultural cognition research will do even better if they in fact capture variance in the motivating risk-perception dispositions in a more discerning manner. Other alternatives might be better still, particularly if they validly and reliably incorporate other characteristics that, in appropriate combinations,[1] indicate the relevant dispositions with even greater precision.

But if the latent disposition one wants to measure is one that has already been identified with signature forms of variance in certain perceived risks, then those risk perceptions themselves will always be more discerning indicators of the latent disposition in question than any independent combination of identifying characteristics.  No latent-variable measure constructed from those identifying characteristics will correlate as strongly with that risk-perception disposition as the pattern of risk perceptions that it in fact causes. Or stated differently, the covariance of the independent identifying characteristics with the latent-variable measure formed by aggregating the subjects’ risk perceptions will in fact already reflect, with the maximum degree of precision that the data admit, the contribution that those other characteristics could have made to measuring that same disposition.
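The point can be illustrated with a simulation under one stylized assumption: the disposition directly causes the risk perceptions, while group characteristics (political outlook, religiosity, etc.) relate to it only more loosely. With the noise levels chosen here (which are assumptions, not estimates), a composite of the caused perceptions tracks the disposition better than a composite of the characteristics.

```python
import random, statistics

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

random.seed(3)
n = 2000
disposition = [random.gauss(0, 1) for _ in range(n)]
# perceptions are caused by the disposition itself (tight link, smaller noise)
perceptions = [[d + random.gauss(0, 0.8) for d in disposition] for _ in range(4)]
# characteristics relate to the disposition only indirectly (looser link, larger noise)
traits = [[d + random.gauss(0, 1.5) for d in disposition] for _ in range(4)]

perc_scale = [statistics.mean(t) for t in zip(*perceptions)]
trait_scale = [statistics.mean(t) for t in zip(*traits)]
r_perc = pearson(perc_scale, disposition)    # stronger: built from what the disposition causes
r_trait = pearson(trait_scale, disposition)  # weaker: built from correlates of the disposition
```

The ordering of the two correlations is what the argument predicts whenever the perception-to-disposition link is tighter than the characteristic-to-disposition link.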

The utility of the interpretive-community strategy, then, will depend on the study objectives. Again, very little if anything can be learned by using a latent-disposition measure to explain variance in the very attitudes that are the indicators of it.  In addition, even when applied to a risk perception distinct from the ones used to form the latent risk-predisposition measures, an “interpretive community” strategy will likely furnish less explanatory insight than would a latent-variable measure formed with identifying characteristics that reflect a cogent hypothesis about which social influences are generating these dispositions and why.

But there are two research objectives for which the interpretive-community strategy is likely to be especially useful.  The first is to test whether a putative risk source provokes sensibilities associated with any of the familiar dispositions that generate conflict over decision-relevant science—or whether it is instead one of the vastly greater number of technologies, private activities, or public policies that do not. The other is to see whether particular stimuli—such as exposure to information that might be expected to suggest associations between a putative risk source and membership in important affinity groups—provokes varying risk perceptions among individuals who vary in regard to the cultural dispositions that such groups impart in their members.

Those are exactly the objectives of this study of childhood vaccine risks.  Accordingly, the interpretive community strategy was deemed to be the most useful one.

2. Interpretive communities and vaccine risks


Figure 14. Factor loadings of societal risk items. Factor analysis (unweighted least squares) revealed that responses to societal risk items formed two orthogonal factors corresponding to assessments of putative “public-safety” risks and putative “social-deviancy” risks, respectively. The two factors had eigenvalues of 4.1 and 1.9, respectively, and explained 61% of the variance in study subjects’ responses to the individual risk items.

Study subjects indicated their perceptions of a variety of risks in addition to ones relating to childhood vaccines—from climate change to exposure to second-hand cigarette smoke, from legalization of marijuana to private gun possession. These and other risks were selected because they are ones that are well-known to generate societal conflict—indeed, conflict among groups of individuals who subscribe to loosely defined cultural styles and whose positions on these putative hazards tend to come in recognizable packages.

Factor analysis confirmed that the measured risk perceptions—eleven in all—loaded on two orthogonal dimensions.  One of these consisted of perceptions of environmental risks, including climate change, nuclear power, toxic waste disposal, and fracking, as well as risks from hand-gun possession and second-hand cigarette smoke.  The second consisted of the perceived risks of legalizing marijuana, legalizing prostitution, and teaching high school students about birth control. 
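The retention logic reported in Figure 14 (keep factors whose eigenvalues exceed 1, and report the share of item variance they explain) can be sketched with power iteration on an item correlation matrix. The matrix below is a made-up example with two clean item blocks, not the study's data.

```python
import math

def matvec(M, v):
    return [sum(r[j] * v[j] for j in range(len(v))) for r in M]

def leading_eigen(M, iters=1000):
    # power iteration: repeatedly multiply and renormalize to find the
    # largest eigenvalue/eigenvector of a symmetric matrix
    v = [1.0 + 0.01 * i for i in range(len(M))]  # mildly asymmetric start vector
    for _ in range(iters):
        w = matvec(M, v)
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]
    w = matvec(M, v)
    lam = sum(v[i] * w[i] for i in range(len(v)))  # Rayleigh quotient
    return lam, v

def deflate(M, lam, v):
    # subtract the found component so the next-largest eigenvalue becomes dominant
    n = len(M)
    return [[M[i][j] - lam * v[i] * v[j] for j in range(n)] for i in range(n)]

# toy correlation matrix: items 1-2 cohere, items 3-4 cohere, blocks independent
R = [[1.0, 0.7, 0.0, 0.0],
     [0.7, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.5],
     [0.0, 0.0, 0.5, 1.0]]

l1, v1 = leading_eigen(R)
l2, v2 = leading_eigen(deflate(R, l1, v1))
retained = [l for l in (l1, l2) if l > 1.0]  # Kaiser criterion: eigenvalue > 1
share = sum(retained) / len(R)               # proportion of item variance explained
```

With real survey responses one would first compute the correlation matrix from the item data; the retention step is the same.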

The factor scores associated with these two dimensions were labeled “PUBLIC SAFETY” and “SOCIAL DEVIANCY,” each of which was conceived of as a latent risk-disposition measure. Support for treating them as such came from their appropriate relationships, respectively, with the hierarchy-egalitarianism and individualism-communitarianism worldview scales, which in previous studies have been used to predict and test hypotheses relating to risk perceptions of the type featured in each factor.


Figure 15. Risk-perception disposition groups.  Scatter plot arrays study subjects with respect to the two latent risk-perception dispositions. Axes reflect subject scores on the indicated scales.

Because they are orthogonal, the two dimensions can be conceptualized as dividing the population into four interpretive communities (“ICs”): IC-α (“high public-safety,” “low social-deviancy”); IC-β (“high public-safety,” “high social-deviancy”); IC-γ (“low public-safety,” “low social-deviancy”); and IC-δ (“low public-safety,” “high social-deviancy”). The intensity of the study subjects' commitment to one or another of these groups can be measured by their scores on the public-safety and social-deviancy risk-perception scales.
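The quadrant assignment is simple enough to state as code. A minimal sketch, assuming the two factor scores are mean-centered so that zero is the natural cutpoint (the report may of course bin subjects differently):

```python
def ic_group(public_safety, social_deviancy):
    """Map a subject's two (mean-centered) risk-disposition scores to one of
    the four interpretive communities described in the text."""
    if public_safety >= 0:
        return "IC-alpha" if social_deviancy < 0 else "IC-beta"
    return "IC-gamma" if social_deviancy < 0 else "IC-delta"

# e.g., a subject high in public-safety concern but low in
# social-deviancy concern falls in IC-alpha
group = ic_group(1.2, -0.5)
```

Distance from the origin along each axis then serves as the intensity measure mentioned above.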

Members of these groups vary in individual characteristics such as cultural worldviews, political outlooks, religiosity, race, and gender.  IC-αs tend to be more “liberal,” identify more strongly with the Democratic Party, and are uniformly “egalitarian” in their cultural outlooks. IC‑βs, who share the basic orientation of the IC-αs on risks associated with climate change and gun possession but not on ones associated with legalizing drugs and prostitution, are more religious, more likely to be African American, and more likely to have a “communitarian” cultural outlook than IC-αs. IC-γs include many of the “white hierarchical and individualistic males” who drive the “white male effect” observed in the study of public risk perceptions (Finucane et al. 2000; Flynn et al. 1994; Kahan, Braman, Gastil, Slovic & Mertz 2007).  Like IC-βs, with whom they share concern over deviancy risks, IC-δs are more religious and communitarian; they are also less male and less individualistic than IC-γs, but like members of that group, IC-δs are whiter, more conservative and Republican in their political outlooks, and more hierarchical in their cultural ones than are IC-βs.

These characteristics cohere with recognizable cultural styles known to disagree over issues like these (Leiserowitz 2005). Combined into alternative latent measures, appropriate combinations of those characteristics could have predicted similar patterns of variance in these risk perceptions, although not as strongly as the scales derived through a factor analysis of the covariance matrices of the risk perception items themselves.

Vaccine-risk perceptions  . . .



Berry, W.D. & Feldman, S. Multiple Regression in Practice. (Sage Publications, Beverly Hills; 1985).

Cohen, J., Cohen, P., West, S.G. & Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, Edn. 3rd. (L. Erlbaum Associates, Mahwah, N.J.; 2003).

DeVellis, R.F. Scale Development : Theory and Applications, Edn. 3rd. (SAGE, Thousand Oaks, Calif.; 2012).

Finucane, M., Slovic, P., Mertz, C.K., Flynn, J. & Satterfield, T.A. Gender, Race, and Perceived Risk: The "White Male" Effect. Health, Risk, & Soc'y 3, 159-172 (2000).

Finucane, M.L. & Holup, J.L. Psychosocial and Cultural Factors Affecting the Perceived Risk of Genetically Modified Food: An Overview of the Literature. Social Science & Medicine 60, 1603-1612 (2005).

Flynn, J., Slovic, P. & Mertz, C.K. Gender, Race, and Perception of Environmental Health Risk. Risk Analysis 14, 1101-1108 (1994).

Gelman, A. & Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models. (Cambridge University Press, Cambridge ; New York; 2007).

Kahan, D.M. Why We Are Poles Apart on Climate Change. Nature 488, 255 (2012).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

Kahan, D.M., Braman, D., Gastil, J., Slovic, P. & Mertz, C.K. Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception. Journal of Empirical Legal Studies 4, 465-505 (2007).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law and Human Behavior 34, 501-516 (2010).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self-Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change 2, 732-735 (2012).

Leiserowitz, A.A. American Risk Perceptions: Is Climate Change Dangerous? Risk Analysis 25, 1433-1442 (2005).

Lieberson, S. Making It Count : The Improvement of Social Research and Theory. (University of California Press, Berkeley; 1985).



[1] A multivariate-modeling strategy that treats all such indicators or all potential ones as “independent” right-hand side variables will not be valid. The group affiliations that impart risk-perception dispositions are indicated by combinations of characteristics—political orientations, cultural outlooks, gender, race, religious affiliations and practices, residence in particular regions, and so forth. But these characteristics do not cause the disposition, much less cause it by making linear contributions independent of the ones made by others.  Indeed, they validly and reliably indicate particular latent dispositions only when they co-occur in signature combinations. By partialing out the covariance of the indicators in estimating the influence of each on the outcome variable, a multivariate regression model that treats the indicators as “independent variables” is thus necessarily removing from its analysis of each predictor's impact the portion that it owes to being a valid measure of the latent variable and estimating that influence instead based entirely on the portion that is noise in relation to the latent variable.  The variance explained (R2) for such a model will be accurate. But the parameter estimates will not be meaningful, much less valid, representations of the contribution that such characteristics make to variance in the risk perceptions of real-world people who vary with respect to those characteristics (Berry & Feldman 1985, p. 48; Gelman & Hill 2007, p. 187). To model how the latent disposition these characteristics indicate influences variance in the outcome variable, the characteristics must be combined into valid and reliable scales.
If particular ones resist scaling with others—as is likely to be the case with mixed variable types—then excluding them from the analysis is preferable to treating them as independent variables: because they will co-vary with the latent measure formed by the remaining indicators, their omission, while making estimates less precise than they would be if they were included in formation of the composite latent-variable measure, will not bias regression estimates of the impact of the composite measure (Lieberson 1985, pp. 14-43; Cohen, Cohen, West & Aiken 2003, p. 419).  Misunderstanding of (or more likely, lack of familiarity with) the psychometric invalidity of treating latent-variable indicators as independent variables in a multivariate regression is a significant, recurring mistake in the study of public risk perceptions.
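The footnote's central claim can be demonstrated in a simulation: when two highly collinear indicators of one latent disposition are entered as separate regressors, each partialled coefficient understates the disposition's influence on the outcome, while a single composite recovers it. A pure-Python sketch under assumed (not estimated) noise levels:

```python
import random, statistics

def center(xs):
    m = statistics.mean(xs)
    return [x - m for x in xs]

def ols2(x1, x2, y):
    # two-predictor least squares on centered data via the 2x2 normal equations
    x1, x2, y = center(x1), center(x2), center(y)
    s11 = sum(a * a for a in x1); s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * b for a, b in zip(x1, y)); s2y = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

random.seed(2)
n = 500
latent = [random.gauss(0, 1) for _ in range(n)]
# two highly collinear indicators of the same latent disposition
x1 = [l + random.gauss(0, 0.3) for l in latent]
x2 = [l + random.gauss(0, 0.3) for l in latent]
# outcome (a risk perception) driven by the latent disposition, not by either indicator per se
y = [l + random.gauss(0, 1) for l in latent]

b1, b2 = ols2(x1, x2, y)  # partialled coefficients: each far below the true unit effect
composite = [(a + b) / 2 for a, b in zip(x1, x2)]
cc, cy = center(composite), center(y)
bc = sum(a * b for a, b in zip(cc, cy)) / sum(a * a for a in cc)  # composite slope near 1
```

Partialing out the shared variance (the valid part of each indicator) leaves each coefficient to be estimated from the indicator's unique noise, which is exactly the pathology the footnote describes.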


Reader Comments (27)

How about the "risk of societal collapse due to an individual mandate for health insurance" - where folks who might have registered with one disposition, who now think that the risk is very high, would have considered a mandate a non-risk (in fact a negative risk) only a few short years ago.

Seems to me that your list of risks would inevitably fall out in easily predictable ways, consistent with preexisting, politically-oriented affinity groups. In other words I think I see " latent-disposition measure[s] to explain variance in the very attitudes that are the indicators of it."

As a snapshot in time, I'm not sure I can think of any measures that would really be instructive. However, what would be interesting to me would be some kind of longitudinal data - and I predict that they would show that basically any latent measures of risk perception could shift with the wind but that discrete groups of people will move in more or less lockstep fashion between one world view and another, or into varying orientations along any particular measure, depending on the issue.

December 2, 2013 | Unregistered CommenterJoshua

@Joshua: what about GM food risks? Or vaccine risks? Or synbio?

I *agree* this is an unhelpful way to explain variance in the very risks the latent-variable measure comprises! But what about trying to figure out how the public feels about ones where there is disagreement about whether there is polarization etc.? Or how they might feel about new ones?

Actually, that *fracking* indicates a PS disposition is pretty cool. We know that most people don't know much about it; but they can "locate" it easily enough w/ respect to their disposition -- so already polarized

December 2, 2013 | Unregistered Commenterdmk38

"@Joshua: what about GM food risks? Or vaccine risks? "

Good point. Although people often claim that they can predict disposition on those issues by virtue of standard political patterns, in fact they don't seem to fall out that way. Then again, however, they remain relatively uncontroversial (on a broad scale).

But a healthcare mandate is controversial - or at least it became significantly so as soon as Obama tried to pass one.

Or how about budget deficits? Remember Dick ("deficits don't matter") Cheney?

Former Treasury Secretary Paul O’Neill was told “deficits don’t matter” when he warned of a looming fiscal crisis.

O’Neill, fired in a shakeup of Bush’s economic team in December 2002, raised objections to a new round of tax cuts and said the president balked at his more aggressive plan to combat corporate crime after a string of accounting scandals because of opposition from “the corporate crowd,” a key constituency.

O’Neill said he tried to warn Vice President Dick Cheney that growing budget deficits-expected to top $500 billion this fiscal year alone-posed a threat to the economy. Cheney cut him off. “You know, Paul, Reagan proved deficits don’t matter,” he said, according to excerpts. Cheney continued: “We won the midterms (congressional elections). This is our due.” A month later, Cheney told the Treasury secretary he was fired.

Isn't it interesting how Repubs and Dems have largely traded views on budget deficits? World views, schmorld views. What matters most is group affinity, and identifying the "other."

December 2, 2013 | Unregistered CommenterJoshua

It seems to me that the points high on the safety-risk axis are all perceived by "the left" as high risk. "The right" has its own list of threats to public safety and well-being, which have not been included. For example:

Excessive taxation,
Excessive regulation,
Lax and selective enforcement of immigration laws,
Teaching of evolution,
Gun control,
The welfare state.

Some more threats which are less right-oriented:

Legalization of gambling,
Food poisoning.

If these are to be discarded as threats of low or zero importance, we have to be sure we are not engaging in identity-protective cognition of what constitutes a threat to public safety and well-being.

Also, regarding the points high on the safety/risk axis, I would classify air pollution and global warming as "tragedy of the commons" examples, in which the response of the right and the left are predictable. In the tragedy of the commons, a common area for various shepherds is destroyed because no shepherd has an incentive to manage the commons, only to take as much as possible before the next shepherd uses it. The "right" solution is to divide the commons into privately owned parcels, the "left" solution is central control of each shepherd's grazing activity. The right tends to favor a free market, which requires clearly defined property rights. When property rights are difficult to define, as is the case with air pollution and global warming (who owns the atmosphere?), then central control has a stronger argument. Those whose political identity favors central control will be prone to identity-protective cognition of the magnitude of the threat, exaggerating it, while those whose political identity favors decentralized control will be prone to minimizing it.

December 3, 2013 | Unregistered CommenterFrankL

Some other items that might be added (there might be some overlap with those FrankL mentioned).

Risk of public education (in particular vs. private education)
Risk of building publicly financed infrastructure
Risk of anti-Christian prejudice (e.g., a "War against Christmas")
Risk of same-sex marriage
Risk of the UN trying to use climate change to implement a one-world government

Although FrankL listed "terrorism" as being less right-oriented, the risk of Sharia law is more or less completely from the right.

On the left:
Risk of increased income inequality.

I'd say that all of these are controversies that exist about potential "risk," that are inclusive of high certainty about the veracity of dueling "experts" who claim ownership over the best available evidence.

December 4, 2013 | Unregistered CommenterJoshua

I am always looking at the HE-IC plots and thinking that there is a missing axis - the central control versus decentralized control axis. Maybe being of a libertarian bent, I am exaggerating the importance of this, but maybe not. Looking at the red-blue map of electoral results, county by county, its very apparent that blue (Democrat) counties are in or near urban centers while red (Republican) counties tend to be rural. Its glaringly obvious that rural and urban are two different cultures. Rural people are under pressure to "get along" despite their differences since they run into and do business with each other every day, they know everybody, everybody knows them, while urban folks almost have to choose the people they associate with, since the population density is so high. As a result, rural people tend to take care of their own problems and resent "outside interference" while urban people are unable to do so, again due to the high population density. It seems to me this is tantamount to a "central/decentralized control" axis.

If I understand Dan's idea that it is better to let the risk assessment of a set of individuals define the affinity groups rather than searching for "latent variables" and then trying to correlate the results to them, I wonder. Dan says "But where the risk perception in question is genuinely distinct from those that formed the disposition indicators, there will be no such endogeneity." How do you know when they are distinct? Is the threat of a free market distinct from the threat of global warming? Or is there a common thread, namely the question of central versus decentralized control? Those who are of the "central control" culture will be prone to see a high risk in both, those who are of the "decentralized control" culture will not.

Along this line, regarding the general problem of "where are the axes and what do they represent?", I did a project about 10 years ago in which I took the number of disagreements between Supreme Court justices for a given year as a measure of the "distance" between them, and then plotted each justice as a point on a piece of paper such that the geometric distance between the justices was as close as possible to the measured "disagreement distance". It can be done on a line, or a plane, or in three dimensions, etc., higher dimensions giving better matches to the disagreement distance. The line is not very interesting, it gives a distribution that more or less matches the commonly held notions of who is "left" and who is "right" on the court. When plotted in two dimensions, there is very significant variation perpendicular to the left/right axis, and I was trying mightily to figure out what outlook or outlooks that dimension represented. The reason I am so interested in Dan Kahan's HE-IC plots is the idea that there are axes in the distance plots that correspond to them. I also tried using the Nolan Chart axes (libertarian/authoritarian and social/economic axes) and the left/right GAL/TAN axes of Bakker but haven't been able to clearly identify the other axes.

December 5, 2013 | Unregistered CommenterFrankL


While the political color of the map is undoubtedly split between rural and urban as you describe (the dominance of the Democratic Party in political leadership in the 20 or so largest cities is dramatically more uniform than only a decade or so ago)....

Rural people are under pressure to "get along" despite their differences since they run into and do business with each other every day,

I'm wondering what evidence you have to support that claim other than your anecdotal, seat-of-the-pants observations and reasoning. I have lived in urban environments, and I have seen an enormous amount of pressure in those environments to "get along." I have also lived in rural environments (I just moved to one from a city a few months ago), and it seems to me that one could easily make the argument that those who live in rural environments have distinctly less pressure to "get along," as they can (if they choose) live life more independently.

they know everybody, everybody knows them, while urban folks almost have to choose the people they associate with, since the population density is so high.

I'm not sure what that means, or what the underlying logic is. People who live in urban environments know a great many people in their communities, and it is necessary for them to cultivate relationships with many of those people. Yes, they might encounter people they don't know more frequently than someone who lives in a rural environment, but how does "urban folks" having to "choose the people they associate with" connect, logically, with the rest of what you were saying? I encounter people who live in rural environments who specifically design their lifestyle to not have much interaction with other people and who are very deliberate in choosing the people they associate with.

As a result, rural people tend to take care of their own problems and resent "outside interference"

But isn't that in direct contradiction to what you were saying earlier, where "rural people" are more dependent on others than urban people?

while urban people are unable to do so, again due to the high population density. It seems to me this is tantamount to a "central/decentralized control" axis.

So this makes a bit of intuitive sense to me, as it does seem to me that people who are more accustomed to solving their problems independently (say, fixing their own appliances, or hunting for their own food) would seem, logically, to be more distrustful of institutional authority. But surely you have seen those analyses which show that, if anything, states with higher percentages of rural populations tend to be more reliant on federal support. It seems to me that the ability and inclination to solve problems on one's own, independently of support from one's community or government, is more a function of one's personal makeup, access to support, and life circumstances than of whether one lives in a rural or urban environment.

I have worked in a social service context in a very rural community, and I can tell you that the people I worked with were very appreciative of, and very reliant on, the existence of a "centralized" approach to helping them address their problems.

So again, I'd be curious to know if you have something other than you personal perspective that you think supports the applicability of the generalities you described.

December 5, 2013 | Unregistered CommenterJoshua

@Joshua - Yes, all of my observations were the result of personal experience rather than any sort of scientific study, as yours seem to be, and I am sure my identity-protective cognition is coloring my conclusions, as is the case for all of us, and I am trying to see through that. I have lived in northern and southern rural environments, and northern and southern urban environments. You have worked in a social service context; I have worked as a mechanical engineer in satellite applications. We are both giving opinions based on personal experience, probably filtered through our own flawed cognition. We take jobs that utilize our talents, which helps create our identities, and those jobs tend to support our identities; we associate with people with similar talents who support and share our identities, etc.

You said that you see much pressure in urban environments to get along, possibly less in rural communities. This is simply counter to my experience, or better, my analysis of my experience. In an urban environment, we deal with a small subset of the population, and we have more power to select that subset. In the rural environments, I was struck by the "small town mentality". As it was once described to me, the good thing about small towns is that everyone knows everyone else's business; the bad thing is that everyone knows everyone else's business, and I came to get an understanding of this. When I moved in, it seemed like a week later most people in the community knew of me, that I was a "city boy" and an outsider, and not to be trusted on both accounts. When I say that people were accustomed to solving their own problems, I don't mean fixing appliances; I mean they arrive at solutions to social problems using face-to-face interactions and tend to resent "outsiders" who do not have an intimate understanding of the long-standing local history, culture, and relationships, and who impose a cookie-cutter solution that basically assumes an urban environment where there is none. There were not three Walmarts within walking distance of my home; there was the corner store which served the community, and walking into it you were very likely to see the same people that you do business with, and your competitors. If there is a traffic altercation, it's likely to be with someone you are acquainted with, the same person who loaned you a wood splitter yesterday and "stole" some of your business the day before. I felt much more socially challenged in the rural environment. In the urban environments, I associated with the people I worked with, people with whom I and my family shared interests, and my next door neighbors who were less than a stone's throw away. Other than that, nobody knew me or my business.
In a rural community, if your house is burning, the people helping you put it out are familiar faces; in an urban community, no. This is what I mean by rural people being more connected. And the nature of the connection is everyday face-to-face, not Facebook or Twitter. When you talk of rural people being better able to isolate themselves, I just don't see that. I do see people in a rural environment who are uncomfortable with "crowding", in other words having a neighbor who is closer than a cornfield away. But I don't see them as generally more isolated and more able to choose their circle of friends. The human brain is wired to deal with, oh, a couple hundred people on a face-to-face basis, not coincidentally about the size of a hunter-gatherer tribe, or a small town. It's impossible to deal in the same way with a few million people in an urban environment. I didn't mean to imply that urban people were less dependent on others, just that the people they depend on are better known.

What do you see as the reason(s) rural voters tend to vote Republican while urban voters vote Democrat? If rural people are more likely to use assistance, why do those areas vote Republican? Do they vote against their economic interests or are people on assistance less likely to vote? Or something else? I think the red-blue map is important data, and I'm not sure how to fit it into Dan's cultural map.

December 6, 2013 | Unregistered CommenterFrankL


Thanks for that thoughtful response.

I will say that while reading your description, I see some things that partially relate to some of my experiences (I have lived in rural environments in the past as well as now, and in urban environments at different points in my life; I have also lived in suburban and exurban environments), but I find your generalizations much too broad and categorical to be consistent with my experiences overall. While some of my experiences have been similar to those you describe, I have also had many experiences that are almost opposite to what you describe, when I've been living in both rural and urban communities.

So I'm not sure where to go with all of that, except the always unsatisfactory agree-to-disagree....

"What do you see as the reason(s) rural voters tend to vote Republican while urban voters vote Democrat?"

I think that the divergent breakdown of party affiliation across rural and urban environments is much more a product of demographics, SES, "affinity groups," etc., than anything specific to population density or other more geographically based variables. And I think that the entire causal dynamic is far more complex than the factors you describe as related to views on centralized vs. decentralized control.

Go to a primarily black rural community and you will find Democratic voters. The county where I currently live is designated rural (it has a population density of 158 per square mile; a quick Google said that an urban designation is characterized by 1,000 per square mile), and while the racial makeup is 83% white and 6.5% black, it also votes heavily Democratic. In Philadelphia, where I used to live, there are urban communities that have recorded 100% Democratic votes (communities that are overwhelmingly black), but there are also other communities that vote majority Republican (communities that are majority white).

"If rural people are more likely to use assistance, why do those areas vote Republican? "

I'm not particularly of an opinion that they are more likely to use assistance - but I question whether they are less likely to avail themselves of centralized support systems than urban residents (as a result of living in an urban community).

"Do they vote against their economic interests or are people on assistance less likely to vote?"

I don't quite see how the second question follows as an "or" condition from the first, but I think that generally people of lower income are less likely to vote.

"I think the red-blue map is important data, and I'm not sure how to fit it into Dan's cultural map."

It seems to me that the red/blue bifurcation of the map, which largely coincides with rural vs. urban communities, fits perfectly with the notion of cultural cognition. People look at controversial issues such as the costs and benefits of a social safety net, or gun ownership, or allowing people to marry someone of the same sex, or ACO2 emissions, and they filter the evidence in accordance with their cultural and social identifications.

And what is particularly interesting to me is how quickly their analyses can do a 180, as we've seen with Republicans and the insurance mandate. That sort of change-over suggests to me that the positions people take on controversial issues have little to do, actually, with the evidence available, or "cultural worldviews," or characteristics of their "risk perception," but more to do with, quite simply, their affinity groups/group identification.

December 6, 2013 | Unregistered CommenterJoshua

The rural communities I am familiar with were very white, so that might explain our differences? Also, I agree that a simple "agree to disagree" is unsatisfactory. If neither of us had cultural blinders on, we would not disagree on facts or probabilities about our attempts to predict the future. We would only "agree to disagree" based on our values which would hopefully not be superficial, like "gun control is a fundamental value" or "gun freedom is a fundamental value".

I cannot talk myself out of the idea that there is a central/local control issue with rural white Republican voters, and that it's much more "anti-outsider" than "anti-race". I see it as a natural human reaction, given their situation, just as urban people are naturally reacting to their environment. The problem is the blindness of each group to the other's situation, replacing understanding with gut ideology, which leads to political warfare.

We agree that the red-blue map is saying something. I think it's saying something important. I don't think cultural affinity groups explain it; I think they are a symptom, a description of the difference, not a cause. What causes rural voters to be members of one group, and urban voters members of another? Is that even the direction of the cause/effect? How does this fit into Dan's two-dimensional cultural group description, or is there a more informative set of axes? Are there more axes? This is what bothers me.

Also re: where is the Tea Party? I know some Tea Party members, and their central issue is also central/decentralized control. (Honestly, the mainstream media's portrayal of them is skewed.) That would put them on the individualist side of the map. I guess I am not sure where they fit on the hierarchical/egalitarian axis because I am not completely clear on the meaning of this axis. Egalitarian means equality, but there is equality of opportunity with acceptance of an outcome which produces a hierarchy, and there is equality of outcome. I'm not clear on which kind of equality we are talking about here. Are you?

December 6, 2013 | Unregistered CommenterFrankL


"I am not sure where they fit on the hierarchical/egalitarian axis because I am not completely clear on the meaning of this axis."

See here for some potted summaries:

The one for 'hierarchical' is the following:

A “high” grid way of life organizes itself through pervasive and stratified “role differentiation” (Gross & Rayner 1985, p. 6). Goods and offices, duties and entitlements, are all “distributed on the basis of explicit public social classifications such as sex, color, . . . a bureaucratic office, descent in a senior clan or lineage, or point of progression through an age-grade system” (ibid, p. 6). It thus conduces to a “hierarchic” worldview that disposes people to “devote a great deal of attention to maintaining” the rank-based “constraints” that underwrite “their own position and interests” (Rayner 1990, p. 87).

Part 2 may give you a hint about the supplementary question, too!

December 7, 2013 | Unregistered CommenterNiV

I have been interpreting "hierarchical" as more or less comfortable with outcome-based hierarchies rather than imposed hierarchies. An outcome-based hierarchy, for example, would be the rich/poor division in a free market economy (without inheritance), the hierarchy of football teams, the hierarchy of politicians in a democracy, any system where it is what you do that determines your position. An imposed hierarchy would be racism, the feudal nobility system, any system where, no matter what you do, your position is roughly determined. The hierarchical/egalitarian description you have given seems to be one of imposed hierarchy rather than outcome-based hierarchy.

Egalitarian is opposed to hierarchy. There is equality of opportunity, and there is imposed equality, equality of outcome. For example, in a free market there is equal opportunity to acquire wealth, which establishes a hierarchy based on wealth. Redistribution, voluntary or not, aims to reduce or eliminate that hierarchy.

It seems to me that the egalitarian axis favors imposed equality just as the hierarchical side favors imposed hierarchy, in which case I cannot locate myself on the hierarchy/egalitarian axis. By my value system, I am against both imposed hierarchies and imposed equality, in favor of equality of opportunity, and relatively accepting of the resulting outcome-based hierarchies.

This is a "how things ought to be" rather than a "how things are" statement. In the real world there are not, nor can there be, any purely outcome-based hierarchies. In any case, there is pressure for those high in any hierarchy to conserve their position by establishing an increasingly imposed hierarchy, while there is pressure for those low in any hierarchy to establish an increasingly imposed equality. I place a high value on a social system which resists both impulses in theory, and is "intelligent" enough to largely neutralize them in practice.

What do you see as my location on the hierarchy/egalitarian axis? Is there a missing axis?

December 7, 2013 | Unregistered CommenterFrankL


I have to agree that I'm not entirely sure of the general meaning of the hierarchical-egalitarian axis.

In terms of the climate debate, I personally classify as 'hierarchical' those folks who think only 'experts' and 'publishing climate scientists' are qualified to speak against the consensus of experts, that you have to be a qualified and certified scientist for your opinion to count, that results are graded by whether they have been 'peer-reviewed' and published in some journal, and so on. What Dan sometimes calls the ways people know who knows what about science.

Whereas 'egalitarians' are people who believe *anyone* can offer an opinion, anyone can offer arguments and do science, and it doesn't matter in the least whether they are approved or certified or tenured, or their work is peer-reviewed. They are not climbing the academic career ladder. They're not playing for power and influence in the committees and bureaucracies. They regard themselves as playing on a level playing field along with the professors and IPCC and climate scientists and congressmen and government ministers and global UN bureaucrats.
You could say that's about the ways people know what scientific arguments and evidence should look like.

I'm not sure if that fits Dan's categorisation, since his results seem to show hierarchical-individualists as the sceptics, but it's the same as what I was saying about his 'interpretative communities' axes. I suspect people are hierarchical along some axes while being egalitarian along others, and the metric measures a selection that happens to be correlated with other dispositions. I wouldn't pay too much attention to the precise interpretation of the term - it's just an axis you can use to classify people.

But if Dan would be willing to point you to the questions he uses to measure it and his scoring system, you could find out exactly where you are on the axis.

December 8, 2013 | Unregistered CommenterNiV

I would be VERY interested in a clarification, but, being rather new to this blog, I'm not sure how to ask for it.

Regarding the heirarchical/egalitarian approach to climate change data interpretation, I think peer-reviewed scientists are more likely to make a detailed rational argument on the subject, but are subject to protective cognition like anyone else. I tend to believe dispassionate rational arguments which are free of policy recommendations, whether they are in the majority or the minority or peer-reviewed or whatever. Of course anyone is entitled to an opinion, but opinions of the uninformed are less likely to be worth anything.

Being of a libertarian bent, I would value the opinion of a free market on climate change. When insurance companies, freely able to set insurance rates, hedge against climate change, then I really sit up and take notice, because they don't care about left or right or green or whatever; they just want to get the probabilities right so they can make money, and would be very likely to see culturally or politically distorted cognition as a threat to their bottom line. I would be least suspicious of studies funded by insurance companies operating in a relatively free market on the subject of climate change. Studies funded ultimately by politicians (i.e., governments or the intergovernmental IPCC) are higher on my suspicion list, because they have agendas which may be damaged by the truth and/or constituents who expect their cultural identities to be protected in return for their votes and money.

December 9, 2013 | Unregistered CommenterFrankL


If you just ask Dan directly, I've always found him keen to help people understand his ideas better. As with all blogs you do sometimes have to be patient about the communications lag.

It depends what you mean by peer review. Journal peer review is (when it's working properly) mostly just a quick editorial sanity check to make sure a paper is worth people's time to read. Is it topical, interesting, novel, competently done, described in enough detail to replicate, and does it have sufficient strength of evidence to justify the claims being made? It's generally a couple of days' unpaid work from a friendly volunteer without full access to the data/code/lab books/equipment/etc.

Actual scientific peer review occurs after publication, when the rest of the community tries to knock it down. Other scientists have to replicate it, check it, debunk it, generalise and extend it, criticise it, unify it with other work, simplify it, project its implications and test those as well, find and explore the limits of its validity, and so on. In many areas of science, half of all peer reviewed papers get overturned within a few years of publication. The ones that survive become part of the canon, and eventually enter the textbooks.

People commonly misunderstand how the system is supposed to work. Peer review is rarely any kind of gold standard, and it most certainly isn't final approval. And it is vulnerable to institutional bias, when one paradigm gets control of all the senior positions in a field that the journals defer to. It's not just climate science: one of the more famous cases was when the astrophysics community led by Eddington and Einstein kept black hole research locked out for decades. Eddington thought them "absurd," and while several big names in physics privately agreed that the mathematics was valid, nobody would go up against Eddington's clique in public. It happens. Scientists are human.

But nowadays you can do the scientific process outside the journals, and it's probably going to revolutionise science, eventually. Yes, a blog post on its own is probably not worth very much. But a blog post that is open to hostile comments, that a lot of well-informed people have tried to shoot down and failed, is worth paying more attention to. Blogs get a reputation for how careful they are about what they put up, and how technical they're prepared to get, and attract bigger or smaller audiences on that basis. They work much the same way that scientific journals do.

You have to assess opinions on their content, and on whether you can find any good counter-arguments to debunk them. The person saying it can sometimes tell you how carefully you need to check, and they'll usually be more careful if they know that people are going to, but still it is the arguments that matter in science rather than the arguers.

Regarding insurance companies, bear in mind that an insurance company profits from their customers being more nervous about disasters than they need to be. It increases demand. Internal insurance company reports have to aim to get the numbers right, but external reports issued to their customers ought to be considered 'marketing material'.

Personally, I'm more inclined to trust people who know that what they publish is going to get scrutinised and checked over in detail by hostile auditors. Adversarial systems are not perfect, but they can deal with a lot of the problems. That's why we so often use them in areas where it is important to get it right.

December 9, 2013 | Unregistered CommenterNiV


I finally found a link to the questions here:

Not at all what I was expecting given the names assigned to the axes!
I wouldn't expect those questions to measure a hierarchical tendency (as defined above) at all. But hey. It's social sciences!

December 9, 2013 | Unregistered CommenterNiV

@NiV - good clarification on the meaning of "peer review", thanks. I have heard of the black hole controversy and found it interesting. I think it was Chandrasekhar who developed the theory, and he kept trying and trying to convince Eddington, whom he respected, but could not. Finally Dirac told Chandrasekhar that his theory was correct, but Eddington would never concede, and to chill out, which he did.

Regarding the insurance companies, good point, but there are market forces opposing insurance companies who exaggerate risk to the public. A single insurance company cannot advertise exaggerated risk and expect to profit by increased rates; the public may well believe them, but will then go to their competitors whose rates have not increased. All insurance companies must exaggerate the risk and raise their rates for this to work. But then the market will reward a defector who lowers their rates to their "proper" value, and will attract new entrants who offer the lower "proper" rates. The problem is, of course, that there is no such free market.

I agree with you that peer-review, formal or not, is a very valuable way to ferret out the truth. When an author gets it clearly wrong, that author will suffer loss of face and perhaps income, just as will the insurance company who gets the risk wrong. But the insurance companies tend to be funded by people looking for insurance, not for support of their cultural affinity group. I'm not sure that is the case for funding of many scientists.

Regarding the link, I have problems with the questions for equality/hierarchy. Assuming these questions determine one's position along a single dimension, they are flawed. Perhaps they are all "red meat" questions on purpose, basically asking "which kind of idiot are you?", but shouldn't there be room for responses from people who have given even just a little thought to the questions? Also, there is a missing dimension of centrally imposed solutions versus unimposed, or emergent, solutions. Lumping anti-redistributionists in with racists, sexists, and homophobes as high-hierarchy is a false connection. The presumption is that anti-redistributionists, like racists, sexists, and homophobes, always wish to lock in, to impose and conserve a hierarchy (in this case rich/poor), which is false.

There seems to be an idea that male values are hierarchical while female values are not, or that having male values is equivalent to support of a sexist hierarchy, and that is also a false connection. The questions seem to break people down into the butt-head right and the butt-head left much more than they examine people's attitudes towards hierarchy or equality.

December 9, 2013 | Unregistered CommenterFrankL

I don't think the questions are really testing for racists/sexists/homophobes. They're testing for people who believe in equality of outcome versus equality of opportunity - whether economic equality or as part of identity politics.

There's a strong thread of opinion on the right that measures to promote equality of outcome - such as affirmative action, quotas, hate laws, equal pay, and so on - have gone too far with regard to equality of opportunity, giving politically privileged categories extra rights over and above the rest of the population. They object on egalitarian grounds, that unequal opportunity is being created in this attempt to achieve equal outcome, and that the formerly-dominant roles and beliefs are now being persecuted.

Having seen people complain extensively about that sort of thing, I can just imagine how they would interpret and answer those questions. It's not actually measuring hierarchical or authoritarian thinking, it's measuring agreement/disagreement with outcome-equality politics.

December 9, 2013 | Unregistered CommenterNiV

@FrankL & @NiV: I don't have time to join meaningfully in the conversation, unfortunately. But there's discussion of the *constructs* that the scales are intended to measure here: Kahan, D.M. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer, 2012).

Also, the only way to judge the value of measures like these is by what one is able to do with them. It would be silly to pretend there are things out there somewhere that correspond to the measures; the measures are tools to enable explanation, prediction, and prescription. Find a better tool & I will happily use it! (Indeed, you can see I am experimenting here, & I've happily used "left-right" measures too.)

December 10, 2013 | Unregistered Commenterdmk38

@NiV & @FrankL

1. Ach! I can't believe I forgot to refer you to ...

Douglas, M. Being Fair to Hierarchists. University of Pennsylvania Law Review 151, 1349-1370 (2003)!

Douglas definitely didn't think we were measuring the right thing. But I'm less interested in measuring what Mary Douglas might have meant (she wasn't always clear, even though she was quite often inspired & inspiring) than what seems to make sense & works.

(BTW, seems very wrong to me to assume that someone who scores high on our hierarchy scale is a racist or a homophobe!)

2. On the "dimensions"

I used factor analysis.

That's not akin to cluster analysis or any other strategy for determining "likeness" by measuring the proximity of observations to one another in multi-dimensional space (as entertaining as that can be -- "Minkowski space," "Manhattan distances" & all that).

Factor analysis focuses entirely on the covariances of the "indicators," which of course are specified by the analyst. Essentially, it's like doing a regression where one has independent variables but can't see "y"! There are different algorithms, but all try to identify how many "latent" or unobserved variables must be posited to maximize the variance explained in the values of the indicators when the indicators are arranged into different combinations and assigned coefficients (estimated via least squares, maximum likelihood, or some other method).
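
[A toy illustration of the logic described above, not the actual scales or data: simulate six survey-style indicators driven by two unobserved dispositions, then let factor analysis recover the loading structure from the indicators' covariances alone. Everything here, including the loading matrix, is invented for the sketch.]

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Two latent (unobserved) dispositions for 500 simulated respondents.
latent = rng.normal(size=(500, 2))

# Indicators 1-3 load on the first disposition, 4-6 on the second.
true_loadings = np.array([
    [0.9, 0.0],
    [0.8, 0.1],
    [0.7, 0.0],
    [0.0, 0.9],
    [0.1, 0.8],
    [0.0, 0.7],
])

# Observed indicators: latent structure plus measurement noise.
X = latent @ true_loadings.T + 0.3 * rng.normal(size=(500, 6))

# Fit a two-factor model; varimax rotation makes the loadings interpretable.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(X)

# Estimated loadings: seeing which indicators "hang together" on each
# factor is the step that precedes naming the factors.
print(np.round(fa.components_, 2))
```

[The model never sees the latent columns; as in the regression-without-"y" analogy, it infers them from the covariances among the indicators alone.]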

For a great discussion, check out Rummel's Understanding Factor Analysis. Actually, Rummel is a pretty fascinating character!

As for what to call the factors ... well, that's just a literary matter. If one is trying to gauge what the factors imply, the key is obviously the loadings.

The two factors that emerged from the analysis of the risk perceptions seemed pretty familiar and satisfying to me.

What would you propose to call them if you wanted to convey their character more accurately?

December 11, 2013 | Registered CommenterDan Kahan

Ooh, I do like that Mary Douglas paper!

"Their responses are coded to favor bigotry, sexism, and racism. Somebody here doesn't like hierarchists."

I agree that she's not entirely clear about what the different terms mean. The meaning of the categories seems to shift subtly as she sets out the history of the subject, possibly because there has been a historical shift?

I also agree with her that the hierarchical-egalitarian theory (interpreted in group-grid terms) ought to predict the opposite outcome on gun control. Guns represent power. The hierarchists should want the use of guns regulated, restricted to those roles that need them (police, army, etc.), while the egalitarians should believe that (more or less) everyone should have access to them, partly as a safeguard against too much power being in the hands of too few people. The hierarchists should believe people are basically bad unless they are made to behave by the state/society, while the egalitarians should be the ones who believe people are basically good, and ought to be allowed to get on with it with minimal interference from the state. The categories seem to have got switched around.

As I said, I think what the H-E questions are measuring in modern society is the opportunity-equality / outcome-equality difference in values. Take for example the first question HEQUAL. "We have gone too far in pushing equal rights in this country." Note, that's not asking whether it was wrong to have pushed for equal rights, or asking whether equal rights are a good or bad thing - it's asking whether we have gone too far, which is the argument between the outcome-equality people who don't think we've gone far enough yet, and the opportunity-equality people who think that society was absolutely right to give 'minority' groups a more equal opportunity, but that the efforts now to achieve closer outcome-equality by giving them better opportunities than the former 'majority' are unequal, unjust, and damaging.

The same question is then basically repeated for race, sex, wealth, gender, and sexuality. The only question that is not a straight repeat or interpretable as such is HDIVERS, which I think in context would be interpreted in terms of the 'multicultural' war on the majority culture, but is actually a more general question that goes both ways. How many people, though, do you think would interpret it as asking whether it's old fashioned and wrong to think a culture of freedom and equality is any better than a racist, sexist, totalitarian culture? The irony is outstanding. :-)

Anyway, the result is quite predictable from the standpoint of an opportunity-egalitarian. The story goes that the outcome-egalitarian sees an inequality of power in society and wishes to correct it, but society will not do so on its own and so must be made to, which means the state needs ever-greater power to make them, and you naturally need to take away the power from the people who would stop them. Outcome-equality is a highly unstable state of affairs, and needs constant intervention to enforce. Thus outcome-egalitarians are in favour of regulation to bring it about and obedience to the centralised state power that enforces it. (The original "left-wingers" were Robespierre's mob in the French Revolution, of course, and the 20th century history of outcome-egalitarianism tells much the same story.) Naturally they would support the regulation of guns by the state.

You don't have to accept that as a true analysis of the situation - just that it's how an opportunity-egalitarian might interpret it. It's a point of view.

The problem, I think, was that the group-grid formulation originally identified the grid axis as the difference between the 'regimented' and the 'fluid' society, but made the mistake of calling the opposing position to the hierarchy "egalitarian". This got re-interpreted by other (predominantly left-leaning) academics as the to them more familiar concept of outcome-egalitarianism, opposed by the differential power structure that results from elitism (and also opportunity-egalitarianism, which the left often conflates with elitism). Questions that were probably originally intended to measure this outcome-egalitarian/elitist axis then picked up on a genuine polarisation across society between the opportunity-egalitarians and outcome-egalitarians, which is indeed correlated with a bunch of other attitudes and beliefs. It's confusing because the 'egalitarians' thus-measured are in some ways more in favour of a regulated and regimented society than the 'hierarchists', which leads to all sorts of difficulty when you try to interpret/predict results theoretically. Or at least, I found it confusing.

Regarding better names for the IC axes, I really don't know. I'm not at all sure what the 'social deviancy' axis is really measuring, particularly as global warming risk perception seems to have a significant negative dollop of it. Is that saying that lack of concern about global warming is a form of social deviancy? It would probably help if we had more than 4(?) data points on which to judge.

December 12, 2013 | Unregistered CommenterNiV


You might be right about what has happened w/ Douglas's scheme.

But as I said, the only thing that matters for me is measurement, & in 2 senses. The first is -- are the scales genuinely measuring some real thing in people? The second is -- is the thing being measured something that helps explain, predict & prescribe?

Attempts to operationalize a conception of group-grid "more faithful" to MD have failed mainly in the first sense. The descendants of MD generally practice the social-science equivalent of astrology.

As for the second -- well, as I have said, I believe the utility of our results speak for themselves. You tell me if what they are saying is useful. That's the only test.

The test isn't whether the items in a scale have an aesthetic appeal to those who might identify with the constructs being measured -- or with some other construct altogether. That is the mistake MD is making in her response to us.

It is also a mistake that I think FrankL is making. He, like MD, sees the items as eliciting attitudes that he or maybe others would view as "racist" or "homophobic" & objects to the orientation in question as being characterized in that way.

I am not characterizing the orientation in question as any of those things. These items seem to reflect how individuals who look like "hierarchists" -- based on various related identifying characteristics & expected sorts of attitudes and political positions -- in fact talk. We conducted focus groups, e.g., in the course of developing the items, in which white males who were anti-gun-control said things like this. When we use the items in scales, they cohere in a way that suggests they are measuring something in common; that thing corresponds to other matters one would expect it to correlate with (ideology, identifying characteristics & such); and finally the scale performs well, in the pragmatic sense I described.

That is the way to judge the validity of the scale.

December 15, 2013 | Registered CommenterDan Kahan

@Dan Kahan

But as I said, the only thing that matters for me is measurement, & in 2 senses. The first is -- are the scales genuinely measuring some real thing in people? The second is -- is the thing being measured something that helps explain, predict & prescribe?

The second is very testable - no problem. But in the first, how do you test what is "real"? What is the quantitative measure of the failure of MD?

It is also a mistake that I think FrankL is making. He, like MD, sees the items as eliciting attitudes that he or maybe others would view as "racist" or "homophobic" & objects to the orientation in question as being characterized in that way.

No, I have no problem with questions that elicit racist or homophobic attitudes. Also, I'm sure that anti-redistributionist attitudes correlate with racist and homophobic attitudes, and so there is predictive power there. Anti-affirmative action correlates with racism, sure. But if these conclusions are ever translated into policy, how do we avoid the mistake of assuming that they flow from some common latent variable?

It's a matter of decentralized control and opportunity-egalitarianism vs. central control and outcome-egalitarianism, which has nothing to do with racism/homophobia. (see discussions with NiV and Joshua)

On a more mathematical level, I am reading Rummel right now, and it's good. One question I have is that Rummel seems to say the factors are posited by the researcher, whereas you say the factors emerged from the analysis. Can you clear up my confusion? - Thanks.

December 15, 2013 | Unregistered CommenterFrankL


1. On factor analysis

The factors are definitely "extracted" from the data. That is, the latent variables that best explain the covariance of the indicators are analytically derived, consistent with whatever variant of the general linear model is being employed.

To the extent that someone is talking about factors "emerging" vs. being "posited" -- that's likely to have to do with the theory & aims that are motivating someone to *do* factor analysis.

Oftentimes, factor analysis is undertaken in an exploratory frame of mind. "Well, I've got a lot of data here ... Let's see if the factors that *emerge* in a factor analysis help me to understand what the relationship is between them..."

Other times, one is trying to validate a measure one is constructing. "I *posit* that these items will measure 'hierarchy-egalitarianism' & 'communitarianism-individualism.' Let's do factor analysis and confirm that the items are loading appropriately on two discrete factors."
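To make the emerge-vs.-posit distinction concrete, here is a minimal sketch -- simulated items, not anything from an actual CCP dataset -- of extracting two factors from an item correlation matrix (principal-axis style, via eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two simulated latent dispositions (stand-ins for "HE" and "IC")
he = rng.normal(size=n)
ic = rng.normal(size=n)

# Six observed "items": the first three load on HE, the last three on IC.
# Different noise levels keep the two factors' eigenvalues distinct.
items = np.column_stack(
    [he + rng.normal(scale=0.6, size=n) for _ in range(3)]
    + [ic + rng.normal(scale=0.9, size=n) for _ in range(3)]
)

# "Extract" two factors from the item correlation matrix.
R = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])

# Items 0-2 should load dominantly on one factor, items 3-5 on the other.
```

In the exploratory frame of mind one would inspect the loadings to see what structure *emerges*; in the confirmatory frame one checks them against the two-factor structure that was *posited* in advance.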

2. On whether a "worldview" disposition scale is measuring a "real thing in[side] people":

So this is related to the last point. The dispositions are latent or unobserved. I would like to form a measure of them using observable indicators.

Well, I can't just say: "here are my indicators! Whatever someone says in response to these items *is* individualism" etc.

Or I could just say that. But I'd be making a fool of myself if it turned out that the indicators were not correlated with each other at all -- or not correlated in the way & to the degree they ought to be if they are genuinely measuring *something*. If they don't display that covariance, they necessarily aren't measuring anything that's really inside of the people whose worldviews I'm purporting to measure.

So we need to figure out what sorts of statistical tests will "vouch for" (not really how it is conventionally put but screw convention) the indicators as measures of the worldview.

The vouching is of a couple of different forms.

One is "reliability": the covariance should be sufficiently high to enable acceptably precise measurement. Cronbach's alpha is often used for this. It is a coefficient between 0 & 1.0 that reflects inter-item correlation & the number of items (more items help to compensate for lower inter-item correlation, basically).

Some people think there is a "cutoff" of 0.7 or some such, but the real point is that your alpha will constrain the strength of the correlations you'll be able to detect between your latent variable & other variables of interest. If I have a low alpha, the imprecision of my measurement of the latent variable will doom me to find only weak correlations between my "worldview" measure & risk perceptions. An alpha of 0.5 makes a scale pretty useless; you can limp along okay w/ 0.6, etc.
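For concreteness, Cronbach's alpha is easy to compute straight from the standard formula. This sketch uses made-up data, not anything from the study:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 6-item scale: every item is one latent trait plus noise,
# giving an inter-item correlation of roughly 0.5.
rng = np.random.default_rng(1)
trait = rng.normal(size=1000)
items = np.column_stack([trait + rng.normal(size=1000) for _ in range(6)])

alpha = cronbach_alpha(items)  # ~0.85 here
```

The "more items compensate" point falls out of the formula: with the same inter-item correlation of ~0.5 but only two items, alpha drops to about 0.67.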

The other sort of vouching is "validity": that thing being reliably measured by my indicators better be what I think it is!

Factor analysis is a kind of validator -- I should expect my hierarchy-egalitarianism items to be discrete from my individualism-communitarianism items, etc., if they are "really" measuring two different dispositions.

There is also "external" validation: "If this *is* hierarchy, I'd expect it to have a modest positive correlation w/ this & a negative one with that," etc.
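Reliability and external validation interact, as noted above re: alpha. A toy illustration -- simulated numbers, not worldview data: two scales driven by the same latent disposition, but the noisier one attenuates the observed correlation with an external criterion:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
latent = rng.normal(size=n)  # the "real" disposition inside people

# A reliable 6-item scale and a sloppy 2-item one, both driven by
# the same latent disposition.
reliable = np.column_stack([latent + rng.normal(scale=0.7, size=n) for _ in range(6)])
sloppy = np.column_stack([latent + rng.normal(scale=2.0, size=n) for _ in range(2)])

# An external criterion the disposition genuinely correlates with
# (think: some risk perception).
criterion = latent + rng.normal(size=n)

r_reliable = np.corrcoef(reliable.mean(axis=1), criterion)[0, 1]
r_sloppy = np.corrcoef(sloppy.mean(axis=1), criterion)[0, 1]

# The unreliable scale shows a visibly weaker external correlation,
# even though both scales measure the same thing.
```

This is the sense in which a low alpha "dooms" you to weak observed correlations: measurement error in the scale washes out the relationship that really exists at the latent level.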

Most of the attempts to operationalize MD's "group-grid" w/ attitudinal indicators of latent worldview dispositions generated scales that were neither reliable nor valid.

I discuss this in the "Cultural Cognition as a Conception" essay linked a couple of posts back.

But most contemporary "group-grid" types are really hostile to measurement of any sort. It bothers them that they would be expected to make claims that one could test; if one makes claims & corroborates them by testing, that bothers them too.

They have succumbed to pseudoscience.

December 15, 2013 | Registered CommenterDan Kahan

"The first is -- are the scales genuinely measuring some real thing in people? The second is -- is the thing being measured something that helps explain, predict & prescribe?"

On the first, I'd agree the H-E scale is definitely measuring *something*. Based on my own perspective I'd interpret it as opportunity-egalitarianism versus outcome-egalitarianism, but that's a hypothesis that needs to be properly tested.

On the second, I'm not so sure. Explanations and predictions are going to be difficult if we're operating on the basis that it measures 'grid'-type hierarchical thinking when it doesn't. You have noted that various beliefs tend to go together, but have you ever explained why? Why, for example, should 'hierarchical' thinking result in assigning low risks to environmental issues? Is it any easier to explain opposition to environmental regulation by connecting opportunity-egalitarianism to (say) free market thinking, which opposes regulation on principle?

There are things that your research is useful for, I'm sure, but I can't think of any of them that flow from the labels applied to the axes.

"Attempts to operationalize a conception of group-grid "more faithful" to MD have failed mainly in the first sense."

OK. Why? Do we know? Is it, for example, that people are not consistently of one type or the other on every issue, but are high-grid on some issues and low-grid on others? If so, what issues? Is there any pattern? Or is it that there are different distinct *types* of 'grid' thinking and people might be high-grid with regard to (say) cooperative-grid organisation but low-grid with regard to the coercive-grid type of grid thinking?

From what Mary wrote, I got the impression that the attempts to operationalize the grid concept failed because few if any of the questionnaires were asking questions that would actually test for 'grid' thinking. Dake seems to have led the field down the wrong path with his initial set of questions. An exception she cites is the questions asked by Gunnar Grendstad, which seem far more directly related to the idea, although I'd say they only explored a narrow aspect of it. For example, you could also ask about whether people should trust experts, whether a system of qualifications and certifications was useful for tradesmen, whether students work better if they are 'graded' and rewarded with 'ranks' (like the belt colours given out in martial arts), whether senior managers should mix socially with the workforce or keep a degree of distance and separation, whether blue-collar workers are on a different social level to white-collar workers, and so on.

But if there is an actual reason why the group-grid categorisation doesn't work, I think it would be interesting to know why, and useful to drop terms descending from that theory, like 'hierarchical', as confusing.

"There is also "external" validation: "If this *is* hierarchy, I'd expect it to have a modest positive correlation w/ this & a negative one with that" etc."

Yes, that was Mary's point. If this *is* hierarchy, I'd expect hierarchists to be in favour of gun control and egalitarians to be in favour of gun liberalisation. Hierarchists would think guns ought to be restricted to those roles that require them (police, army, guard, pest control), which should only be open to those with the qualifications and background checks to ensure they are a suitable person (social respectability, seriousness), and that people should not be able to change roles (i.e. get gun qualifications and/or guns) except via approved channels. The consequences of not doing things this way are crime and social chaos.

Gun control advocates are absolute classic 'hierarchists' in their style of thinking, and yet your measure has them down as 'egalitarians'. Doesn't that 'invalidate' the measure?

December 16, 2013 | Unregistered CommenterNiV

@Dan - ok, good. I had a wrong impression (still working on understanding factor analysis). The "HE" and "IC" axes, then, are axes that really mean something for the given set of questions, and the names were applied by you?

So there is only one potential source of bias or error, and that is the questions themselves. If you fail to ask questions which draw a particular distinction, then factor analysis will never detect that distinction. As NiV said: "I got the impression that the attempts to operationalize the grid concept failed because few if any of the questionnaires were asking questions that would actually test for 'grid' thinking."

The bottom line is that the question set posed by a biased researcher will reflect only the distinctions that the researcher cares to make, that are ID-protective, and all researchers are biased to some extent. So what is the solution?

One solution is to combine the question sets generated by a group of people who disagree with each other. That's not as easy as it sounds. Who selects that group? Not the "main" researcher, that introduces bias. It would be great if we could get each member of congress to construct a question set, because that would, practically by definition, define the important questions. Maybe we could take many attempts at factoring, Dan's cultural space, MD's group-grid space, Nolan's libertarian/authoritarian social/economic space, etc. etc., try to understand them, and form a question set that tries to detect the distinctions implied by them. Then it would only have biases common to "researcher types".

I wonder if the number of questions has any effect? If you had three liberals and one conservative generating 10 questions each, and a 50-50 liberal/conservative split in the people answering the questions, would there be a liberal bias in the resulting factor analysis? Hmm - maybe not.

December 17, 2013 | Unregistered CommenterFrankL


1. Yes, the dimension labels are ones I imposed. In the case of HE & IC, I started with the labels -- the constructs -- & then developed scales that I was satisfied measured them.

2. On agreement-disagreement: The items necessarily reflect propositions on which people who vary in the relevant attitude being measured by the scale disagree. Otherwise, it wouldn't be a reliable scale (there wouldn't be the requisite covariances).

3. Sure, "researcher bias" can result in scales/factors that aren't very good. But the punishment would be the failure of the factors to explain, predict, or prescribe as well as some alternative. One can't will what one is measuring to work as a predictor in empirical studies (nor can one will items that reflect a false picture of what's inside people to form a valid scale). Some unbiased person should come up w/ something better than what I have -- I'd adopt it in a second, of course!

4. MD's constructs have stubbornly resisted scaling, either b/c she is not clear what she is talking about, or b/c what people think she means turns out to be something that doesn't exist. We've had success our way, though. For another approach, check out H. Jenkins-Smith's alternative "cultural worldview" scales. Or here if you can't download the article (which even I w/ a Yale University account can't; remind me never to submit an article to that journal!)

December 19, 2013 | Unregistered Commenterdmk38
