I was starting to formulate a contribution to some of the great points made in the discussion of the post on the Q&A on "cultural cognition" scales & figured I might as well post the response. I encourage others to read the comments--you'll definitely learn more from them than from what I'm saying here, but maybe a marginal bit more still if you read my contribution in addition to those reflections. And almost certainly more still if others are moved by what I have to say here to refine and extend the arguments being presented there. Likely too it would make sense for the discussion to continue in comments to this post, if there is interest in continuing.
1. Whence predispositions, and the revision of them
How does this theory then explain the change from one group identity to another? You don't argue that such change doesn't occur, I see, since you say that there's "no reason why individuals can't shift & change w/ respect to them" -- but why isn't there such a reason, since you've given a good phenomenological description of the group pressures brought to bear on individuals to keep them in the herd, so to speak?
I don't really know how people form or why they change the sorts of affinity-group commitments that result in the sorts of dispositions we can measure w/ the cultural worldview scales. My guess is that the answer is the same as the one one would give about why people form & change the sorts of orientations that are connected to religious identifications & ideological or political ones: social influences of various sorts, most importantly family & immediate community growing up; some possibility of realignment upon exposure at an impressionable period of life (more typically college age than adolescence or earlier) to new perspectives & new, compelling sources of affinity; thereafter usually nothing of interest, & lots of noise, but maybe some traumatic life experience etc.
Question I'd put back is: why is this important given what I am trying to do? I want to explain, predict, and formulate constructive prescriptions relating to conflict over science relevant to individual & collective decisionmaking. Knowing that the predispositions in question are important to that means it is important to be able to measure them. But it doesn't mean, necessarily, that I need a good account of whence the predispositions, or of change -- so long as I can be confident (as I am) that they are relatively stable across the population.
I suppose someone could say, "you should have a theory of the “whence & reformation of” predispositions b/c you might then be able to identify strategies for shaping them as a means of averting conflict/confusion over science" etc. But I find that proposition (a) implausible (I think I know enough to know that regulating formation of such affinities is probably not genuinely feasible) & more importantly (to me) (b) a moral/political nonstarter: in a liberal society, it is not appropriate to make formation of people's values & self-defining affinities a conscious object of govt action. On the contrary, it is one of the major aims of the "political science of democracy" (in Tocqueville's sense) to figure out how to make it possible for a community of diverse citizens to realize their common interest in knowing what's known without interfering with their diversity.
2. On change in how groups with particular predispositions engage or assess risks
And a related question would be: how do the group perceptions of risk themselves change over time? Ruling out mystical or telepathic bonds between group members, how does a change get started, who starts it, and how or where do those starters derive their perception of risk? (Consider, e.g., nuclear power.)
There is an account of this in "the theory."
The "cultural cognition thesis" says that "culture is prior" -- cognitively speaking -- "to facts." That is, individuals can be expected to engage information in a manner that conforms their understanding of facts to conclusions the cultural meanings of which are affirming to their cultural identities.
So when a putative risk source -- say, climate change or guns or HPV or nuclear or cigarettes -- becomes infused with antagonistic meanings, "pouring more information" on the conflagration won't stanch it; it will likely only inflame it.
Instead, one must do something that alters the meanings, so that positions are no longer seen as uniquely tied to cultural identities. At that point, people will not face the same psychic pressure that can induce them (all the more so when they are disposed to analytical, reflective engagement with information!) to reject scientific evidence on any position in a closed-minded fashion.
Will groups change their minds, then? Likely so; or really, there will likely be convergence among persons with diverse views, since like all members of a liberal market society they share faculties for reliably recognizing the best available scientific evidence, and at that point those faculties will no longer be distorted or disabled by the sort of noise or pollution created by antagonistic cultural meanings.
Examples? For ones in the world, consider discussions (of cigarettes, of abortion in France, of air pollution in US, etc.) in these papers:
Fear of Democracy: A Cultural Evaluation of Sunstein on Risk, 119 Harv. L. Rev. 1071 (2006) (with Paul Slovic, John Gastil & Donald Braman)
For an experimental “model” of this process, see our paper on geoengineering & the “two-channel” science communication strategy:
And for more still on how knowing why there is cultural conflict can help to fashion strategies that dispel sources of conflict & enable convergence, see
3. What about the “objective reality of risk” as opposed to the cultural cognition of it?
These questions themselves derive from a sense I have that the group-identity theory of risk perception is not wrong but incomplete, and the area in which it's incomplete is of major importance in addressing any theory of communication to do with risk -- that area is the objective reality of risk, as determined not by group adherence, and not by authority (even the authority of a science establishment), but rather by evidence and reason.
To start, of course the theory is “incomplete”; anyone who thinks that any theory ever is “complete” misunderstands science’s way of knowing! Also misunderstands something much more mundane—the limited ambition of what the ‘cultural cognition’ framework aspires to, which is a more edifying and empowering understanding of the “science communication problem,” which I think one can have w/o having much to say about many things of importance.
But the “theory” as it is does have a position, or least an attitude, about the “reality” of the knowledge confusion over which is the focus of the “science communication problem.” The essence of the attitude comes down to this:
a. Science’s way of knowing—which treats as entitled to assent (and even that only provisionally) conclusions based on valid inference from valid empirical observation—is the only valid way to know the sorts of things that admit of this form of inquiry. (The idea that things that don’t admit of this form of inquiry can’t be addressed in a meaningful way at all is an entirely different claim and certainly not anything that is necessary for treating science’s way of knowing as authoritative within the domain of the empirically observable; personally, I find the claim annoyingly scholastic, and the people who make it simply annoying.)
b. People, individually & collectively, will be better off if they rely on the best available scientific evidence to guide decisions that depend on empirical assumptions or premises relating to how the world (including the social world) works.
c. In the US & other liberal democratic market societies—the imperfect instantiations of the Liberal Republic of Science as a political regime—people of all cultural outlooks in fact accept that science’s way of knowing is authoritative in this sense & also very much want to be guided by it in the way just specified.
d. Those who accept the authority of science & who want to be guided by it will necessarily have to accept as known by science much, much more than they could ever hope to comprehend in a meaningful sense themselves. Thus their prospects for achieving their ends in these regards depend on their forming a reliable ability to recognize what’s known to science. The citizens of the Liberal Republic of Science have indeed developed this faculty (and it is very much a faculty that consists in the exercise of reason; it is an indispensable element of “rationality” to be able reliably to recognize who knows what about what).
e. The “science communication problem” is a consequence of conditions that disable the reliable exercise of this faculty. Those conditions involve the entanglement of empirical propositions with antagonistic cultural meanings – a state that interferes with the normal convergence of the culturally diverse citizens of the Liberal Republic of Science on what is known to science.