A thoughtful person who had read some CCP studies asked me a really good question about the relationship between “in group” dynamics and cultural cognition.
The behavioral and cognitive influences of being affiliated with one group—and unaffiliated with another, competing one—have been a central focus of social psychology for decades. This research pervasively informs our study of cultural cognition.
But neither I nor my collaborators have offered a focused and systematic account that situates the mechanisms we are observing in relation to that more general body of work. We should do that. My response to the query gestures toward such an account.
Here is the question:
I've been thinking about [the studies and our previous correspondence], and perhaps a simple 'in-group/out-group' model might explain a lot. The starting point is that the problem is so complicated that no layman is going to master the details in their spare time. Most people who work on it full time only understand a part of it! I'm certainly in the latter category. So people reach their conclusions based on advice that feels right. . . .
[F]olks heavily weight information by who delivers that information. At first I thought the selectivity was a symptom - e.g. listen to messages you want to hear. But listening to people you trust sounds a lot more believable, and a lot less evil. I do it myself. . . .
Do I have this right? Or am I off in the weeds?
The interpretation [you] propose-- that cultural cognition reflects a tendency of ordinary people to weight the views of members of some important "in group" when forming assessments of what science knows -- seems right to me. But I would want to add more specificity to it, both to make it more reliable in explaining or predicting "who believes what about what" and to help assess how we should feel about this dynamic.
Here are some reactions:
1. The impact of "in group" dynamics on belief and attitude formation is known to be very substantial. But it is also known to comprise many diverse mechanisms.
Some are, essentially, "informational." E.g., people might be exposed disproportionately to the views of those with whom they have the most contact, and so, if they are effectively treating the views of others as "evidence" of what is true, they will end up with a sample biased toward crediting the position of those who share their views.
Others are "social." Individuals might be unconsciously motivated to form views that fit the ones held by others with whom they have important connections in order to avoid the reputational consequences of holding "deviant" opinions. This is identity-protective cognition.
Indeed, there can be an interaction between these influences. E.g., individuals might stifle expression of "deviant" views in order to avoid reputational consequences, thereby denying others in the group evidence from which they might infer both that the dominant view is incorrect and that they will not be judged negatively for holding the alternative position.
2. There is also the question of which "in groups" matter.
In "lab" settings, one can generate "in group" effects in completely contrived & artificial ways (by making participants wear different colored "badges," e.g.).
But outside the lab, things can't be so plastic; we are all members of so many "in groups" (graduates of particular universities, residents of particular cities, fans of particular athletic teams, members of professions, etc.) that the "in group" effect would get washed out by noise if all groups mattered in all contexts for all things to the same extent!
3. The "cultural cognition" framework, then, tries to be specific on these matters.
Using a theory associated with anthropologist Mary Douglas & political scientist Aaron Wildavsky, it tries to specify what the relevant in-group affiliation is & what the mechanisms are through which it influences the formation of perceptions of risk and like facts, at least among ordinary members of the public.
The "cultural worldview" scales are a means of measuring the degree of affinity to groups that are believed to be the ones of consequence. We use experiments to test hypotheses about the diverse mechanisms that connect membership in those groups to risk perceptions.
4. I’m confident that the mechanisms we identify with cultural cognition make both an informational and a social contribution to individuals’ apprehension of decision-relevant science.
In fact, I think the informational contribution is likely of foundational importance. As you say, individuals need to accept as known by science more than they can possibly comprehend on their own. Accordingly, they develop an expertise in knowing who knows what about what—one the reliability of which will be higher when they use it inside of affinity groups, whose members they can interact with more efficiently and read more reliably.
Usually, too, these groups, all of which have their fair share of informed, educated, and diversely experienced people who make it their business to know what’s known, guide their individual members toward the best available evidence relevant to their well-being (groups that didn’t do that reliably wouldn’t be of consequence in people’s lives for long!), and thus promote convergence on decision-relevant science among culturally diverse people.
But under unusual conditions, positions on risks or other facts addressed by decision-relevant science can become attached to social meanings that make them emblematic of membership in, and loyalty to, one’s group. When that happens, the social influence component of in-group affiliation will be dominant and will in fact frustrate convergence of diverse groups on the best available evidence—to the detriment of their individual members’ collective well-being.
That’s what drives conflicts over climate change, nuclear power, gun control, the HPV vaccine, etc. With respect to those kinds of issues—ones attended by antagonistic meanings—individuals are aggressively, albeit unconsciously, fitting their assessments of evidence to views that predominate in their group in a manner that cannot be explained in a satisfactory way w/ a model that sees the effect as "informational" only.
a. One powerful source of evidence for this, I think, comes from studies in which we manipulate the *content* of the information and hold the *messenger* constant. In Cultural Cognition of Scientific Consensus, subjects recognized the expertise of a highly credentialed scientist conditional on whether the position he espoused was consistent with the one that predominates in their group. At that point, they can't be seen as "choosing" to credit an in-group member on a technical matter -- the scientist is the only information source on hand, and they are crediting him as someone "who knows what he is talking about" or not depending on whether doing so helps them persist in holding the position that predominates in their group.
Or consider They Saw a Protest. There we did an experiment in which individuals viewed a *digital film* of a political protest & reported seeing acts of intimidation or, alternatively, noncoercive speech conditional on whether the conclusion -- "people who advocate X are violent/reasoned" -- connected them to their groups. No in-group member was telling them anything -- but a form of information processing that was posited to arise from the same mechanisms that are at work in conflicts over risk perception.
b. An even more powerful piece of evidence comes from experiments in which we show that the tendency to form group-congruent beliefs originates not in crediting any information source but in a biased use of the sort of reasoning dispositions & capacities that one would have to use to make sense of technical information oneself.
We've done two experiments like that, both of which are in the nature of follow-ups to our study of how scientific literacy enhances polarization on climate change. One of these experiments showed that "cognitive reflection," a disposition to use reflective, analytic reasoning as opposed to emotional, heuristic-driven reasoning, accentuates ideological polarization when people are assessing a complex conceptual report relating to empirical data.
The other shows that subjects high in Numeracy, a capacity to reason with quantitative data, use that capacity selectively when drawing inferences from data on an ideologically controversial topic (gun control). In these cases, again, no one is deferring to a trusted in-group member on a technical matter (I've attached a draft paper in which I describe the study; comments welcome!). People are reasoning for themselves, and the ones we would recognize as the best reasoners are the ones displaying the in-group effect to the greatest extent.
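To make concrete the kind of quantitative inference at issue, here is a minimal sketch of a covariance-detection problem of the general sort used in Numeracy research. The numbers are hypothetical illustrations, not the study's actual stimulus:

```python
# A hypothetical 2x2 outcome table: did an intervention "work"?
#
#                    improved   got worse
#   intervention:       223         75
#   no intervention:    107         21

improved_with, worse_with = 223, 75
improved_without, worse_without = 107, 21

# The intuitive (and wrong) strategy compares raw counts in one column.
# The correct strategy compares improvement *rates* across the rows.
rate_with = improved_with / (improved_with + worse_with)
rate_without = improved_without / (improved_without + worse_without)

print(f"rate with intervention:    {rate_with:.2f}")    # 0.75
print(f"rate without intervention: {rate_without:.2f}") # 0.84
# The no-intervention group actually fared better -- seeing that requires
# the ratio comparison that high-Numeracy subjects are able to perform.
```

The point of such a task is that the raw counts invite one answer and the rates another, so it separates heuristic from genuinely quantitative reasoning.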
c. I think it makes perfect sense, sadly, that membership in the sorts of groups that share the "worldviews" we measure would generate a "social" and not merely an informational effect on belief formation.
What we are measuring are outlooks that likely will figure in the bonds of people who are intimately connected with one another. The benefits people derive from such associations are immense. The formation of views that could estrange people from those with whom they share those ties, then, could be devastating.
Meanwhile, for ordinary individuals at least, the cost of forming mistaken understandings on the science of things like climate change is essentially zero. Nothing they do in their individual lives-- as consumers, as voters, as participants in public discourse -- will have a material impact on risk or on policymaking; they don't matter enough as individuals to have that impact. So nothing they do in those capacities based on a mistake about the science can affect the risk they or anyone else they care about faces.
Thus, with the cost of being out of line w/ group positions being high, and the cost of being out of line w/ decision-relevant science on societal risks being low or zero, I think rational people will form patterns of engaging information that more reliably connect them w/ their group than with the best available evidence. Moreover, the ones who are better at reasoning -- the ones who are higher in science literacy, higher in cognitive reflection, higher in Numeracy-- will be all the more "successful" in using their reason this way.
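The cost asymmetry driving this argument can be put in simple expected-cost terms. All numbers below are hypothetical stand-ins for the qualitative claim in the text:

```python
# Illustrative expected-cost comparison for an ordinary individual
# choosing a belief-forming strategy. Every number here is a
# hypothetical placeholder, not an empirical estimate.

p_pivotal = 1e-7          # chance one's vote or choice changes the policy outcome
harm_if_wrong = 1e6       # harm (in utility units) a mistaken policy would inflict
cost_of_deviance = 100.0  # reputational cost of holding the group-"deviant" view

# Expected personal cost of holding a mistaken view of the science:
expected_cost_mistake = p_pivotal * harm_if_wrong  # ~0.1

print(expected_cost_mistake < cost_of_deviance)  # True
# Tracking the group's position is the individually rational strategy,
# even though it is collectively disastrous.
```

Under any remotely similar parameterization the inequality holds, which is why the conclusion does not depend on the particular numbers chosen.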
5. It is on this basis that I would react to the suggestion that connecting cultural cognition to an "in group" effect makes it sound more benign ("less evil").
I think what I've described is very malign-- very evil! The sorts of in-group effects here generate a predictable pressure -- one mediated by our own capacity for individual rationality -- that poses a tremendous threat to our collective well-being.
The entire spectacle, moreover, assaults and insults our reason-- the quality that marks our species as worthy of awe -- and mocks our fitness for self-government-- the form of political life that our special status as reasoning beings compels that we be afforded!
6. I'd be in despair, really, except for one thing: I think we can use our reason, too, to address the problem. The problem -- the denigration of our reason, and the resulting breakdown of processes of enlightened collective action -- is one that the members of all these groups have a stake in solving, since it puts them all at risk.
Moreover, the problem is one that admits of a solution. The sort of polarization we see on issues like climate change, nuclear power, the HPV vaccine, guns, etc. is not the norm. Usually the strategies we use, including the informational benefit we get from trusting those with whom we have deep affinities, bring us into convergence. The pathology that generates this very bad, very unusual state occurs when something very weird happens -- when a policy-relevant fact that admits of scientific investigation somehow becomes a badge of membership in & loyalty to one of these affinity groups, the condition that generates the malign social in-group effect I have described.
That is not a problem in us, in our reasoning capacity; it is a problem in our science communication environment-- the common deliberative space in which we exercise our normal and normally reliable faculties for recognizing what's known to science.
Protecting the science communication environment -- thereby enabling culturally diverse people, who of course look to different sources to certify what is known, to converge on the best available evidence -- is exactly what the science of science communication is about.