One of the “models” or metaphors I use to try to structure my thinking about (and testing of conjectures on) public conflict over decision-relevant science attributes that problem to a “polluted science communication environment.” This picture helps to sharpen one’s understanding not only of what the “science communication problem” consists in and what its causes are, but also of the identity and logic of remedies for it.
1. The science communication environment. People need to recognize as known by science many more things than they could understand or corroborate for themselves. They generally do this by immersing themselves in affinity groups—ones whose members share their basic outlooks on life, and whom they thus get along with and understand, and whose members can be relied upon to concentrate and transmit valid scientific insights (e.g., “bring your baby to the pediatrician—and not the faith healer!—if he or she becomes listless and develops a fever!”). These diverse networks of certification, then, can be thought of as the “science communication environment” in which culturally diverse citizens, exercising ordinary science intelligence, rationally apprehend what is known to science in a pluralistic society.
2. A polluted science communication environment. This system for (rationally!) figuring out “who knows what about what” breaks down, though, when risks or like policy-facts become entangled in contentious cultural meanings that transform them, in effect, into badges of membership in and loyalty to opposing groups (“your pediatrician advised you to give your daughter the HPV vaccine? Honey, you need to get a new doctor!”). At that point, the psychic stake that individuals have in maintaining their standing in their group will unconsciously motivate them to adopt modes of engaging information that more reliably connect them to their groups’ position than to the best available scientific evidence. These antagonistic cultural meanings are a form of pollution or contamination of ordinary citizens’ science communication environment that disables (quite literally!) the rational faculties by which individuals reliably apprehend collective knowledge.
3. Two remedial strategies. We can think of two strategies for responding to a polluted science communication environment. One is to try to decontaminate it by disentangling toxic meanings from cultural identities, and by adopting processes that prevent such entanglements from occurring in the first place.
Call this the mitigation strategy. We can think of “value affirmation,” “cultural source credibility,” “narrative framing” and like mechanisms as instances of it. There are others too, including systemic or institutional responses aimed at forecasting and avoiding the entanglement of decision-relevant science in antagonistic meanings.
A second strategy is adaptation. Adaptive devices counteract the consequences of a contaminated science communication environment not by dispelling it but rather by strengthening the cognitive processes that are disabled by it—or by activating alternative, complementary cognitive processes that help to compensate for such disablement.
Again, there are a variety of examples. E.g., satire uses humor to lure individuals into engaged reflection with evidence that might otherwise trigger identity-defensive resistance. Self-affirmation is similarly thought to furnish a buffer against the anxiety associated with critically re-examining beliefs that have come to symbolize allegiance to one or another opposing cultural style.
Or consider curiosity. Curiosity is the motivation to experience the pleasure of discovering something new and surprising. In this state (I conjecture), the defensive processes that block open-minded engagement with valid evidence that challenges existing identity-congruent beliefs are silenced.
We could thus see efforts to cultivate curiosity as a character disposition or to concentrate engagement with decision-relevant science in locations (e.g., museums or science-entertainment media) that predictably excite curiosity as a way to neutralize the detrimental impact of the entanglement of risks and other policy-relevant facts with antagonistic cultural meanings.
I’m sure there are more devices and techniques that operate this way—that is, that operate to rehabilitate disabled faculties or activate alternatives within a polluted science communication environment. One of the aims of the science of science communication, as a “new political science,” should be to identify them and learn how to deploy them.
4. Pragmatic “scicomm environmental protection.” Just as mitigation and adaptation are not mutually exclusive strategies for responding to threats to the natural environment, so I would argue that mitigation and adaptation of the sort I’ve just described are not mutually exclusive responses to a polluted science communication environment. We should be empirically investigating both as part of the program to identify the most reliable means of repelling the threat that a polluted science communication environment poses to the Liberal Republic of Science.