Key Insight
The NRA gets science communication.
In fact, it understands something that many groups that at least purport to be committed to promoting constructive public engagement with the best available scientific evidence don’t.
Of course, it uses what it understands for a purpose very distinct from promoting such engagement. Indeed, it uses its knowledge about how diverse, ordinary people ordinarily come to know what they know about decision-relevant science in a manner that effectively impedes their convergence on evidence essential to their common welfare.
This makes the NRA a truly evil entity—a kind of syndicalist element subversive of the Constitution of the Liberal Republic of Science.
But one can still actually learn something from seeing what it knows and what it does.
The point the NRA gets—and that many other groups that I think have admirable aims don’t, and that makes them tend to do a bad job—is that effective communication of decision-relevant science depends on the quality of the science communication environment.
The science communication environment is the sum total of cues, influences, and processes that enable people to recognize as known by science so many more things than they could possibly form a meaningful understanding of for themselves. The number of things that fit into that category is immense—from the contribution that antibiotics make to treating diseases to the validity of the modern telecommunications technologies they rely on to transmit data, from the reliability of their vehicles' GPS systems to the public health benefits of pasteurization of raw milk, from the nontoxicity of pressed wood products manufactured subject to state and federal formaldehyde limits to the nutritional value of food products (massive amounts of them in the US) that are prepared with GM technology.
One of the most vital constituents of the science communication environment is the existence of authoritative networks of certification.
I’m talking, really, just about the role played by the utterly ordinary, everyday communities individuals inhabit—the ones that comprise their neighbors, their friends, their trusted coworkers, and the myriad professionals they rely on, from doctors to auto mechanics to accountants to insurance adjusters.
These communities are flush with reliable, valuable guidance that individuals can use to determine what’s known to science. Of course, they are also coursing with bogus information—unsupported and unsupportable claims about the dangers of everyday products (“watch out—cell phone radiation causes brain tumors!”) and absurd claims about health remedies (“ach—don’t do chemotherapy for your breast cancer; yoga will do the trick!”).
People sort out one from the other—again, not because they are experts on the claims being made about what science knows, but because they are experts at something else: figuring out who actually knows what they are talking about, and who can be relied upon to transmit the best available evidence in a reliable and accurate manner.
This is the key to understanding why the transmission of knowledge tends to have a culturally insular quality to it.
The communities of certification people tend to resort to in order to orient themselves appropriately with respect to decision-relevant science are ones made up of people who share basic outlooks on the good life. People enjoy spending time with people like that and tend to form important projects with them. They can read those people more easily—and distinguish the genuinely knowledgeable from the bullshitters among them more readily—than they can when they are engaging people whose cultural orientation is very different from their own.
We live in a society that tolerates and celebrates cultural diversity (a fact that is actually essential to the progress of scientific discovery), and therefore the number of communities people rely on to perform this certification function is large.
But that’s generally not a problem. These communities are all in touch with what science knows. They all generally lead their members to the same conclusions.
Indeed, if there were a community that consistently misled its members on what science knows, the members of that group, given how important decision-relevant science is to their own well-being, wouldn’t last very long.
Nevertheless, every once in a while a risk or other policy-relevant fact becomes entangled in antagonistic cultural meanings that convert positions on it, in effect, into badges of membership in, and loyalty to, opposing cultural groups.
When that happens, members of diverse cultural groups won’t converge on the best available evidence. Instead—using the very same normal, and normally reliable, cues to ascertain what’s known to science—they will polarize.
The stake that any ordinary person has in protecting the status of, and his or her standing in, one of these groups tends to exceed the significance of the stake that person has, as an individual, in forming scientifically informed personal beliefs. As a result, individuals in this circumstance will predictably engage information in a manner more reliably geared to forming beliefs that match the position identified with their group than beliefs supported by the best available scientific evidence.
Indeed, in these circumstances, individuals endowed with the capacities and dispositions most strongly associated with science comprehension will use those abilities in an opportunistic fashion: to conform the evidence they encounter, or actively seek out, to the position that is predominant in their cultural group.