"Public comprehension of science--believe it or not!": the public and decision-relevant science, part 1
Gave talk yesterday at a meeting of the Public Interfaces of the Life Sciences Initiative of the National Academy of Sciences. The aim of the Initiative is to identify various avenues—in education, in political life, and in civil society—for enlarging the role that the life sciences play in everyday life.
The Initiative is typical of the leadership role the NAS has fittingly assumed in integrating the practice of science with the scientific study of how ordinary citizens come to know what is known by science—a commitment on the Academy’s part that was highlighted in its Science of Science Communication Sackler colloquium in the Spring of 2012.
My talk was on how the public thinks about decision-relevant science. This is part 1 of 2. But slides for whole thing here.
As is well-known to readers of this blog, I believe that doing and communicating science are very different things, even when the sort of science being done is the science of science communication. Indeed, I believe the “science communication problem”—the persistent failure of the availability of valid science to quiet public controversy over risks and other policy-relevant facts to which that science speaks in a compelling way—is a consequence of our society's failure to devise practices and construct institutions that recognize fully the significance of the communicating-doing distinction.
To effectively communicate this point, I thought I would demonstrate what strikes me—as someone who only does the science of science communication—as a clever way to communicate what I know to the public.
I told my audience that I would present the first part of my remarks in the style of a “reality tv” program or the like entitled, “Public comprehension of science—believe it or not!,” a show dedicated to sharing with viewers instances of the myriad “ ‘strange but true’ characteristics of the public’s knowledge of what science knows.”
This week’s episode (I told them) would feature three stories:
1. Evolution: “believing,” “disbelieving” & understanding
About half of the general public in the U.S. does not “believe” that humans “evolved” from other animal species. They “believe” instead that humans were created, as is, by God.
This is not surprising news to regular viewers of this program—or likely to anyone else. We are reminded of this fact at least once a year by Gallup, which has been polling Americans about their “belief” in evolution—and reporting more or less the same result—for many, many years.
The “strange but true” thing is this: the half of the U.S. population that does “believe” in evolution is no more likely than the half that doesn’t to be able to pass a high school biology test on the rudiments of how evolution works.
There is, researchers have found again and again, no correlation between whether someone says they “believe” in evolution and their understanding of the concepts of “natural selection,” “genetic variance,” and “random mutation”—the basic elements of the dominant, “modern synthesis” position in the science of evolution.
In fact, distressingly few of either the believers or disbelievers have an accurate comprehension of these dynamics.
And there’s another curious thing about “belief” & “disbelief” in evolution.
It’s definitely possible to teach people the basic elements of the modern synthesis, which are remarkably and elegantly simple. The evidence that supports them is reasonably straightforward too.
But imparting such understanding also has zero effect on the likelihood that those who then demonstrate basic comprehension of evolution say they “believe” in it!
Strange but true!
2. Climate change risk perceptions: “fast” & “slow”
This week’s second story involves public comprehension of climate science.
Even many of those who accept that climate change is happening don’t understand why it is. That was the conclusion of a very impressive 1992 study, which found that those members of the public who believed climate change was occurring tended to attribute it to holes in the ozone layer and other irrelevant phenomena.
When researchers re-did the study in 2009, the public was still woefully ignorant of elementary climate science. They found, of course, that a great many members of the public didn’t accept that global temperatures were increasing as a result of human CO2 emissions.
But even among the segment of the public who said they did accept this, the researchers found myriad, remarkable misunderstandings, including the belief that aerosol spray cans were one source of the problem and that cleaning up toxic waste sites would help to ameliorate it.
And here’s another thing.
The public tends to over-rely on cognitive heuristics in forming perceptions of risk. This is the theme, of course, of Daniel Kahneman’s Nobel Prize winning work, and his excellent book Thinking, Fast and Slow.
Various commentators who draw on Kahneman’s work (but interestingly not Kahneman himself, to my knowledge) assert that “bounded rationality” of the sort documented in this work explains why members of the general public don’t universally share climate scientists’ concern about the dangers that climate change poses to human wellbeing.
But social science evidence has established that those members of the public who are the most science literate, and who score highest on measures of the disposition to use reflective modes of reasoning (the “slow” kind, in Kahneman’s typology), are in fact the most culturally polarized on climate change risks!
As members of the public become more science literate, more numerate, and the like, they don’t converge on what climate scientists know. They just become more reliable “indicators” of what people who hold particular cultural values believe.
Believe it or not . . . .
3. Antibiotics: consensus, scientific & public
The last story for this week concerns antibiotics.
There is really no meaningful public controversy—cultural or otherwise—over whether someone who is not feeling well should seek medical treatment, and should take antibiotics if his or her physician prescribes them.
But 50% of the U.S. public believes that antibiotics kill viruses and not just bacteria.
This is a consistent finding in studies that administer the NSF’s “Science Indicators,” the standard “science literacy test” used to measure what members of the public know about basic science—not just in the U.S. but globally.
Now in fact, the question is a “true-false” one, and so one might conclude that members of the U.S. public are doing no better than chance in their responses here.
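To see why a 50% correct rate on a true-false item is indistinguishable from guessing, here is a toy simulation (the sample size is hypothetical, purely for illustration; this is not the NSF data):

```python
import math
import random

random.seed(0)

# Hypothetical illustration: if every respondent guessed at random on a
# true-false item, what share would answer "correctly"?
N = 1_000  # hypothetical sample of survey respondents
correct = sum(random.random() < 0.5 for _ in range(N))
share = correct / N

# 95% margin of error for a proportion under the guessing hypothesis p = 0.5
moe = 1.96 * math.sqrt(0.5 * 0.5 / N)
print(f"share correct under pure guessing: {share:.3f} (±{moe:.3f})")
```

An observed 50% sits squarely inside that guessing band, which is why the item, on its own, tells us nothing about whether anyone actually knows what antibiotics do.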
But interestingly, U.S. respondents score consistently higher than members of the public from other countries, including Japan, Russia, South Korea, and the EU nations. So really, we here in the U.S. “know more” science than they do.
Indeed, members of the public in the US tend to score higher on lots of items on the NSF science literacy test. It really is tempting to say that the US is more science literate than the rest of the world!
Except that members of the rest of the world do so much better than we do on the NSF indicator item that asks whether humans evolved from other animals . . . .
But you know what that actually signifies? That the NSF item on “evolution” isn’t measuring the same thing as the rest of the test. Those who correctly answer 90+% of the other questions are only slightly more likely than chance to answer the evolution question correctly.
Actually, that shouldn’t surprise you at this point: it follows, almost logically, from the first story in this show, which related that there is really no relationship between saying one “believes” in evolution and being able to form an accurate scientific understanding of evolutionary theory.
Social scientists have demonstrated that the “evolution” question is actually not measuring the same “science comprehension” quality in people who take the NSF science literacy test as the other items. It is measuring their religiosity.
Yet proposals to exclude the evolution question from measures of “science literacy” in studies that correlate science literacy with other attitudes tend to provoke significant controversy. Critics of such proposals insist the item be included even though it indisputably reduces the precision of the science literacy score as a measure of a latent science comprehension aptitude or disposition.
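The "reduces precision" point can be made concrete with a toy simulation (all numbers hypothetical; this is a sketch of the measurement logic, not an analysis of the NSF data): nine items driven by a latent comprehension trait, plus one item driven instead by religiosity. A total score that includes the off-trait item tracks the latent trait less well than one that excludes it.

```python
import random
import statistics

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

N = 20_000
trait = [random.gauss(0, 1) for _ in range(N)]  # latent science comprehension
relig = [random.gauss(0, 1) for _ in range(N)]  # independent religiosity

def item(driver, noise=1.0):
    # Binary item: probability of a correct answer rises with the driver.
    return [1 if d + random.gauss(0, noise) > 0 else 0 for d in driver]

good_items = [item(trait) for _ in range(9)]     # driven by comprehension
evo_item = item([-r for r in relig])             # driven (negatively) by religiosity

score_without = [sum(it[i] for it in good_items) for i in range(N)]
score_with = [score_without[i] + evo_item[i] for i in range(N)]

# The total score tracks the latent trait less well once the
# religiosity-driven item is folded in.
print(f"corr(score, trait) without evolution item: {pearson(score_without, trait):.3f}")
print(f"corr(score, trait) with evolution item:    {pearson(score_with, trait):.3f}")
```

The effect is modest with one bad item among ten, but the direction is the whole point: an item that measures a different latent quality adds noise, not information, to the composite score.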
Sad but true. . . .
Next time: Five theses on public understanding and decision-relevant science, each of which can be illustrated using the three stories from this week’s episode of “Public Comprehension of Science—Believe it or Not!”
Not to give anything away, but if you think that what I’ve told you so far means (or even means that I think) the public is irrational, you are very wrong.
Wrong about what it means, and wrong about what public rationality and its relationship to decision-relevant science consist in.