Weekend update: the anti-"fact inventory conception of science literacy" movement is gaining ground on Tea Party & Trascism [Trump+Fascism]; to eclipse them, the only thing it needs is a catchier name!
A friend pointed me toward this really interesting article:
The bigger issue, however, is whether we ought to call someone who gets those questions right “scientifically literate.” Scientific literacy has little to do with memorizing information and a lot to do with a rational approach to problems....
[T]he interpretation of data requires critical thinking.... Our schools don’t train people to be vigilant about avoiding errors such as confounding correlation and causation, however, nor do they do a good job of rooting out confirmation bias or teaching the basics of statistics and probabilities. All of this leads to the propagation of a lot of nonsense in the press and internet, and it leaves people vulnerable to the flood of “facts.”
It’s not possible for everyone—or anyone—to be sufficiently well trained in science to analyze data from multiple fields and come up with sound, independent interpretations. I spent decades in medical research, but I will never understand particle physics, and I’ve forgotten almost everything I ever learned about inorganic chemistry. It is possible, however, to learn enough about the powers and limitations of the scientific method to intelligently determine which claims made by scientists are likely to be true and which deserve skepticism. . . . Most importantly, if we want future generations to be truly scientifically literate, we should teach our children that science is not a collection of immutable facts but a method for temporarily setting aside some of our ubiquitous human frailties, our biases and irrationality, our longing to confirm our most comforting beliefs, our mental laziness. Facts can be used in the way a drunk uses a lamppost, for support. Science illuminates the universe.
For sure I couldn't have said this better. Anyone can confirm this for him- or herself by reviewing the various posts I've written criticizing the "fact inventory" conception of science literacy and defending an "ordinary science intelligence" alternative that features the types of critical reasoning proficiencies essential to recognizing and making use of valid scientific evidence.
Maybe I'm jumping the gun, but I hope this thoughtful and reflective article is a harbinger of more of the same, and the beginning of a wider discussion of this problem.
If I have any quibble with Teller's argument, though, it is over what the nature of the problem actually is.
Teller starts with the premise that the U.S. public has a poor comprehension of science and attributes this to the "fact inventory" conception of science literacy.
She might be right-- but I'm not sure.
I'm not sure, that is, that the American public's science comprehension is as poor as she assumes it is. The reason I'm not sure is that I don't think we've been assessing the general public's science comprehension with a valid measure of that capacity -- one that features critical reasoning proficiencies rather than a "fact inventory"!
Developing a public science comprehension measure focused on the reasoning proficiencies that Teller convincingly emphasizes has been one focus of CCP research over the last few years. The progress made so far in that effort is reflected in the current version, "2.0," of the "Ordinary Science Intelligence" assessment test (Kahan in press).
As discussed in previous posts, OSI_2.0 doesn't try to certify respondents' acquisition of any set of canonical "factual" beliefs.
Instead, it uses quantitative and critical reasoning items that are intended to assess a latent or unobserved disposition suited for recognizing and making appropriate use of valid empirical evidence in one's "ordinary," everyday life as a consumer, a participant in today's economy, and as a democratic citizen.
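Measuring a latent disposition from a set of test items is conventionally done with item response theory. As a minimal sketch of the idea (the item parameters below are made up for illustration and are not OSI's actual ones), here is a two-parameter logistic (2PL) model with a grid-search maximum-likelihood estimate of the latent trait:

```python
import math

def two_pl_prob(theta, a, b):
    """Probability of a correct response under a 2PL item response model:
    discrimination a, difficulty b, latent ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses, items, grid=None):
    """Maximum-likelihood estimate of theta over a coarse grid.
    responses: list of 0/1 item scores; items: list of (a, b) parameters."""
    if grid is None:
        grid = [g / 100.0 for g in range(-400, 401)]  # theta in [-4, 4]
    def log_lik(theta):
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = two_pl_prob(theta, a, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll
    return max(grid, key=log_lik)

# Hypothetical item parameters (discrimination, difficulty) for five items
items = [(1.2, -1.0), (1.0, -0.3), (1.5, 0.2), (0.8, 0.8), (1.3, 1.5)]
theta_hat = estimate_ability([1, 1, 1, 0, 0], items)
print(theta_hat)
```

The point of the sketch is only that what gets scored is not a count of "facts" known but a position on an unobserved reasoning-proficiency continuum inferred from the response pattern.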
Since at least 1910 (my memory is hazy for events earlier than that), when Dewey published his famous "Science as Subject-Matter and as Method," the idea that science pedagogy should be focused on cultivating the distinctive reasoning proficiencies associated with making valid inferences from reliable observations has exerted a powerful force on the imaginations and motivations of a good number of educators and scholars (today I think of Jon Baron (1993, 2008) as the foremost champion of this view).
One thing they've learned is that imparting this sort of capacity is easier said than done!
But in any event, they are right -- as is Teller -- that this kind of thinking disposition is the proper object of science education.
The much more pedestrian point I find myself making now & again is that we really don't have a good general public measure of this capacity -- and so aren't even in a good position to figure out how well or poorly we are doing in equipping citizens with it.
Necessarily, too, without such a good measure, we won't be as smart as we ought to be about what contribution defects in science comprehension are making, if any, to public controversies over climate change, nuclear power, the HPV vaccine, and other issues that turn on decision-relevant science.
Teller cites the 2012 CCP study that found that higher science literacy is associated with greater polarization, not less, on climate change risks (nuclear power ones too).
I think that study helps to show that this sort of conflict is not plausibly attributed to defects in science comprehension. Precisely b/c I and my collaborators agree with Teller that a "fact inventory" conception of "science literacy" is defective, we used a science comprehension measure-- OSI_1.0-- that combined certain NSF Indicator "basic fact" items with a Numeracy battery, which has been shown to be highly effective in measuring the capacity of ordinary members of the public & others to reason well with quantitative information.
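The "greater polarization among the most science-comprehending" pattern is, in effect, an interaction: the left-right gap in risk perception widens as comprehension goes up. A hedged sketch with purely synthetic data (none of these numbers come from the actual study) shows the shape of that result:

```python
import random

random.seed(1)

def simulate_respondent(ideology, osi):
    """Synthetic perceived-risk score: the ideology effect (polarization)
    grows with science comprehension (osi). Illustrative numbers only.
    ideology: -1 = left, +1 = right; osi: 0 (low) to 1 (high)."""
    baseline = 5.0
    polarization = ideology * (0.5 + 2.0 * osi)
    return baseline - polarization + random.gauss(0, 0.5)

def mean_gap(osi):
    """Left-right gap in mean perceived risk at a given comprehension level."""
    left = [simulate_respondent(-1, osi) for _ in range(500)]
    right = [simulate_respondent(+1, osi) for _ in range(500)]
    return sum(left) / len(left) - sum(right) / len(right)

low_gap = mean_gap(osi=0.1)
high_gap = mean_gap(osi=0.9)
print(low_gap, high_gap)  # the gap is wider at high comprehension
```

If deficient comprehension were driving the conflict, the gap would shrink at high comprehension; the study found the opposite pattern.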
And the same is true of people who score highest even on the reasoning-proficiency-centered OSI_2.0:
But the few who actually can reliably identify the causes and consequences of climate change (as measured by version 1.0 of the "Ordinary Climate Science Intelligence" test, an assessment based on "climate science literacy" items drawn from NASA, NOAA, and the IPCC) are also the most politically polarized on the question of whether human activity is the principal cause of climate change -- or indeed on whether climate change is happening at all (Kahan 2015a).
That evidence has led me to conclude that the conflict over climate change (not to mention numerous other disputed issues of science) isn't about what people know. It is about who they are: the "beliefs" people form on these issues are ones suited to helping them form affective orientations toward these issues that effectively signal their membership in & loyalty to groups embroiled in a nasty form of cultural status competition....
That problem isn't being caused by any deficiency in science education in this country.
On the contrary, that problem is preventing our democracy from getting the benefit of whatever scientific knowledge & reasoning capacity we have managed to impart to our citizens.
If we want enlightened democracy, we better figure out how to extricate science from these sorts of ugly, illiberal, reason-eviscerating forms of cultural conflict (Kahan 2015b).
Of course, these are provisional conclusions, informed by what I regard as the best available evidence.
But the best evidence available definitely isn't as good as it should be for exactly the reason that Teller describes so articulately: we don't possess as good a measure of public science comprehension as we ought to have.
The scale development exercise that generated OSI_2.0 is offered as an admittedly modest contribution to an objective of grand dimensions. How ordinary citizens come to know what is collectively known by science is simultaneously a mystery that excites deep scholarly curiosity and a practical problem that motivates urgent attention by those charged with assuring democratic societies make effective use of the collective knowledge at their disposal. An appropriately discerning and focused instrument for measuring individual differences in the cognitive capacities essential to recognizing what is known to science is essential to progress in these convergent inquiries.
The claim made on behalf of OSI_2.0 is not that it fully satisfies this need. It is presented instead to show the large degree of progress that can be made toward creating such an instrument, and the likely advances in insight that can be realized in the interim, if scholars studying risk perception and science communication make adapting and refining admittedly imperfect existing measures, rather than passively employing them as they are, a routine component of their ongoing explorations.
Not as articulate as Teller-- but the best I can do!
And hey-- if my best motivates others who can do a better job still, then I figure I'm doing my part.
Baron, J. Why Teach Thinking? An Essay. Applied Psychology 42, 191-214 (1993).
Dewey, J. Science as Subject-matter and as Method. Science 31, 121-127 (1910).
Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Study of Risk and Science Communication, with Notes on Evolution and Climate Change. J. Risk Res. (in press).
Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change 2, 732-735 (2012).