The paper isn’t exactly hot off the press, but someone recently lowered my entropy by sending me a copy of Stocklmayer, S. M., & Bryant, C. Science and the Public—What should people know?, International Journal of Science Education, Part B, 2(1), 81-101 (2012).
The piece critiques the NSF’s Science Indicators “factual knowledge” questions.
As is well known to the 9.8 billion readers of this blog (we’re down another couple billion this month; the usual summer-holiday lull, I’m sure), the Indicators battery is pretty much the standard measure for public “science literacy.”
The NSF items figure prominently in the scholarly risk perception/science communication literature.
With modest additions and variations, they also furnish a benchmark for various governmental and other official and semi-official assessments of “science literacy” across nations and within particular ones over time.
I myself don’t think the Indicators battery is invalid or worthless or anything like that.
But like pretty much everyone I know who uses empirical methods to study public science comprehension, I do find the scale unsatisfying.
What exactly a public science comprehension scale should measure is itself a difficult and interesting question. But whatever answer one chooses, there is little reason to think the Indicators’ battery could be getting at that.
The Indicators battery seems to reduce “science literacy” to a sort of catechistic assimilation of propositions and principles: “The earth goes around the sun, not the other way ’round” [check]; “electrons are smaller than atoms” [check]; “antibiotics don’t kill viruses—they kill bacteria!” [check!].
We might expect that an individual equipped to reliably engage scientific knowledge—in making personal life decisions, in carrying out responsibilities inside a business or as part of a profession, in participating in democratic deliberations, or in enjoying contemplation of the astonishing discoveries human beings have made about the workings of nature—will have become familiar with all or most of these propositions.
But simply being familiar with all of them doesn’t in itself furnish assurance that she’ll be able to do any of these things.
What does is a capacity—one consisting of the combination of knowledge, analytical skills, and intellectual dispositions necessary to acquire, recognize, and use pertinent scientific or empirical information in specified contexts. It’s hardly obvious that a high score on the NSF’s “science literacy” test (the mean number of correct responses in a general population sample is about 6 of 9) reliably measures any such capacity—and indeed no one to my knowledge has ever compiled evidence suggesting that it does.
This—with a lot more texture, nuance, and reflection blended in—is the basic thrust of the S&B paper.
The first part of S&B consists of a very detailed and engaging account of the pedigree and career of the Indicators’ factual-knowledge items (along with various closely related ones used to supplement them in large-scale recurring public data collections like the Eurobarometer).
What’s evident is how painfully innocent of psychometric and basic test theory this process has been.
The items, at least on S&B’s telling, seem to have been selected casually, more or less on the basis of the gut feelings and discussions of small groups of scientists and science authorities.
Aside from anodyne pronouncements on the importance of “public understanding of science” to “national prosperity,” “the quality of public and private decision-making,” and “enriching the life of the individual,” they made no real effort to articulate the ends served by public “science literacy.” As a result, they offered no cogent account of the sorts of knowledge, skills, dispositions, and the like that securing the same would entail.
Necessarily, too, they failed to identify the constructs—conceptual representations of particular skills and dispositions—that an appropriately designed public science comprehension scale should measure.
Early developers of the scale reported Cronbach’s alpha and similar descriptive statistics, and even performed factor analyses that lent support to the inference that the NSF “science literacy” scale was indeed measuring something.
But without any theoretical referent for what the scale was supposed to measure and why, there was necessarily no assurance that what was being measured was connected to even the thinly specified objectives its proponents had in mind.
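For readers unfamiliar with the statistic, Cronbach’s alpha is just a function of how much the items in a battery covary: it tells you the items hang together, not what they jointly measure. A minimal sketch, using simulated 0/1 responses rather than any actual NSF data, shows why a high alpha is so easy to obtain from any set of items that share a common influence:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 200 respondents answer 9 true/false items, each
# response driven by a single latent trait plus noise. The trait could be
# "science comprehension" -- or rote recall, test-taking skill, education...
rng = np.random.default_rng(0)
trait = rng.normal(size=200)
responses = (trait[:, None] + rng.normal(size=(200, 9)) > 0).astype(float)
print(f"alpha = {cronbach_alpha(responses):.2f}")  # comfortably "acceptable"
```

Any single common source of variance, whatever it is, will push alpha up. That is the point of the critique: internal-consistency statistics show the scale measures *something*, but only a theory of the construct can say whether that something is the capacity one cares about.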