Key Insight

This semester I’m teaching a course entitled the Science of Science Communication. I have posted general information on the course and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the first such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general.

In Session 2 (i.e., our 2nd class meeting) we started the topic of “science literacy and public attitudes.” We (more or less) got through “science literacy”; “public attitudes” will be our focus in Session 3.

As I conceptualize it, this topic is in the nature of foundation laying. The aim of the course is to form an understanding of the dynamics of science communication distinctive of a variety of discrete domains. In every one of them, however, effective communication will presumably need to be informed by what people know about science, by how they come to know it, and by what value they attach to science’s distinctive way of knowing. So we start with those.

By way of synthesis of the readings and the “live course” (as opposed not to “dead” but to “online”) discussion of them, I will address these points: (1) measuring “ordinary science intelligence”—what & why; (2) “ordinary science intelligence” & civic competence; (3) “ordinary science intelligence” & evolution; and (4) “ordinary science intelligence” as an intrinsic good.

1. “Ordinary science intelligence” (OSI): what is being measured & why?

There are many strategies that could be, and are, used to measure what people know about science and whether their reasoning conforms to scientific modes of attaining knowledge. To my mind at least, “science literacy” seems to conjure up a picture of only one such strategy—more or less an inventory check against a stock of specified items of factual and conceptual information. To avoid permitting terminology to short-circuit reflection about what the best measurement strategy is, I am going to talk instead of ways of measuring ordinary science intelligence (“OSI”), which I will use to signify a nonexpert competence in, and facility with, scientific knowledge.

I anticipate that a thoughtful person (like you; why else would you have read even this much of a post on a topic like this?) will find this formulation question-begging. A “nonexpert competence in, and facility with, scientific knowledge”? What do you mean by that?

Exactly. The question-begging nature of it is another thing I like about OSI. The picture that “science literacy” conjures up not only tends to crowd out consideration of alternative strategies of measurement; it also risks stifling reflection on what it is that we want to measure and why. If we just start off assuming that we are supposed to be taking an inventory, then it seems natural to focus on being sure we start with a complete list of essential facts and methods. But if we do that without really having formed a clear understanding of what we are measuring and why, then we’ll have no confident basis for evaluating the quality of such a list—because in fact we’ll have no confident basis for believing that any list of essential items can validly measure what we are interested in.

If you are asking “what in the world do you mean by ordinary science intelligence?” then you are in fact putting first things first. Am I—are we—trying to figure out whether someone will engage scientific knowledge in a way that assures the decisions she makes about her personal welfare will be informed by the best available evidence? Or that she’ll be able competently to perform various professional tasks (designing computer software, practicing medicine or law, etc.)? Or maybe to perform civic ones—such as voting in democratic elections? If so, what sort of science intelligence does each of those things really require? What’s the evidence for believing that? And what sort of evidence can we use to be sure that the disposition being measured really is the one we think is necessary?

If those issues are not first resolved, then constructing and assessing measures of ordinary science intelligence will be aimless and unmotivated. They will also, in these circumstances, be vulnerable to entanglement in unspecified normative objectives that really ought to be made explicit, so that their merits and their relationship to science intelligence can be reflectively addressed.

2. Ordinary science intelligence and civic competence

Jon Miller has done the most outstanding work in this area, so we used his stated “what and why” to help shape our assessment of alternative measures of OSI. Miller’s interest is civic competence. The “number and importance of public policy issues involving science or technology,” he forecasts, “will increase, and increase markedly” in coming decades as society confronts the “biotechnology revolution,” the “transition from fossil-based energy systems to renewable energy sources,” and the “continuing deterioration of the Earth’s environment.” The “long-term health of democracy,” he maintains, thus depends on “the proportion of citizens who are sufficiently scientifically literate to participate in the resolution of” such issues.

We appraised two strategies for measuring OSI with regard to this objective. One was Miller’s “civic science literacy” measure. In the style of an inventory, Miller’s measure consists of two scales, the first consisting largely of key fact items (“Antibiotics kill viruses as well as bacteria [true-false]”; “Does the Earth go around the Sun, or the Sun go around the Earth?”), and the second aimed at recognition of signature scientific methods, such as controlled experimentation (he treats the two as separate dimensions, but they are strongly correlated: r = 0.86). Miller’s fact items form the core of the National Science Foundation’s “Science Indicators,” a measure of “science literacy” that is standard among scholars in this field. Based on rough-and-ready cutoffs, Miller estimates that only 12% of U.S. citizens qualify as fully “scientifically literate” and that 63% are “scientifically illiterate”; Europeans do even worse (5% and 73%, respectively).
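To make the inventory idea concrete, here is a minimal sketch of how such a scale might be scored. The two items echo those quoted above, but the scoring rule and the “literate”/“illiterate” cutoffs are invented for illustration; they are not Miller’s actual thresholds.

```python
# Hypothetical inventory-style OSI scorer. The items paraphrase those
# quoted in the post; the cutoffs are placeholders, not Miller's.

ANSWER_KEY = {
    "Antibiotics kill viruses as well as bacteria.": False,  # antibiotics kill bacteria only
    "The Earth goes around the Sun.": True,
}

def osi_score(responses):
    """Return the fraction of inventory items answered correctly."""
    correct = sum(responses.get(item) == key for item, key in ANSWER_KEY.items())
    return correct / len(ANSWER_KEY)

def classify(score, literate_cutoff=0.9, illiterate_cutoff=0.5):
    """Bucket a score using illustrative cutoffs."""
    if score >= literate_cutoff:
        return "scientifically literate"
    if score < illiterate_cutoff:
        return "scientifically illiterate"
    return "intermediate"
```

Whatever cutoffs one picks, the point in the text stands: a classifier like this is only as informative as the prior account of what the inventory is supposed to measure.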

The second strategy for measuring OSI evaluates what might be called “scientific habits of mind.” The reason to call it that is that it draws inspiration from John Dewey, who famously opposed a style of science education that consists in the “accumulation of ready-made material,” in the form of canonical facts and standard “physical manipulations.” In its place, he proposed a conception of science education that imparts “a mode of intelligent practice, an habitual disposition of mind” that conforms to science’s distinctive understanding of the “ways by which anything is entitled to be called knowledge.”

There is no standard test (as far as I know!) for measuring this disposition. But there are various “reflective reasoning” measures–“Cognitive Reflection Test” (Frederick), “Numeracy” (Lipkus & Peters), “Actively Open Minded Thinking” (Baron, & Stanovich & West), “Lawson’s Classroom Test of Scientific Reasoning”– that are understood to assess how readily people credit, and how reliably they make active use of, the styles of empirical observation, measurement, and inference (deductive and inductive) that are viewed as scientifically valid.

The measures used for “science literacy” and “scientific habits of mind” strike me as obviously useful for many things. But it’s not obvious to me that either of them is especially suited for assessing civic competence.

Miller’s superb work is focused on internally validating the “civic scientific literacy” measures, not externally validating them. Neither he nor others (as far as I know; anyone who knows otherwise, please speak up!) has collected any data to determine whether his cutoffs for classifying people as “literate” or “illiterate” predict how well or poorly they’ll function in any tasks that relate to democratic citizenship, much less that they do so better than more familiar benchmarks of educational attainment (high-school diplomas and college degrees, standardized test scores, etc.). Here’s a nice project for someone to carry out, then.

The various “reflective reasoning” measures that one might view as candidates for Dewey’s “habits of mind” conception of OSI have all been thoroughly vetted, but only as predictors of educational aptitude and reasoning quality generally. They too have not been studied in any systematic way as markers of civic aptitude.

Indeed, there is at least one study that suggests that neither Miller’s “civic science literacy” measures nor the ones associated with the “scientific habits of mind” conception of OSI predict quality of civic engagement with what is arguably the most important science-informed policy issue now confronting our democracy: climate change. Performed by CCP, the study in question examined science comprehension and climate-change risk perceptions. It found that public conflict over the risks posed by climate change does not abate as science literacy, measured with the “NSF science indicator” items at the core of Miller’s “civic science literacy” index, and reflective reasoning skill, as measured with numeracy, increase. On the contrary, such controversy intensifies: cultural polarization among those with the highest OSI measured in this way is significantly greater than polarization among those with the lowest OSI.
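Statistically, the pattern described here, polarization widening rather than narrowing as science comprehension rises, amounts to a positive interaction between cultural outlook and OSI in a model of risk perception. The sketch below simulates data with that structure deliberately built in (the numbers are synthetic, not CCP’s) and recovers the interaction with ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

osi = rng.uniform(0.0, 1.0, n)          # science-comprehension score
worldview = rng.choice([-1.0, 1.0], n)  # stand-in for cultural-group membership

# Synthetic risk perceptions in which the gap between the two cultural
# groups grows with OSI (true interaction coefficient = 0.5).
risk = 0.5 * worldview * osi + rng.normal(0.0, 0.1, n)

# Regress risk on an intercept, worldview, osi, and their product.
X = np.column_stack([np.ones(n), worldview, osi, worldview * osi])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)

# beta[3] is the interaction term: a positive estimate means cultural
# polarization over risk *increases* as OSI increases.
```

The design choice to model polarization as an interaction, rather than comparing group means at a single cutoff, is what lets one say the divergence grows with OSI rather than merely existing at high OSI.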

We also discussed one more conception of OSI: call it the “science recognition faculty.” If they want to live good lives—or even just live—people, including scientists, must accept as known by science many more things than they can possibly comprehend in a meaningful way. It follows that their well-being depends on their capacity to recognize what is known to science independently of being able to verify that, or understand how, science knows what it does. “Science recognition faculty” refers to that capacity.