From something I'm working on. Any one of the 14 billion regular readers of this blog could fill in the rest. But if you are one of the 1.3 billion people who on any given day visit this site for the first time, there's more on the "'Two climate changes' thesis" here & here, among other places. . . .
America’s two “climate changes”
There are two climate changes in America: the one people “believe” or “disbelieve” in in order to express their cultural identities; and the one people ("believers" & "disbelievers" alike) acquire and use scientific knowledge about in order to make decisions of consequence, individual and collective. I will present various forms of empirical evidence—including standardized science literacy tests, lab experiments, and real-world field studies in Southeast Florida—to support the “two climate changes” thesis. I will also examine what this position implies about the forms of deliberative engagement necessary to rid the science communication environment of the toxic effects of the first climate change and to make it habitable for enlightened democratic engagement with the second.
Do science curious evolution believers and science curious nonbelievers both like to go to the science museum? How about to gun shows?
I've described highlights from the first study (a more complete report on which can be downloaded here) in some earlier posts. They include the development of a behaviorally validated "science curiosity" scale (one that itself involves performance-based and behavioral measures, not just self-reported interest ones), and the successful use of that scale to predict "engagement"--measured behaviorally, and not just with self-reported interest--in the cool Tangled Bank Studios documentary on evolution, Your Inner Fish.
Stay tuned for more reports about our findings in this ongoing project.
But for now, consider these interesting findings about the power of "SCS_1.0," the science curiosity scale we constructed, to predict one or another type of behavior.
The graphic shows, not surprisingly, that those who are more science curious are way more likely to do things like read science books and attend science museums.
Probably not that surprisingly, they are also slightly more likely to do other things, like go to an amusement park or even a gun show, than science-uncurious people. But they really aren't much more likely to do those things than the average member of the population.
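The pattern described here--a steep curiosity gradient for some activities and a nearly flat one for others--can be sketched with a toy logistic model. The coefficients and activities below are purely illustrative assumptions, not the study's actual estimates:

```python
import math

def predicted_prob(curiosity_z, intercept, slope):
    """Predicted probability from a simple logistic model:
    logit(p) = intercept + slope * curiosity (z-score).
    Coefficients are hypothetical, for illustration only."""
    logit = intercept + slope * curiosity_z
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical coefficients: a strongly curiosity-linked activity
# (visiting a science museum) vs. a weakly linked one (gun show).
activities = {
    "science museum": (-0.5, 1.5),   # steep curiosity gradient
    "gun show":       (-1.5, 0.2),   # nearly flat gradient
}

for name, (b0, b1) in activities.items():
    low = predicted_prob(-1.0, b0, b1)    # 1 SD below mean curiosity
    high = predicted_prob(+1.0, b0, b1)   # 1 SD above mean curiosity
    print(f"{name}: p(low curiosity)={low:.2f}, p(high curiosity)={high:.2f}")
```

On these made-up numbers, moving from low to high curiosity shifts the museum probability dramatically but barely budges the gun-show probability--the qualitative shape the graphic displays.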
In addition to estimating the predicted mean probabilities for these activities conditional on science curiosity for the entire sample (a large nationally representative one), I've also estimated the predicted mean probabilities for individuals who say they "do" and "don't believe in" human evolution:
One of the coolest things we found in ESFI Study No. 1 was that science curious individuals who "disbelieve in" evolution were just as engaged as science curious individuals who do believe in evolution. In addition, they were both substantially more engaged than their science-noncurious counterparts, most of whom yawned and turned the show off after a couple of minutes, no doubt hoping that the survey would resume its focus on Honey Boo Boo, "Inflate-gate," and other non-science related topics used to winnow out those less interested in science than in other interesting things.
Individuals who "disbelieve" in evolution but who were high in science curiosity also indicated that they found the information in the documentary clip valid and convincing as an account of the origins of human color vision.
Of course, that didn't "change their minds" on evolution. Their beliefs on that issue measure who they are—not what they know about science or what more they’d like to know about what human beings have discovered using science's signature methods of disciplined observation and inference. The experience of watching the cool Your Inner Fish clip satisfied their appetite to know what science knows, but it didn't make them into different people!
Indeed, I think it likely succeeded in the former precisely because it didn't evince any interest in accomplishing the latter. It didn't put science curious people who have an identity associated with disbelief in evolution in the position of having to choose between being who they are and knowing what science knows.
Satisfying this criterion, which I've taken to calling the "disentanglement principle," is, I believe, a key element of successful science communication in pluralistic liberal society (Kahan 2015a, 2015b).
Anyway, check out what evolution believers & disbelievers do in their free time conditional on having the same level of science curiosity.
Many of the same things -- but not all!
I have ideas about what this means. But I'm out of time for today! So how about you tell me what you make of this?
Plata's Republic: Justice Scalia and the subversive normality of politically motivated reasoning . . . .
. . . Plata's Republic . . .
Civis: It is “fanciful,” you say, to think that three district court judges “relied solely on the credibility of the testifying expert witnesses” in finding that release of the prisoners would not harm the community?
Cognoscere Illiber: Yes, because “of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.”
Civis: “Of course” judges with “different policy views” would have formed different beliefs about the consequences if they had evaluated the same expert evidence? Why? Surely the judges, like all nonspecialists, would agree that these are matters outside their personal experience. Are you saying the judges would ignore the experts and decide on partisan grounds?
Cognoscere Illiber: No. “I am not saying that the District Judges rendered their factual findings in bad faith. I am saying that it is impossible for judges to make ‘factual findings’ without inserting their own policy judgments” on such matters. The “expert witnesses” here were of the sort trained to make “broad empirical predictions”—like whether “deficit spending will . . . lower the unemployment rate” or “the continued occupation of Iraq will decrease the risk of terrorism.”
Civis: But people normally assert that their policy positions on criminal justice, economic policy, and national security are based on empirical evidence. It almost sounds as if you are saying things are really the other way around—that what they understand the empirical evidence to show is “necessarily based in large part upon policy views.”
Cognoscere Illiber: Exactly what I am saying! Those sorts of “factual findings are policy judgments.” Thus, empirical evidence relating to the consequences of law should be directed to “legislators and executive officials”—not “the Third Branch”—since in a democracy it is the people’s “policy preferences,” not ours, that should be “dress[ed] [up] as factual findings.”
Civis: Ah. Thanks for telling me—I had been naively taking all the empirical arguments in politics at face value. Silly me! Now I see, too, that those naughty judges were just trying to exploit my gullibility about policy empiricism. Shame on them!
 Plata, 131 S. Ct. at 1954 (Scalia, J., dissenting).
 Id. at 1954-55.
 Id. at 1954.
 Id. at 1955.
* * *
Brown v. Plata was among the most consequential decisions of the 2010 Term—in multiple senses. In Plata, California attacked an order, issued by a three-judge federal district court, directing the state to release more than 40,000 inmates from its prisons. It was not disputed that California prisons had for over a decade been made to store double their intended capacity of 80,000 inmates. The stifling density of the population inside—“200 prisoners . . . liv[ing] in a gymnasium,” sleeping in shifts and “monitored by two or three guards”; “54 prisoners . . . shar[ing] a single toilet”; “50 sick inmates . . . held together in a 12- by 20-foot” cell; “suicidal inmates . . . held for prolonged periods in telephone-booth sized cages” ankle deep in their own wastes—was amply documented (with photographs, appended to the Court’s opinion, among other things). The awful effect on the prisoners’ mental and physical health was indisputable, too (“it is an uncontested fact that, on average, an inmate in one of California’s prisons needlessly dies every six to seven days”). These conditions, the district court concluded, violated the Eighth Amendment. The district court also saw that there was no prospect whatsoever that the state, having repeatedly rejected prison-expansion proposals and now in a budget crisis, would undertake the massive expenditures necessary to increase prison capacity and staffing. Accordingly, it ordered the only relief that seemed to it possible: the release of the number of inmates that the court deemed sufficient to bring the prisons into compliance with minimally acceptable constitutional standards.
The Supreme Court, in a five to four decision, affirmed. The major issue of contention between the majority and dissenting Justices was what consequence the ordered prisoner release would have on the public safety, a consideration to which the district court was obliged to give “substantial weight” by the Prison Litigation Reform Act of 1995. The district court devoted 10 days of the 14-day trial to receiving evidence on this issue, and concluded that use of careful screening protocols would permit the state to release the necessary number of inmates “in a manner that preserves public safety and the operation of the criminal justice system.”
The determinations underlying this finding, Justice Kennedy noted in his majority opinion, “are difficult and sensitive, but they are factual questions and should be treated as such.” The district court had “rel[ied] on relevant and informed expert testimony” by criminologists and prison officials, who based their opinion on “empirical evidence and extensive experience in the field of prison administration.” Indeed, some of that evidence, Justice Kennedy observed, had “indicated that reducing overcrowding in California’s prisons could even improve public safety” by abating prison conditions associated with recidivism. Like its other findings of fact, the district court’s determination that the State could fashion a reasonably safe release plan was not “clearly erroneous.”
The idea that the district court’s public-safety determination was a finding of “fact” entitled to deferential review caused Justice Scalia to suffer an uncharacteristic loss of composure. Deference is due factfinders because they make “determination[s] of past or present facts” based on evidence such as live eyewitness testimony, the quality of which they are “in a better position to evaluate” than are appellate judges confined to a “cold record,” he explained. The public-safety finding of the three-judge district court, in contrast, consisted of “broad empirical predictions necessarily based in large part upon policy views.” “The idea that the three District Judges in this case relied solely on the credibility of the testifying expert witnesses is fanciful,” Scalia thundered.
Justice Scalia’s reaction to the majority’s reasoning in Plata is reminiscent of Wechsler’s to the Court’s in Brown. Like Scalia, Wechsler had questioned whether the finding in question—that segregated schools “retard the educational and mental development” of African American children—could bear the decisional weight that the Court was putting on it. But whereas Wechsler had only implied that the Court was hiding its moral-judgment light under an empirical basket—“I find it hard to think the judgment really turned upon the facts [of the case]”—Scalia was unwilling to bury his policymaking accusation in a rhetorical question. “Of course they [the members of the three-judge district court] were relying largely on their own beliefs about penology and recidivism” when they found that release was consistent with—indeed, might even enhance—public safety, Scalia intoned. “And of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.” “[I]t is impossible for judges to make ‘factual findings’ without inserting their own policy judgments, when the factual findings are policy judgments.”
Justice Scalia’s dissent is also akin to the reaction to “empirical factfinding” in the Supreme Court’s abortion jurisprudence. Justice Blackmun’s majority opinion in Roe v. Wade cited “medical data” supplied by “various amici” to demonstrate that “[m]odern medical techniques” had dissolved the state’s historic interest in protecting women’s health. “[T]he now-established medical fact . . . that until the end of the first trimester mortality in abortion may be less than mortality in normal childbirth” supported recognition of an unqualified right to abortion in that period. Ely, among others, challenged the Court’s empirics: “This [the medical safety of abortions relative to childbirth] is not in fact agreed to by all doctors—the data are of course severely limited—and the Court's view of the matter is plainly not the only one that is ‘rational’ under the usual standards.” In any case, “it has become commonplace for a drug or food additive to be universally regarded as harmless on one day and to be condemned as perilous on the next”—so how could “present consensus” among medical experts plausibly ground a durable constitutional right?
It can’t. “[T]ime has overtaken some of Roe’s factual assumptions,” the Court noted in Planned Parenthood of Southeastern Pennsylvania v. Casey. “[A]dvances in maternal health care allow for abortions safe to the mother later in pregnancy than was true in 1973, and advances in neonatal care have advanced viability to a point somewhat earlier.” Accordingly, culturally fueled enactments of and challenges to abortion laws continue—repeatedly confronting the Justices with new empirical questions to which their answers are denounced as motivated by “personal values.”
* * *
The only citizens who are likely to see the Court’s decision as more authoritative and legitimate when it resorts to empirical fact-finding in culturally charged cases are the ones whose cultural values are affirmed by the outcome. * * *
This factionalized environment incubates collective cynicism—about both the political neutrality of courts and about the motivations behind empirical arguments in policy discourse generally. Indeed, Justice Scalia’s extraordinary dissent in Plata synthesizes these two forms of skepticism.
It was “fanciful,” Scalia asserted, to think that the three district court judges “relied solely on the credibility of the testifying expert witnesses.” One might, at first glance, see him as merely rehearsing his standard diatribe against “judicial activism.” But this is actually a conclusion that Scalia deduces from premises—ones that don’t enter into his standard harangue—about the nature of empirical evidence and policymaking. The experts’ testimony, he explains, dealt with “broad empirical predictions”—ones akin to whether “deficit spending will . . . lower the unemployment rate,” or whether “the continued occupation of Iraq will decrease the risk of terrorism.” For Scalia, the beliefs one forms on the basis of that sort of evidence are “inevitably . . . based in large part upon policy views.” It follows that “of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.” “I am not saying,” Justice Scalia stresses, “that the District Judges rendered their factual findings in bad faith.” “I am saying that it is impossible for judges to make ‘factual findings’ without inserting their own policy judgments, when the factual findings are policy judgments.”
In effect, Scalia is telling us to wise up, not to be snookered by the Court. Sure, people claim that their “policy positions” on matters such as crime control, fiscal policy, and national security are based on empirical evidence. But we all know that things are in fact the other way around: what one makes of empirical evidence is “inevitably” and “necessarily based . . . upon policy views.” At one point, Scalia describes the district court judges as having “dress[ed]-up” their “policy judgments” as “factual findings.” But those judges weren’t, in his mind, doing anything different from what anyone “inevitably” does when making “broad empirical predictions”: those sorts of “factual findings are policy judgments.” Empirical evidence on the consequences of public policy should be directed to “legislators and executive officials” rather than “the Third Branch,” Scalia insists. The reason, though, isn’t that the former are better situated to draw reliable inferences from the best available data. On the contrary, it is that it is a conceit to think that reliable inferences can possibly be drawn from empirical evidence on policy consequences—and so “of course” it is the “policy preferences” of the majority, rather than those of unelected judges, that should control.
It is hard to say what is more extraordinary: the substance of Scalia’s position or the knowing tone with which he invites us to credit it. One might think it would be shocking to see a Justice of the Supreme Court so brazenly deny the intention (capacity even) of democratically accountable officials to make rational use of science to promote the common good. But Scalia could not expect his logic to persuade unless he anticipated that readers would readily concur (“of course”) that empirical arguments in policy debate are a kind of charade.
Scalia, of course, had good reason to expect such assent. His argument reflects the perspective of someone inside the cognitively illiberal state—who senses that motivated reasoning is shaping everyone else’s perceptions, and who accepts that it must also be shaping his, even if at any given moment he is unaware of its influence. We have all experienced this frame of mind. The critical question, though, is whether we really believe that what we are experiencing when we feel this way is inevitable and normal—a style of collective engagement with empirical evidence that should in fact be treated as normative, as Scalia asserts, for the performance of our institutions. I don’t think that we do . . . .
Will people who are culturally predisposed to reject human-caused climate change *believe* "97% consensus" social marketing campaign messages? Nope.
I’ve done a couple of posts recently on the latest CCP/APPC study on climate-science literacy.
The goal of the study was to contribute to development of a successor to “OCSI_1.0,” the “Ordinary Climate Science Intelligence” assessment (Kahan 2015). Like OCSI_1.0, OCSI_2.0 is intended to disentangle what ordinary members of the public “know” about climate science from their identity-expressive cultural predispositions, which is what items relating to “belief” in human-caused climate change measure.
In previous posts, I shared data, first, on the relationship between perceptions of scientific consensus, partisanship, and science comprehension; and, second, on the specific beliefs that members of the public, regardless of partisanship, hold about what climate scientists have established.
As pointed out in the last post, people with opposing cultural outlooks overwhelmingly agree about what “climate scientists think” on numerous specific propositions relating to the causes and consequences of human-caused climate change.
E.g., ordinary Americans—“liberal” and “conservative”—overwhelmingly agree that “climate scientists” have concluded that “human-caused global warming will result in flooding of many coastal regions.” True enough.
But they also agree, overwhelmingly, that climate scientists have concluded that “the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will increase the risk of skin cancer in human beings” and stifle “photosynthesis by plants.” Um, no.
These responses suggest that ordinary members of the public (again, regardless of their political orientation and regardless of whether they “believe” in climate change) get the basic gist of the weight of the evidence on human-caused global warming—viz., that our situation is dire—but have a pretty weak grasp of the details.
These items are patterned on science-literacy ones used to unconfound knowledge of evolutionary science from the identity-expressive answers people give to survey items on “belief” in human evolution. By attributing propositions to “climate scientists,” these questions don’t connote the sort of personal assent or agreement implied by “climate change belief items.”
Such questions thus avoid forcing respondents to choose between revealing what they “know” and expressing “who they are” as members of cultural groups whose identity is associated with pro- or con- attitudes toward assertions that human-caused climate change is putting society at risk.
The question “is there scientific consensus on climate change,” in contrast, doesn’t avoid forcing respondents to choose between revealing what they know and expressing who they are.
Accordingly, being perceived to hold beliefs at odds with the best available scientific evidence marks one out as an idiot. A familiar idiom in the discourse of contempt, the accusation that one’s cultural group (defined in terms of political outlooks, religiosity, etc.) is “anti-science” is a profound insult.
Thus, for someone who holds a cultural identity expressed by climate skepticism, a survey item equivalent to “true or false—there’s expert scientific consensus that human beings are causing global warming” is tantamount to the statement “well, you and everyone you respect are genuine morons—isn’t that so?”
People with that identity predictably answer no, there isn’t scientific consensus on global warming—because that question, unlike more particular ones relating to what “climate scientists believe,” measures who they are, not what they know (or think they know) about science’s understanding of the impact of human activity on climate change.
Messaging "scientific consensus" actually reinforces the partisan branding of positions on climate change, and thus frustrates efforts to promote public engagement with the best available evidence on how climate change is threatening their well-being.
Or that’s how I understood the best available evidence before conducting this study.
But maybe I’m wrong. If I am, I’d want to know that; and I’d want others to know it, too, particularly insofar as I’ve made my findings in the past known and have reason to think that people making practical decisions—important ones—might well be relying on them.
So in addition to collecting data on what people “believe” about human-caused global warming and on what they perceive climate scientists to believe, we showed study subjects (members of a large, nationally representative sample) an example of the kind of materials featured in “97% consensus” social-marketing campaigns.
Specifically, we showed them this graphic, which was prepared for the AAAS by researchers who advised them that disseminating it would help to “increase acceptance of human caused climate change.”
We then simply asked those who had been shown the AAAS message “do you believe the statement '97% of climate scientists have concluded that human activity is causing global climate change' ”?
Overall, only 55% of the subjects said “yes.”
That would be a great showing for a candidate in the New Hampshire presidential primary. But my guess is that AAAS, the nation’s premier membership association for scientists, would not be very happy to learn that 45% of those who were told what the organization has to say about the weight of scientific opinion on one of the most consequential science issues of our day indicated that they thought AAAS wasn't giving them the straight facts.
What’s more, we know that the percentage of people who already believe in human-caused climate change is about 55%, and that the issue is one characterized by extreme political polarization.
So it's pretty obvious that if one is genuinely trying to gauge the potential effectiveness of this “messaging strategy,” one should assess what impact it will have on people whose political outlooks predispose them not to believe in human-caused climate change.
Here’s the answer:
Basically, the more conservative a person is, the less likely that individual is to believe the AAAS's magical "science communication" pie chart.
Unsurprisingly, this resistance to accepting the AAAS “message” is most intense among white male conservatives, the group in which denial of climate change is strongest (McCright & Dunlap 2012).
Or really just to make things simple, the only people inclined to believe the science communication being "socially marketed" in this way are those who are already inclined to believe (and almost certainly already do believe) in human-caused climate change.
Could this really be a surprise? By now, nearly a decade after the first $300 million "consensus" marketing campaign, those who reject climate change are surely very experienced at discounting the credibility of those who are "marketing" this "message."
Now, remember, these are the same respondents who, regardless of their political outlooks, overwhelmingly agree with propositions attributing to “climate scientists” all manner of dire prediction, true or false, about the impact of human-caused climate change.
There's a straightforward explanation for these opposing reactions.
People understand agreeing with fine-grained, particular test items to convey their familiarity with what climate scientists are saying.
They understand accepting “97% consensus messaging” as assenting to the charge that they and others who share their cultural identity are cretins, morons—socially incompetent actors worthy of ridicule.
Far from promoting acceptance of scientific consensus by persons with this identity, the contempt exuded by this form of "messaging" reinforces the resonances that make climate skepticism such a potent symbol of commitment to their group.
It’s patently ridiculous to think that “97% messaging” will change the minds of rather than antagonize these individuals, who make up the bulk of the climate-skeptical population.
Indeed, the probability that a conservative Republican who rejects human-caused climate change will believe the AAAS message is lower than the probability that he or she will already believe that there’s scientific consensus on climate change.
This “message” was one designed by social marketers who produced research that they characterize as showing that 97% consensus messaging “increased belief in climate change” in a U.S. general population sample.
Except that’s not what the researchers’ studies found. The "97% message" increased study subjects' estimates of the precise numerical percentage of climate scientists who subscribe to the consensus position. But the researchers did not find an increase in the proportion of study subjects who said they themselves "believe" human activity is causing climate change.
Empirical research is indeed essential to promoting constructive public engagement with scientific consensus on climate change.
But studies can do that only if researchers report all of their findings, and describe their results in a straightforward and non-misleading way.
When, in contrast, science communication researchers treat their own studies as a form of “messaging,” they only mislead and confuse people who need their help.
McCright, A.M. & Dunlap, R.E. Bringing ideology in: the conservative white male effect on worry about environmental problems in the USA. J Risk Res, 1-16 (2012).
C'mon down! Let's talk about culture, rationality & the tragedy of the #scicomm commons today at Mizzou
If you can't make it, this will probably give you a decent idea of what I'm thinking of saying.
"They already got the memo" part 2: More data on the *public consensus* on what "climate scientists think" about human-caused global warming
Yesterday I shared some data on the extent to which ordinary members of the public are politically polarized both on human-caused global warming and on the nature of scientific consensus relating to the same.
I said I was surprised b/c there was less division over whether “expert climate scientists” agree that human behavior is causing the earth’s temperature to rise.
Because Americans-- particularly those who display the greatest proficiency in science comprehension-- are less likely to disagree on whether there's scientific consensus than on whether human beings are causing global warming, it's not very compelling to think confusion about the former proposition is the "cause" of the latter.
But there is still a huge amount of polarization on whether there is scientific consensus on human-caused climate change.
Answers to these two questions -- are humans causing climate change? do scientists believe that? -- are still most plausibly viewed as being caused by a single, unobserved or latent disposition: namely, a general pro- or con- affective orientation toward "climate change" that reflects the social meaning positions on this issue have within a person's identity-defining affinity groups.
Or in other words, the questions "is human-caused climate change happening" and "is there scientific consensus on human-caused climate change" both measure who a person is, politically speaking.
That's a different thing from what members of the public know about climate science. To measure that requires a valid climate-science comprehension instrument.
The study in which we collected these data was a follow up of an earlier CCP-APPC one that featured the “Ordinary Climate Science Intelligence” assessment, or OCSI_1.0.
The goal of OCSI_1.0 was to disentangle the measurement of “who people are”—the responses toward climate change that evince the affective stance toward climate change characteristic of their cultural group—from “what they know” about climate science.
The current study is part of the effort to develop OCSI_2.0, the aim of which is to discern differences across a larger portion of the range of knowledge levels within the general population.
Here is how 600 subjects (U.S. adults drawn from a nationally representative panel) responded to some of the OCSI_2.0 candidate items.
For me, these are the key points:
First, there’s barely any partisan disagreement over what climate scientists believe about the specific causes and consequences of human-caused climate change.
Sure, there’s some daylight between the response of the left-leaning and right-leaning respondents. But the differences are trivial compared to the ones in these same respondents’ beliefs about both the existence of climate change and the nature of scientific consensus.
There is “bipartisan” public consensus in perceptions of what climate scientists “know,” with minor differences only in the intensity with which respondents of opposing outlooks hold those particular impressions.
Second, ordinary members of the public, regardless of what they "believe" about human-caused climate change, know pitifully little about the basic causes and consequences of global warming.
Yes, a substantial majority of respondents, of diverse political views, know that climate scientists understand fossil-fuel CO2 emissions to be warming the planet, and that climate scientists expect rising temperatures to result in flooding in many regions.
But they also mistakenly believe that, “according to climate scientists, the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will increase the risk of leukemia” and “skin cancer in human beings,” and “reduce photosynthesis by plants.”
They think, incorrectly, that climate scientists have determined that “a warmer climate over the next few decades will increase water evaporation, which will lead to an overall decrease of global sea levels.”
“Republican” and “Democrat” alike also mistakenly attribute to “climate scientists” the proposition that “human-caused global warming has increased the number of tornadoes in recent decades,” a claim that Bill Nye “the science guy” believes but that actual climate scientists don’t, and in fact regularly criticize advocates for leaping to assert every time a tornado kills dozens of people in one of the plains states.
Third, the overwhelming majority of ordinary citizens, regardless of their political persuasions, agree that climate scientists have concluded that global warming is putting human beings in grave danger.
The candidate OCSI_2.0 items (only a portion of which are featured here) form two scales.
Scored one way—by counting up the number of correct responses—OCSI_2.0 measures how much people genuinely know about the basic causes and consequences of human-caused global warming.
Scored the other way—by counting up the number of responses, correct or incorrect, that evince a perception of the risks that human-caused climate change poses—OCSI_2.0 measures how dreaded climate change is as a societal risk.
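To make the two scoring schemes concrete, here's a minimal sketch. The items, answer key, and "risk-evincing" responses below are invented stand-ins, not the actual study items:

```python
# Hypothetical illustration of the two scoring schemes; all item names,
# the answer key, and the "risk-evincing" key are made up for the example.

responses = {  # one subject's true/false answers
    "co2_warms_planet": True,
    "warming_causes_flooding": True,
    "co2_causes_skin_cancer": True,   # incorrect, but evinces perceived risk
    "warming_lowers_sea_level": False,
}

answer_key = {  # the correct answers
    "co2_warms_planet": True,
    "warming_causes_flooding": True,
    "co2_causes_skin_cancer": False,
    "warming_lowers_sea_level": False,
}

risk_evincing = {  # the response, right or wrong, that treats warming as a serious threat
    "co2_warms_planet": True,
    "warming_causes_flooding": True,
    "co2_causes_skin_cancer": True,
    "warming_lowers_sea_level": False,
}

knowledge = sum(responses[i] == answer_key[i] for i in responses)
dreadedness = sum(responses[i] == risk_evincing[i] for i in responses)
print(knowledge, dreadedness)  # 3 4
```

The same answer sheet thus yields a middling knowledge score but a maximal dreadedness score—the very pattern described below.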
No matter what they “believe” about human-caused climate change, very few people do well on the first, knowledge-based scale.
And no matter what they “believe” about human-caused climate change, the vast majority of them score extremely high on the second, dreadedness scale.
None of this should come as a surprise. This is exactly the state of affairs revealed by OSI_1.0.
Now in fact, one might think that it’s perfectly fine that ordinary citizens score higher on the “climate change dreadedness” scale than they do on the “climate change science comprehension” one. Ordinary citizens only need to know the essential gist of what climate scientists are telling them--that global warming poses serious risks that threaten things of value to them, including the health and prosperity of themselves and others; it’s those whom ordinary citizens charge with crafting effective solutions (ones consistent with the democratic aggregation of diverse citizens' values) who have to get all the details straight.
The problem though is that democratic political discourse over climate change (in most but not all places) doesn’t measure either what ordinary people know or what they feel about climate change.
It measures what the item on “belief in” climate change does: who they are, whose side they are on, in an ugly, pointless, cultural status competition being orchestrated by professional conflict entrepreneurs.
The “science communication problem” for climate change is how to steer the national discussion away from the myriad actors-- all of them--whose style of discourse creates these antagonistic social meanings.
“97% consensus” social marketing campaigns (studies with only partially and misleadingly reported results notwithstanding) aren’t telling ordinary Americans on either side of the “climate change debate” anything they haven't already heard & indeed accepted: that climate scientists believe human-caused global warming is putting them in a position of extreme peril.
All the "social marketing" of "scientific consensus" does is augment the toxic idioms of contempt that are poisoning our science communication environment.
The unmistakable social meaning of the material featuring this "message" (not to mention the cultural conflict bottom-feeders who make a living "debating" this issue on talk shows) is that "you and people who share your identity are morons." It's not "science communication"; it's a clownish bumper sticker that says, "fuck you."
It is precisely because of the assaultive, culturally partisan resonances that this "message" conveys that people respond to the question "is there scientific consensus on global warming?" by expressing who they are rather than what they know about climate change risks.
More on that “tomorrow.”
As their science comprehension increases, do members of the public (a) become more likely to recognize scientific consensus exists on human-caused climate change; (b) become more politically polarized on whether human-caused climate change is happening; or (c) both?!
The study is a follow up to an earlier CCP/APPC study, which investigated whether it is possible to disentangle what people know about climate science from who they are.
“Beliefs” about human-caused global warming are an expression of the latter, and are in fact wholly unconnected to the former. People who say they “don’t believe” in human-caused climate change are as likely (which is to say, extremely likely) to know that human-generated CO2 warms the earth’s atmosphere as are those who say they do “believe in” human-caused climate change.
They are also both as likely-- which is to say again, extremely likely--to harbor comically absurd misunderstandings of climate science: e.g., that human-generated CO2 emissions stifle photosynthesis in plants, and that human-caused global warming is expected to cause epidemics of skin cancer.
In other words, no matter what they say they “believe” about climate change, most Americans don’t really know anything about the rudiments of climate science. They just know -- pretty much every last one of them--that climate scientists believe we are screwed.
The small fraction of those who do know a lot—who can consistently identify what the best available evidence suggests about the causes and consequences of human-caused climate change—are also the most polarized in their professed “beliefs” about climate change.
The central goal of this study was to see what “belief in scientific consensus” measures—to see how it relates to both knowledge of climate science and cultural identity.
I’ll get to what we learned about that "tomorrow."
But today I want to show everybody something else that surprised the bejeebers out of me.
Usually when I & my collaborators do a study, we try to pit two plausible but mutually inconsistent hypotheses against each other. I might expect one to be more likely than the other, but I don’t expect anyone including myself to be really “surprised” by the study outcome, no matter what it is.
Many more things are plausible than are true, and in my view, extricating the latter from the sea of the former—lest we drown in “just so” stories—is the primary mission of empirical studies.
But still, now and then I get whapped in the face by something I really didn’t see coming!
This finding is like that.
But to set it up, here's a related finding that's interesting but not totally shocking.
It’s that the association between identity and perceptions of scientific consensus on climate change, while plenty strong, is not as strong as the association between identity and “beliefs” in human-caused climate change.
This means that “left-leaning” individuals—the ones predisposed to believe in human-caused climate change—are more likely to believe in human caused climate change than to believe there is scientific consensus, while the right-leaning ones—the ones who are predisposed to be skeptical—are more likely to believe that there is scientific consensus that humans are causing climate change than to actually “believe in” it themselves.
Interesting, but still not mind-blowing.
Here’s the truly shocking part:
First, as science comprehension goes up, people become more polarized on climate change.
Still not surprising; that’s old, old, old, old news.
But second, as science comprehension goes up, so does the perception that there is scientific consensus on climate change—no matter what people’s political outlooks are!
Accordingly, as relatively “right-leaning” individuals become progressively more proficient in making sense of scientific information (a facility reflected in their scores on the Ordinary Science Intelligence assessment, which puts a heavy emphasis on critical reasoning skills), they become simultaneously more likely to believe there is “scientific consensus” on human-caused climate change but less likely to “believe” in it themselves!
Whoa!!! What gives??
One thing that is clear from these data is that it’s ridiculous to claim that “unfamiliarity” with scientific consensus on climate change “causes” non-acceptance of human-caused global warming.
But that shouldn’t surprise anyone. The idea that public conflict over climate change persists because, even after years and years of “consensus messaging” (including a $300 million social-marketing campaign by Al Gore’s “Alliance for Climate Protection”), ordinary Americans still just “haven’t heard” yet that an overwhelming majority of climate scientists believe in AGW is patently absurd.
(Are you under the impression that there are studies showing that telling someone who doesn't believe in climate change that “97% of scientists accept AGW” will cause him or her to change positions? No study has ever found that, at least with a US general public sample. All that the studies in question show -- once the mystifying cloud of meaningless path models & 0-100 "certainty level" measures has been dispelled-- is that immediately after being told that “97% of climate scientists believe in human-caused climate change,” study subjects will compliantly spit back a higher estimate of the percentage of climate scientists who accept AGW. You wouldn't know it from reading the published papers, but the experiments actually didn’t find that the “message” changed the proportion of subjects who said they “believe in" human caused climate change....)
These new data, though, show that acceptance of “scientific consensus” in fact has a weaker relationship to beliefs in climate change in right-leaning members of the public than it does in left-leaning ones.
That I just didn’t see coming.
I can come up w/ various “explanations,” but really, I don’t know what to make of this!
Actually, in any good study the ratio of “weird new puzzles created” to “existing puzzles (provisionally) solved” is always about 5:1.
That’s great, because it would be really boring to run out of things to puzzle over.
And it should go without saying that learning the truth and conveying it (all of it) accurately are the only way to enable free, reasoning people to use science to improve their lives.
Is the controversy over climate change a "science communication problem?" Jon Miller's views & mine too (postcard from NAS session on "science literacy & public attitudes toward science")
Gave a presentation yesterday before the National Academy of Sciences Committee that is examining the relationship between "public science literacy" & "public attitudes toward science." It's really great that NAS is looking at these questions & they've assembled a real '27 Yankees quality lineup of experts to do it.
Really cool thing was that Jon Miller spoke before me & gave a masterful account of the rationale and historical development of the "civic science literacy" philosophy that has animated the NSF Indicators battery.
There was zero disagreement among the presenters-- me & Miller, plus Philip Kitcher, who advanced an inspiring Deweyan conception of science literacy -- that the public controversy over climate science is not grounded in a deficit in public science comprehension.
It's true that the public doesn't know very much (to put it mildly) about the rudiments of climate science. But that's true of those on both sides, and true too in the myriad areas of important, decision-relevant science in which there is no controversy and in which the vast majority of ordinary citizens nevertheless recognize and make effective use of the best available evidence.
I think I agree but would put matters differently.
Miller was arguing that the source of enduring conflict is not a result of the failure of scientists or anyone else to communicate the underlying information clearly but a result of the success of political actors in attaching identity-defining meanings to competing positions, thereby creating social & psychological dynamics that predictably motivate ordinary citizens to fit their beliefs to those that predominate within their political groups.
That's the right explanation, I'd say, but for me this state of affairs is still a science communication problem. Indeed, the entanglement of facts that admit of scientific inquiry & antagonistic social meanings --ones that turn positions on them into badges of group membership & identity-- is the "science communication problem" for liberal democratic societies. Those meanings, I've argued, are a form of "science communication environment pollution," the effective avoidance and remediation of which is one of the central objects of the "science of science communication."
I think the only thing at stake in this "disagreement" is how broadly to conceive of "science communication." Miller, understandably, was using the term to describe a discrete form of transaction in which a speaker imparts information about science to a message recipient; I have in mind the less familiar notion of "science communication" as the sum total of processes, many of which involve the tacit orienting influence of social norms, that serve to align individual decisionmaking with the best available evidence, the volume of which exceeds the capacity of ordinary individuals to even articulate much less deeply comprehend.
But that doesn't mean it exceeds their capacity to use that evidence, & in a rational way by effectively exercising appropriately calibrated faculties of recognition that help them to discern who knows what about what. It's that capacity that is disrupted, degraded, rendered unreliable, by the science-communication environment pollution of antagonistic social meanings.
I doubt Miller would disagree with this. But I wish we'd had even more time so that I could have put the matter to him this way to see what he'd say! Kitcher too, since in fact the relationship of public science comprehension to democracy is the focus of much of his writing.
Maybe I can entice one or the other or both into doing a guest blog, although in fact the 14 billion member audience for this blog might be slightly smaller than the ones they are used to addressing on a daily basis.
During my stay here at APPC, we'll be having weekly "science of science communication lab" meetings to discuss our ongoing research projects. I've decided to post a highlight or two from each meeting.
We just had the 2nd, which means I'm one behind. I'll post the "session 1 highlight" "tomorrow."
One of the major projects for the spring is "Study 2" in the CCP/APPC Evidence-based Science Filmmaking Initiative. For this session, we hosted two of our key science filmmaker collaborators, Katie Carpenter & Laura Helft, who helped us reflect on the design of the study.
One thing that came up during the session was the distribution of “science curiosity” in the general population.
The development of a reliable and valid measure of science curiosity—the “Science Curiosity Scale” (SCS_1.0)—was one of the principal objectives of Study 1. As discussed previously, SCS worked great, not only displaying very healthy psychometric properties but also predicting with an admirable degree of accuracy engagement with a clip from Your Inner Fish, ESFI collaborator Tangled Bank Studio’s award-winning film on evolution.
Indeed, one of the coolest findings was that individuals who were comparably high in science curiosity (as measured by SCS) were comparably engaged by the clip (as measured by view time, request for the full documentary, and other indicators) irrespective of whether they said they “believed in” evolution.
Evolution disbelievers who were high in science curiosity also reported finding the clip to be an accurate and convincing account of the origins of human color vision.
But it’s natural to wonder: how likely is someone who disbelieves in evolution to be high in science curiosity?
The report addresses the distribution of science curiosity among various population subgroups. The information is presented in a graphic that displays the mean SCS scores for opposing subgroups (men and women, whites and nonwhites, etc).
Scores on SCS (computed using Item Response Theory) are standardized. That is, the scale has a mean of 0, and units are measured in standard deviations.
The graphic, then, shows that in no case was any subgroup’s mean SCS score higher or lower than 1/4 of a standard deviation from the sample mean on the scale. The Report suggested that this was a reason to treat the differences as so small as to lack any practical importance.
Indeed, the graphic display was consciously selected to help communicate that. Had the Report merely characterized the scores of subgroups as “significantly different” from one another, it would have risked provoking the Pavlovian form of inferential illiteracy that consists in treating “statistically significant” as in itself supporting a meaningful inference about how the world works, a reaction that is very very hard to deter no matter how hard one tries!
By representing the scores of the opposing groups in relation to the scale's standard-deviation units on the y-axis, it was hoped that reflective readers would discern that the differences among the groups were indeed far too small to get worked up over—that all the groups, including the one whose members were above average in science comprehension (as measured by the Ordinary Science Intelligence assessment), had science curiosity scores that differed only trivially from the population mean (“less than 1/4 of a standard deviation--SEE???”).
But as came up at the session, this graphic is pretty lame.
Even most reflective people don’t have good intuitions about the practical import of differences expressed in fractions of a standard deviation. Aside from being able to see that there's not even a trace of difference between whites & nonwhites, readers can still see that there are differences in science curiosity levels & still wonder exactly what they mean in practical terms.
So what might work better?
Why—likelihood ratios, of course! Indeed, when Katy Barnhart from APPC spontaneously (and adamantly) insisted that this would be a superior way to graph this data, I was really jazzed!
I’ve written several posts in the last yr or so on how useful likelihood ratios are for characterizing the practical or inferential weight of data. In the previous posts, I stressed that LRs, unlike “p-values,” convey information on how much more consistent the observed data is with one rather than another competing study hypothesis.
Here LRs can aid practical comprehension by telling us the relative probabilities of observing members of opposing groups at any particular level of SCS.
In the graphics below, the distribution of science curiosity within opposing groups is represented by probability density distributions derived from the means and standard deviations of the groups’ SCS scores.
As discussed in previous posts, study hypotheses can be represented this way: because any study is subject to measurement error, a study hypothesis can be converted into a probability density distribution of "predicted study outcomes" in which the “mean” is the predicted result and the standard error the one associated with the measurement precision of the study instrument.
If one does this, one can determine the “weight of the evidence” that a study furnishes for one hypothesis relative to another by comparing how likely the observed study result was under each of the probability-density distributions of “predicted outcomes” associated with the competing hypotheses.
This value—which is simply the relative “heights” of the points on which the observed value falls on the opposing curves—is the logical equivalent of the Bayesian likelihood ratio, or the factor in proportion to which one should update one’s existing assessment of the probability of some hypothesis or proposition.
Here, we can do the same thing. We know the mean and standard deviations for the SCS scores of opposing groups. Accordingly, we can determine the relative likelihoods of members of opposing groups attaining any particular SCS score.
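As a sketch of that computation—with made-up group means (not the study's actual values) and SDs standardized to 1:

```python
# Likelihood ratio as the ratio of two normal-density "heights" at a given
# SCS score. The group means here are invented for illustration only.
from math import exp, pi, sqrt

def normal_pdf(x, mean, sd=1.0):
    """Height of the normal density with the given mean and sd at x."""
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

score = 1.28  # roughly the 90th percentile on a standardized scale

# Suppose (hypothetically) the "above average" OSI group has mean SCS +0.20
# and the "below average" group mean SCS -0.20
lr = normal_pdf(score, 0.20) / normal_pdf(score, -0.20)
print(round(lr, 2))  # ~1.7: the score is about 1.7x more likely in the first group
```

The likelihood ratio is just the ratio of the two curves' heights at the observed score; nothing more elaborate is needed.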
An SCS score that places a person at the 90th percentile is about 1.7x more likely if someone is “above average” in science comprehension (measured by the OSI assessment) than if someone is below average.
There is a 1.4x greater chance that a person will score at the 90th percentile if that person is male rather than female, and a 1.5x greater chance that the person will do so if he or she has political outlooks to the "left" of center rather than the "right" on a scale that aggregates responses to a 5-point liberal-conservative ideology item and a 7-point party-identification item.
There is a comparable relative probability (1.3x) that a person will score in the 90th percentile of SCS if he or she is below average rather than above average in religiosity (as measured by a composite scale that combines response to items on frequency of prayer, frequency of church attendance, and importance of religion in one’s life).
Accordingly, if we started with two large, equally sized groups of believers and nonbelievers and it just so turned out that there were 100 total from the two groups who had SCS scores in the 90th percentile for the general population, then we’d expect 66 to be evolution believers and 33 of them to be nonbelievers (1 would be a Pakistani Dr).
When I put things this way, it should be clear that knowing how much more likely any particular SCS score is for members of one group than members of another doesn’t tell us either how likely any group's members are to attain that score or how likely a person with a particular score is to belong to any group!
You can figure that out, though, with Bayes’s Theorem.
If I picked out a person at random from the general population, I'd put the odds at about 11:9 that he or she "believes in" evolution, since about 45% of the population answers "false" when responding to the survey item "Human beings, as we know them, evolved from another species of animal," the evolution-belief item we used.
If you told me the person was in the 90th percentile of SCS, I'd then revise upward my estimate by a factor of 2, putting the odds that he or she believes in evolution at 22:9, or about 70%.
Or if I picked someone out at random from the population, I’d expect the odds to be 9:1 against that person scoring in the 90th percentile or higher. If I learned the individual was above average in science comprehension, I’d adjust my estimate of the odds upwards to 9:1.7 (about 16%); similarly, if I learned the individual was below average in science comprehension, I’d adjust my estimate downwards to 15.3:1 (about 6%).
Actually, I’d do something slightly more complicated than this if I wanted to figure out whether the person was in the 90th percentile or above. In that case, I’d in fact start by calculating not the relative probability of members of the two groups scoring in the 90th percentile but the relative probability of them scoring in the top 10% on SCS, and use that as my likelihood ratio, or the factor by which I update my prior of 9:1. But you get the idea -- give it a try!
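The updating described in the last few paragraphs can be sketched in a few lines, using the odds form of Bayes's Theorem and the same numbers as the text:

```python
# Bayesian updating in odds form, with the prior odds and likelihood
# ratios stated in the text.
from fractions import Fraction

# Belief in evolution: prior odds 11:9, LR of ~2 for a 90th-percentile SCS score
posterior_odds = Fraction(11, 9) * 2                   # 22:9
posterior_prob = posterior_odds / (1 + posterior_odds)
print(float(posterior_prob))                           # ~0.71, about 70%

# Scoring at the 90th percentile of SCS: prior odds 1:9 in favor;
# LR 1.7 if above average in science comprehension, 1/1.7 if below
odds_above = Fraction(1, 9) * Fraction(17, 10)
print(round(float(odds_above / (1 + odds_above)), 3))  # 0.159, about 16%
odds_below = Fraction(1, 9) / Fraction(17, 10)
print(round(float(odds_below / (1 + odds_below)), 3))  # 0.061, about 6%
```

Note the general recipe: posterior odds = prior odds × likelihood ratio, then convert odds back to a probability with odds/(1 + odds).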
So, then, what to say?
I think this way of presenting the data does indeed give more guidance to a reflective person to gauge the relative frequency of science curious individuals across different groups than does simply reporting the mean SCS scores of the group members along with some measure of the precision of the estimated means—whether a “p-value” or a standard error or a 0.95 CI.
It also equips a reflective person to draw his or her own inferences as to the practical import of such information.
I myself still think the differences in the science curiosity of members of the indicated groups, including those who do and don’t believe in evolution, is not particularly large and definitely not practically meaningful.
But actually, after looking at the data, I do feel that there's a bigger disparity in science curiosity than there should be among citizens who do & don't "believe in" evolution. A bigger one than there should be among men & women too. Those differences, even though small, make me anxious that there's something in the environment--the science communication environment--that might well be stifling development of science curiosity across groups.
No one is obliged to experience the wonder and awe of what human beings have been able to learn through science!
But everyone in the Liberal Republic of Science deserves an equal chance to form and satisfy such a disposition in the free exercise of his or her reason.
Obliterating every obstacle that stands in the way of culturally diverse individuals achieving that good is the ultimate aim of the project of which ESFI is a part.
Some more data from the latest CCP/Annenberg Public Policy Center "science of science communication" study.
I was curious, among other things, about what the current state of political divisions might be on the risk of the HPV vaccine.
At one point—back in 2006-10, I’d say—the efficacy and safety of the vaccine was indeed culturally contested.
The public was polarized; and state legislatures across the nation ended up rejecting the addition of the vaccine to the schedule of mandatory vaccinations for school enrollment, the first (and only) time that has happened (on that scale) for a vaccine that the CDC had added to the schedule of recommended universal childhood immunizations.
I’ve discussed the background at length, including the decisive contribution that foreseeable, avoidable miscues in the advent of the vaccine made to this sad state of affairs.
I was wondering, though, if things had cooled off.
There is still low HPV uptake. But it’s unclear what the cause is.
Maybe the issue is still a polarizing one.
But even without continuing controversy one would expect rates to be lower insofar as the vaccine still isn’t mandatory outside of DC, Virginia and (recently) Rhode Island.
In addition, there’s reason to believe that pediatricians are gun-shy about recommending the vaccine b/c of their recollection of getting burned when the vaccine was introduced. Their reticence might have outlived the public ambivalence itself, and now be the source of lower-than-optimal coverage.
So I plotted perceptions of various risk, measured with the Industrial Strength Risk Perception measure, in relation to right-left political outlooks.
I put the biggies—global warming, and fracking (plus terrorism, since I mentioned that yesterday and the issue generated some discussion)--in for comparison.
Also, childhood vaccinations, which as, I've discussed in the past, do not generate a particularly meaningful degree of polarization.
Obviously HPV is much less polarizing than the “biggies.”
But the degree of division on HPV doesn’t strike me, at least, as trivial.
Political division on the risks posed by other childhood vaccines is less intense, and still trivial or pretty close to it, particularly insofar as risk is perceived as “low” pretty much across the spectrum. In truth, though, it strikes me as a tad bigger than what I’ve observed in the past (that’s worrisome. . . .).
But that’s all I have to say for now!
What do other think?
Here, btw, are the wordings for the ISRPM items:
TERROR. Terrorist attacks inside the United States
FRACKING. “Fracking” (extraction of natural gas by hydraulic fracturing)
VACRISK. Vaccination of children against childhood diseases (such as mumps, measles and rubella)
HPV. Vaccinating adolescents against HPV (the human papillomavirus)
GWARMING. Global warming
Weekend update: OMG-- we are now as politically polarized over cell phone radiation as over GM food risks!!!
Some "Industrial Strength Risk Perception Measure" readings from CCP/Annenberg Public Policy Center study administered this month:
Interesting but not particularly surprising that polarization over the risk associated with unlawful entry of immigrants rivals that on global warming, which has abated recently about as much as the pumping of CO2 into the atmosphere.
Interesting but not surprising to learn (re-learn, actually) that it's nonsense to say Americans are "more afraid of terrorism than climate change b/c the former is more dramatic, emotionally charged" etc. That trope, associated with the "take-heuristics-and-biases-add-water-and-stir" formula of "instant decision science," reflects a false premise: those predisposed to worry about climate change do in fact see the risk it poses as bigger than that posed by domestic terrorism.
And completely boring at this point to learn for the 10^7th time that there is no political division over GM food risk in the general public, despite the constant din in the media and even some academic commentary to this effect.
Consider this histogram:
The flatness of the distribution is the signature of the sheer noise associated with responses to GM food survey questions, the administration of which, as discussed seven billion times in this blog (once for every two regular blog subscribers!) is an instance of classic "non-opinion" polling.
Ordinary Americans--the ones who don't spend all day reading and debating politics (99% of them)-- just don't give GM food any thought. They don't know what GM technology is, that it has been a staple of US agricultural production for decades, and that it is in 80% of the foodstuffs they buy at the market.
They don't know that the House of Reps passed a bipartisan bill to preempt state-labelling laws, which special-interest groups keep unsuccessfully sponsoring in costly state referenda campaigns, and that the Senate will almost surely follow suit, presenting a bill that University of Chicago Democrat Barack Obama will happily sign w/o more than 1% of the U.S. population noticing (a lot of commentators don't even seem to realize how close this non-issue is to completely disappearing).
Why the professional conflict entrepreneurs have failed in their effort to generate in the U.S. the sort of public division over GM foods that has existed for a long time in Europe is really an interesting puzzle. It's much more interesting to try to figure out hypotheses for that & test them than to engage in a make-believe debate about why the public is "so worried" about them!
But neither that interesting question nor the boring, faux "public fear of GM foods" question was the focus of the CCP/APPC study.
Some other really cool things were.
We (my chief co-analyst & I) have arrived & resumed operations.
A short photojournal of our relocation process:
3. Taking a short break ....
In the last couple of posts (one on evolution believers' & nonbelievers' engagement with an evolution-science documentary, and another on measuring "science curiosity") I've summarized some of the findings from Study No. 1 of the Annenberg/CCP ESFI--"Evidence-based Science Filmmaking Initiative."
Those findings are described in more detail in a study Report, which also spells out the motivation for the study and its relation to ESFI overall.
Indeed, the Report is an unusual document--or at least an unusual sort of document to share.
It isn't styled as announcing to the world the "corroboration of" or "refutation" of some specified set of hypotheses. It is in fact an internal report prepared for consumption of the investigators in an ongoing research project, one that is in fact at a very preliminary stage!
Why release something like that? Well, in part because even at this point in the investigation, we do think there are things to report that will be of interest to other scholars and reflective people generally, many of whom can be counted on to supply us w/ feedback that will itself make what we do next even more useful.
But in addition, the project aims not only to generate evidence relevant to questions of interest to professional science filmmakers but also to model the process of using evidence-based methods to answer those very questions.
As explained in the ESFI "main page," the project is itself meant to supply evidence relevant to the hypothesis that the methods distinctive of the science of science communication can make a positive contribution to the craft of science filmmaking by furnishing those engaged in it with the information relevant to the exercise of their professional judgment.
Of course, those engaged in ESFI, including its professional science communication members, believe (with varying levels of confidence!), that in fact the science of science communication can make such a contribution; but of course, too, others, including other professional science filmmakers, are likely to disagree with this conjecture.
I wouldn't say "no point arguing about it" just b/c reasonable, and informed, people can disagree.
But I would say that these are exactly the conditions in which the argument will proceed in a more satisfactory way with additional information of the sort that can be generated by science's signature methods of disciplined observation, reliable measurement, and valid inference.
Hence ESFI: Let's do it -- and see what a collaboration between professional science filmmakers and allied communicators, on the one hand, and "scientists of science communication," on the other, produces. Then, on the basis of that evidence, those who are involved in science filmmaking can use their own reason to judge for themselves what that evidence signifies, and update accordingly their assessments of the utility of integrating the science of science communication into the craft of science filmmaking (not to mention related forms of science communication, like science journalism).
Precisely b/c the Report is an internal research document that takes stock of early findings in a multi-stage project, it furnishes a glimpse of the project in action. It thus gives those who might consider using such methods a chance to form a more concrete picture of what these practices look like, and a chance to use their own experience-informed imaginations to assess what they might do if they could add evidence-based methods to their professional tool kits.
But of course this is only the start-- only the first Report, both of results and of the experience of doing evidence-based filmmaking.
A. Overview and summary conclusions
This report summarizes the preliminary conclusions of Study No. 1 in the Annenberg/CCP “Evidence-based Science Filmmaking Initiative.” The goal of the initiative is to promote the integration of the emerging science of science communication into the craft of science filmmaking. Study No. 1 involved an exploratory investigation of viewer engagement with an excerpt from Your Inner Fish, a documentary on human evolution.
The study had two objectives.
One was to gather evidence relevant to an issue of debate among science filmmakers: what explains the perceived demographic homogeneity of the audience for high-quality documentaries featured on NOVA, Nature, and similar PBS shows? Is the answer the distribution of tastes for learning about scientific discovery in the general population, or instead some feature of those shows collateral to their science content that makes them uncongenial to individuals who subscribe to certain cultural styles?
The other study objective was to model how evidence-based methods could be used by science filmmakers. Hard questions—ones for which the number of plausible answers exceeds the number of correct ones—are endemic to the activity of producing science films. By testing competing conjectures on an issue of consequence to their craft, Study No. 1 illustrates how documentary producers might use empirical methods to enlarge the stock of information pertinent to the exercise of their professional judgment in answering such questions.
Principal conclusions of Study No. 1 include:
1. By combining appropriately subtle self-report items with behavioral and performance-based ones, it is possible to construct a valid scale for measuring individuals’ general motivation to consume information about scientific discovery for personal satisfaction. Desirable properties of the “Science Curiosity Scale” (SCS) include its high degree of measurement precision, its appropriate relationship with science comprehension and other pertinent covariates, and (most importantly) its power to predict meaningful differences in objective manifestations of science curiosity.
2. By similar means, one can construct a satisfactory scale for measuring viewer engagement with material such as that featured in the YIF clip. Such a scale was again formed by combining self-report and objective measures, including duration of viewing time and requested access to the remainder of the documentary. Designated the “Engagement Index” (EI), the scale had the expected relationships with education and general science comprehension. The strongest predictor of EI was the study subjects’ SCS scores.
3. Engagement with the clip did not vary to a meaningful degree among subjects who had comparable SCS scores but opposing “beliefs” about human evolution. Evolution “believers” and “nonbelievers” with high SCS scores formed comparably positive reactions to the YIF clip. The show didn’t “convert” the latter. But like “believers” with high SCS scores, high-scoring “nonbelievers” were very likely to accept the validity of the science featured in the clip. This finding is consistent with research suggesting that professions of “disbelief” in evolution are an indicator of cultural identity that poses no barrier to engagement with scientific information on evolution, so long as that information itself avoids mistaking extracting professions of “belief” for communicating knowledge.
4. Engagement with the show did vary across culturally identifiable groups. The members of one cultural group, one distinguished in part by its pro-technology attitudes, appeared to display less engagement with the clip than was predicted by their SCS scores. This finding furnishes at least some support for the conjecture that some fraction of the potential audience for science documentary programming is discouraged from viewing it by uncongenial cultural meanings collateral to the science content of such programming.
5. But additional, more fine-grained analysis of the data is necessary. In particular, the science-communication-professional members of the research team must formulate concrete, alternative hypotheses about the identity of culturally identifiable groups who might well be responding negatively to collateral cultural meanings in the clip. Those hypotheses can in turn be used by the science-of-science-communication team members to develop more fine-tuned cultural profiles that can be used to probe such conjectures.
6. Depending on the results of these additional analyses, next steps would include experimental testing that seeks to modify collateral meanings or cues in a manner that eliminates any disparity in engagement among individuals of diverse cultural identities who share a high level of curiosity about science.
Yesterday, I discussed how evolution "believers" and "nonbelievers" reacted to a cool evolution-science documentary. The data I described came from Study No. 1 of the Annenberg Public Policy Center/CCP "Evidence-based Science Filmmaking Initiative" (ESFI).
That data suggested that "belief" in evolution wasn't nearly as important to engagement with the documentary (Your Inner Fish, an award-winning film produced by ESFI collaborator Tangled Bank Studios) as was science curiosity.
Today I'll say a bit more about how we measured science curiosity.
Developing a valid and reliable science curiosity scale was one of the principal aims of Study No. 1. As conceptualized here, science curiosity is not a simple transient state (Loewenstein 1994) but instead a general disposition, variable in intensity across persons, that reflects the motivation to seek out and consume scientific information for personal pleasure.
Obviously, a measure of this disposition would furnish science journalists, science filmmakers, and related science-communication professionals with a useful tool for perfecting the appeal of their work to those individuals who value it the most. But it could also make myriad other contributions to the advancement of knowledge.
A valid science curiosity measure could be used to improve science education, for example, by facilitating investigation of the forms of pedagogy most likely to promote its development and harness it to promote learning (Blalock, Lichtenstein, Owen & Pruski 2008). Those who study the science of science communication (Fischhoff & Scheufele 2013; Kahan 2015) could also use a science curiosity measure to deepen their understanding of how public interest in science shapes the responsiveness of democratically accountable institutions to policy-relevant evidence.
Indeed, the benefits of measuring science curiosity are so numerous and so substantial that it would be natural to assume researchers must have created such a measure long ago. But the simple truth is that they have not.
“Science interest” measures abound. But every serious attempt to assess their performance has concluded that they are psychometrically weak and, more important, not genuinely predictive of what they are supposed to be assessing—namely, the disposition to seek out and consume scientific information for personal satisfaction (Blalock et al 2008; Osborne, Simon & Collins 2003).
ESFI’s “Science Curiosity Scale 1.0” (SCS_1.0) is an initial step toward filling this gap in the study of science communication. The items it comprises, and the process used to select (and combine) them, self-consciously address the defects in existing scales.
One of these is the excessive reliance on self-report measures. Existing scales relentlessly interrogate the respondents on the single topic of their own attraction to or aversion toward information on scientific discovery: “I am curious about the world in which we live,” “I find it boring to hear about new ideas,” “I get bored when watching science programs on TV,” etc. Items like these are well-known to elicit responses that exaggerate respondents’ possession of desirable traits or attributes.
To counteract this dynamic, SCS_1.0 disguises its objectives by presenting itself as a general “marketing” survey.
Individual self-report items relating specifically to science were thus embedded in discrete blocks or modules, each consisting of ten or more items relating to an array of “topics” that “some people are interested in, and some people are not.” Items were presented in random order, each on a separate screen.
There was thus no reason for subjects to suspect that their motivation to learn about science was of particular interest, nor any opportunity for them to adjust the responses across items in a manner that overstated their interest in it. A similar strategy was used to gather information on behavior reflecting such an interest, including visits to science museums, attendance at public science lectures, and the reading of books on scientific discovery.
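The "disguised survey" logic described above can be sketched in code. Everything here is illustrative: the topic lists and function name are hypothetical stand-ins, not the study's actual items.

```python
import random

# Hypothetical decoy topics of the kind a "marketing" survey might ask about,
# with a lone science-interest item buried among them.
DECOY_TOPICS = ["professional sports", "celebrity news", "cooking", "travel",
                "personal finance", "fashion", "cars", "gardening", "music"]
SCIENCE_ITEMS = ["new scientific discoveries"]

def build_block(decoys, science_items, seed=None):
    """One survey block: science item(s) embedded among decoy topics,
    presented in random order (in the study, each on its own screen)."""
    rng = random.Random(seed)
    block = decoys + science_items
    rng.shuffle(block)
    return block

block = build_block(DECOY_TOPICS, SCIENCE_ITEMS, seed=7)
```

Because the science items are indistinguishable from the decoys, respondents have no cue to inflate their professed interest in science relative to anything else.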
SCS_1.0 also featured an objective performance measure.
Well into the survey, subjects were advised that we were interested in their reactions to a news story “of interest” to them. In order to assure that the story was one that in fact matched their interests, they were furnished with discrete news story sets, the shared subject matter of which was identified by a header and reinforced by the individual story headlines and graphics. One set consisted of science stories; the others of stories on popular entertainment, sports, and financial news.
Subjects, we anticipated, were likely to find the prospect of reading a story and answering questions about it burdensome. Accordingly, the selection of the science set rather than one of the others would be a valid indicator of genuine science interest. Responses to this task were then used to validate the self-reported interest items, helping to furnish assurance of the genuineness of the latter.
When combined, the items displayed the requisite psychometric properties of a valid and reliable scale. Their unidimensional covariance structure warranted the inference that they were measuring the same latent disposition. Formed with item response theory, the composite scale weighted particular items in relation to the level of the disposition that responses to them evinced. The result was an index—SCS_1.0—that reflected a high degree of measurement precision along the entire population distribution of that trait (Embretson & Reise 2000).
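The IRT modeling itself is too involved to reproduce in a blog post, but the basic reliability logic behind combining items into one scale can be sketched. The sketch below is purely illustrative: the response data are invented, and Cronbach's alpha is a simpler classical statistic than the IRT machinery the Report actually uses.

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(responses):
    """Cronbach's alpha for `responses`: a list of respondents,
    each a list of k item scores on the same scale."""
    k = len(responses[0])
    items = list(zip(*responses))         # per-item score columns
    totals = [sum(r) for r in responses]  # each respondent's summed score
    return (k / (k - 1)) * (1 - sum(variance(c) for c in items) / variance(totals))

# Hypothetical responses to three interest items (1-5 scale); the items
# move together across respondents, so alpha comes out high.
responses = [[5, 4, 5], [4, 4, 4], [2, 1, 2], [1, 2, 1], [3, 3, 3]]
alpha = cronbach_alpha(responses)
```

Items that covary strongly drive alpha toward 1, supporting the inference that they tap a single latent disposition; IRT then goes further, weighting items by how much information they carry at each level of that disposition.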
As detailed in ESFI Study Report No. 1, subjects were instructed to watch a 10-minute clip from the science documentary Your Inner Fish. SCS_1.0 strongly predicted engagement with the clip as reflected not only in self-reported interest but also in objective measures such as duration of viewing time and subjects’ election (or not) to be furnished free access to the documentary as a whole.
SCS_1.0 is by no means understood to be an ideal science curiosity measure. Additional testing is necessary, both to assure the robustness of the scale and to refine its powers to discern the motivation to seek out and consume science information for pleasure.
Moreover, SCS_1.0 was self-consciously designed to assess this disposition in adult members of the public; variants would be appropriate for specialized populations including elementary or secondary school students.
But what SCS_1.0 does, we believe, is initiate a process that there is every reason to think will generate measures of genuine value to researchers interested in assessing science curiosity in the general public and in specialized subpopulations. The researchers associated with CCP’s ESFI and other evidence-based science communication initiatives are eager to participate in that process. But they are also eager to stimulate others to participate in it, either by building on and extending SCS_1.0 or by developing alternatives that genuinely predict behavior that manifests the motivation to seek out and consume scientific information.
Existing “science interest” measures just don’t do that. SCS_1.0 shows that it is possible to do much better.
Besley, J.C. The state of public opinion research on attitudes and understanding of science and technology. Bulletin of Science, Technology & Society, 0270467613496723 (2013).
Blalock, C.L., Lichtenstein, M.J., Owen, S., Pruski, L., Marshall, C. & Toepperwein, M. In Pursuit of Validity: A comprehensive review of science attitude instruments 1935–2005. International Journal of Science Education 30, 961-977 (2008).
Embretson, S.E. & Reise, S.P. Item response theory for psychologists (L. Erlbaum Associates, Mahwah, N.J., 2000).
Fischhoff, B. & Scheufele, D.A. The science of science communication. Proceedings of the National Academy of Sciences 110, 14031-14032 (2013).
Loewenstein, G. The psychology of curiosity: A review and reinterpretation. Psychological Bulletin 116, 75 (1994).
National Science Foundation. Science and Engineering Indicators, 2010 (National Science Foundation, Arlington, Va., 2014).
Osborne, J., Simon, S. & Collins, S. Attitudes towards science: A review of the literature and its implications. International journal of science education 25, 1049-1079 (2003).
Reio, T.G., Jr., Petrosko, J.M., Wiswell, A.K. & Thongsukmag, J. The measurement and conceptualization of curiosity. The Journal of Genetic Psychology 167, 117-135 (2006).
Are Americans who “disbelieve in” human evolution as likely as those who “believe in” it to be interested in a science documentary on our species’ natural history? Would they accept the evidence in such a documentary as valid and convincing?
“No” and “no” would seem to be the obvious answers. It’s not as if those who reject human evolution just haven’t been shown the proof yet. However skillfully presented, then, another exposition of evolutionary science, one might think, would be more likely to antagonize them than to pique their interest.
The study involved a nationally representative sample of 2500 U.S. adults. In line with national survey findings that haven't changed for decades (Newport 2014), about 40% of the subjects selected “false” in response to the survey item “Human beings evolved from an earlier species of animal.”
Study subjects were instructed to view as much or as little as they chose of a 10-minute science documentary segment. The segment was excerpted from Your Inner Fish, an award-winning documentary on evolution that was produced by ESFI collaborator Tangled Bank Studios and that was broadcast on PBS in 2014. The excerpt in question examined the origins of color vision in humans.
The study also measured subjects’ science curiosity and science comprehension. Both of these dispositions were positively correlated with subjects’ acceptance of evolution. But the strength of the relationships was quite modest. Among those who “believed” in evolution and among those who did not, there were ample numbers of study subjects high in science comprehension and science curiosity, and ample numbers of people who were high in neither.
Unsurprisingly, those subjects who ranked highest in science curiosity were substantially more engaged by the segment. The more curious subjects were, the more likely they were to watch all or a substantial portion of it; to report finding it interesting; and to supply the information necessary to receive free access to the remainder of the documentary (responses aggregated to form an "Engagement Index").
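The aggregation behind the "Engagement Index" can be illustrated with a crude sketch. The data below are invented and the method (simple z-score averaging) is a stand-in for whatever weighting the actual study employed; the point is only how heterogeneous indicators on different scales can be combined into one index.

```python
import math

def zscores(xs):
    """Standardize a column of raw scores to mean 0, sd 1."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / sd for x in xs]

# Hypothetical indicators for five subjects:
viewing_secs   = [600, 540, 120, 300, 60]  # how long they watched the clip
self_interest  = [5, 4, 2, 3, 1]           # self-reported interest, 1-5
requested_full = [1, 1, 0, 1, 0]           # requested free access to the film?

cols = [zscores(viewing_secs), zscores(self_interest), zscores(requested_full)]
engagement_index = [sum(vals) / len(vals) for vals in zip(*cols)]
```

Subject 0 watched longest, reported the most interest, and requested the full documentary, so that subject tops the index; standardizing first keeps seconds of viewing time from swamping the 0/1 behavioral indicator.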
The intensity of the relationship between curiosity and engagement was no less pronounced, moreover, in subjects who said they did not “believe in” evolution than it was among those who said they did. Low-curiosity evolution “disbelievers” were in fact slightly less engaged than low-curiosity “believers.” But neither of those low-curiosity subgroups was nearly as engaged by the clip as were evolution “nonbelievers” who scored high on the science curiosity scale.
This is evidence, then, that yes, an evolution “nonbeliever” can enjoy an evolution-science documentary—one that uses experiments on monkeys no less to support inferences about the impact of random mutation, natural selection, and genetic variance on modern humans’ perception of color.
How much an evolution “nonbeliever” will enjoy this documentary depends, the study suggests, on exactly the same thing that an evolution “believer’s” level of enjoyment does: how motivated he or she is to seek out and consume information on science for personal satisfaction--or in a word, how curious that person is about science.
Can an evolution “nonbeliever” find the evidence presented in such a documentary both valid and convincing?
The answer to this question is also "yes"—particularly if he or she is generally curious about science.
A low-curiosity evolution “nonbeliever” was about as likely to disagree as he was to agree that the clip was “convincing,” and that it “supplied strong evidence of how humans acquired color vision.” But the probability a high-curiosity “nonbeliever” would agree with these characterizations of the validity of the information in the segment was well over 75%.
Note, though, that the curious “nonbelievers” who indicated that they found the evidence “strong” and “convincing” did not “change their minds” on human evolution.
Is that surprising? It won’t be to anyone familiar with empirical study of the relationship between professions of “belief” in evolution and comprehension of science.
That research consistently finds no correlation between how people respond to “true-false” human-evolution survey items and their ability to give a cogent account of natural selection, genetic variance, and random mutation (Shtulman 2006; Demastes, Settlage & Good 1995; Bishop & Anderson 1990).
Researchers also find that students who say they don’t believe in evolution can learn these important insights just as readily as those who say they do believe in it—as long as the teacher doesn’t make the mistake of conveying that the point of the instruction is to extract a profession of “belief” from the former, a style of pedagogy that needlessly pits students’ interest in learning against their interest in being faithful to their cultural identities (Lawson & Worsnop 1992).
What people say they “believe” about human evolution doesn’t indicate what people know; it expresses who they are, culturally speaking (Long 2011).
Professing rejection of evolution coheres with a cultural style that features religiosity (Roos 2012). It is precisely because the answer “false” signifies their defining commitments that individuals with this identity balk when educators make the mistake (itself a sign of inattention to empirical research) of conflating transmission of knowledge with extracting professions of “belief” in it.
When put in the position of having to choose between being who they are and expressing what they know, free, reasoning people understandably opt for the former (Hameed 2015). Indeed, they can be expected to dedicate all of their reasoning proficiency to doing so: the higher the science literacy score of someone who subscribes to a religious cultural identity, the more likely he or she is to respond negatively to the “true-false” survey item “human beings evolved from an earlier species of animal” (Kahan 2015).
Our study captured this form of identity-protective cognition, too.
Again, science curiosity was positively correlated with levels of engagement and with levels of perceived validity for both evolution believers and evolution nonbelievers. But this was not the case for science comprehension: as subjects’ scores on the Ordinary Science Intelligence assessment test (Kahan in press; Kahan, Peters et al. 2012) increased, evolution believers became more engaged and more convinced by the clip, while evolution disbelievers became less so.
This result was driven by the negative reactions of evolution nonbelievers who were simultaneously high in science comprehension and low in science curiosity. These study subjects were by far the least engaged by the clip and the least likely to view the evidence it presented as valid.
Nonbelievers who scored high on both the science curiosity and science comprehension scales, in contrast, were highly engaged by the documentary segment and highly likely to deem it a strong and convincing account of the origins of human color vision.
People use their reason for multiple ends. One of these is to form the dispositions and attitudes that enable them to reliably experience and express their commitment to a shared way of life. Another of these is to attain goals—from personal health to professional success—that can be effectively achieved only with what science knows (Kahan 2015).
People who are curious about science have a goal that those who aren’t curious don’t: to satisfy their appetite to understand the insights generated by use of science’s signature methods of observation, measurement, and inference. EFSI Study 1 shows that such a person can satisfy that goal by enjoying a skillfully made science documentary about evolution even if she has an identity that is itself enabled by professing “disbelief” in it.
In this respect, the results of the study are in line with those that show that individuals who hold a religious identity associated with disbelief in evolution can still learn what science knows about the natural history of human beings and, if they choose, even use that knowledge to engage in activities, such as the practice of medicine or scientific research, that are uniquely enabled by such knowledge (Lawson & Worsnop 1992; Everhart & Hameed 2013).
People who are low in science curiosity can be expected to engage information on it for one purpose only: to be the sorts of persons, culturally speaking, enabled by their respective states of “belief” or “disbelief.” Making use of information for that end is another one of the things people can do even better if they possess the sort of reasoning proficiency associated with high science comprehension. Accordingly, individuals who scored high in science comprehension but low in science curiosity (the two dispositions are only weakly correlated) predictably formed attitudes—of “engagement” and “acceptance”—that accurately manifested their cultural identities.
What to make of all this?
Well, for one thing, it is very much worth acknowledging that this interpretation of the data from ESFI Study No. 1, like all interpretations of any data, is provisional. Additional studies, additional evidence might well furnish grounds for revising this understanding.
But it’s also very much worth pointing out that the engagement enjoyed by science-curious evolution “nonbelievers,” as well as the experience of edification reflected in their response to the study's “accuracy” items, belies the simple—indeed simplistic—picture of how those who profess any particular “position” on evolution feel about science.
In particular, it is wrong to infer that those who profess nonacceptance necessarily lack either the desire to know or the capacity to experience awe and wonder at the knowledge human beings have acquired through science, including the astonishing insights into their own natural history.
Because science curiosity does not discriminate on the basis of cultural identity, it would be a mistake for anyone who is genuinely committed to communicating science in culturally pluralistic society to adopt a style of discourse that forces curious, reflective people to choose between satisfying their appetite to know what’s known to science and being the sort of person that they are.
Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).
Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).
Lawson, A.E. & Worsnop, W.A. Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching 29, 143-166 (1992).
Newport, F. In U.S., 42% Believe in Creationist View of Human Origins. Gallup (June 14, 2014), http://www.gallup.com/poll/170822/believe-creationist-view-human-origins.aspx.
Roos, J.M. Measuring science or religion? A measurement analysis of the National Science Foundation sponsored science literacy scale 2006–2010. Public Understanding of Science (2012).
Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006).
As the 14 billion readers of this blog can attest, when I say I'm going to do something "tomorrow" or "Monday" or "soon" or "June 31"-- I'm not kidding around: I mean "tomorrow" or "Monday" or "soon" or "June 31" or whatever the heck I said.
Here is CCP's new Evidence-based Science Filmmaking Initiative! (aka "Science of science filmmaking" initiative--title soon to be put to a vote on this site)!
I'm not going to say a lot at this point. For one thing, there's plenty of material emanating from the "project page," so you can just poke around yourself all day on your own.
Also there's the Initiative's first "Report." It describes the results of a big preliminary study aimed at investigating the "Missing Audience Hypothesis" (a conjecture that was in fact featured in an earlier blog post and that provoked a pretty interesting discussion).
The study had all kinds of cool things in it, including a "Science Curiosity Scale" (SCS) which was self-consciously designed to remedy (or at least start to remedy) the defects in existing measures. As discussed previously in this blog (as I've mentioned innumerable times, I am loath to repeat myself in my posts, but I'll make an exception here), existing "science curiosity" measures are dominated by ill-formulated self-report items that exhibit lousy psychometric performance and that have never been shown to predict behavior evincing an interest in science.
Our "SCS" index includes some self-report measures (discretely bundled in with numerous other types of items of the sort that one might expect to see if one were participating in a consumer-marketing survey), but it combines them with performance and behavioral ones.
To validate SCS, we--the ESFI science filmmaking professionals and "science of science communication" researchers who collaborated on this study-- assessed its power to predict the level of subject engagement (also behaviorally measured) with a segment of a cool science documentary, Your Inner Fish, produced by ESFI collaborator Tangled Bank Studios.
The study also found some other really really cool things, including how engagement interacted with "belief in" evolution and science comprehension.
But I'll spare you the details.
Why? Because they are summarized in the "project pages," and spelled out in even greater detail in the Report, which of course, you can download!
I'll also say more on various of these matters in subsequent posts, which will supplement the analyses and interpretations in the project pages and Report.
In case you haven't noticed, I'm loath ever to repeat myself in this blog. So I will hold back for now.
And say more "tomorrow."
But by all means, feel free to offer your own views on any of the materials that appear in the Report or the sections of the site dedicated to ESFI, whose members consist of both accomplished science communication professionals and empirical researchers, all eager to explore the integration of the science of science communication into the craft of science filmmaking.
Weekend update: the anti- "fact inventory conception of science literacy" movement is gaining ground on Tea Party & Trascism [Trump+Fascism]; to eclipse them, only thing it needs is a catchier name!
A friend pointed me toward this really interesting article:
The bigger issue, however, is whether we ought to call someone who gets those questions right “scientifically literate.” Scientific literacy has little to do with memorizing information and a lot to do with a rational approach to problems....
[T]he interpretation of data requires critical thinking.... Our schools don’t train people to be vigilant about avoiding errors such as confounding correlation and causation, however, nor do they do a good job of rooting out confirmation bias or teaching the basics of statistics and probabilities. All of this leads to the propagation of a lot of nonsense in the press and internet, and it leaves people vulnerable to the flood of “facts.”
It’s not possible for everyone—or anyone—to be sufficiently well trained in science to analyze data from multiple fields and come up with sound, independent interpretations. I spent decades in medical research, but I will never understand particle physics, and I’ve forgotten almost everything I ever learned about inorganic chemistry. It is possible, however, to learn enough about the powers and limitations of the scientific method to intelligently determine which claims made by scientists are likely to be true and which deserve skepticism. . . . Most importantly, if we want future generations to be truly scientifically literate, we should teach our children that science is not a collection of immutable facts but a method for temporarily setting aside some of our ubiquitous human frailties, our biases and irrationality, our longing to confirm our most comforting beliefs, our mental laziness. Facts can be used in the way a drunk uses a lamppost, for support. Science illuminates the universe.
For sure I couldn't have said this better. Anyone can confirm this for him- or herself by reviewing the various posts I've written criticizing the "fact inventory" conception of science literacy and defending an "ordinary science intelligence" alternative that features the types of critical reasoning proficiencies essential to recognizing and making use of valid scientific evidence.
Maybe I'm jumping the gun, but I hope this thoughtful and reflective article is a harbinger of more of the same, and the beginning of a wider discussion of this problem.
If I have any quibble with Teller's argument, though, it is over what the nature of the problem actually is.
Teller starts with the premise that the U.S. public has a poor comprehension of science and attributes this to the "fact inventory" conception of science literacy.
She might be right-- but I'm not sure.
I'm not sure, that is, that the American public's science comprehension is as poor as she assumes it is. The reason I'm not sure is that I don't think we've been assessing the general public's science comprehension with a valid measure of that capacity -- one that features critical reasoning proficiencies rather than a "fact inventory"!
Developing a public science comprehension measure focused on the reasoning proficiencies that Teller convincingly emphasizes has been one focus of CCP research over the last few years. The progress made so far in that effort is reflected in the current version, "2.0," of the "Ordinary Science Intelligence" assessment test (Kahan in press).
As discussed in previous posts, OSI_2.0 doesn't try to certify respondents' acquisition of any set of canonical "factual" beliefs.
Instead, it uses quantitative and critical reasoning items that are intended to assess a latent or unobserved disposition suited for recognizing and making appropriate use of valid empirical evidence in one's "ordinary," everyday life as a consumer, a participant in today's economy, and as a democratic citizen.
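For concreteness, the latent-trait scaling typically used in assessments of this kind (item response theory) can be sketched as follows. This is a toy two-parameter logistic (2PL) model with invented item parameters and a crude grid-search estimator -- an illustration of the general technique, not the actual OSI_2.0 scoring procedure:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability that a respondent with latent science
    comprehension `theta` answers an item correctly, given the item's
    discrimination `a` and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses):
    """Log-likelihood of a full response pattern (1 = correct, 0 = wrong)."""
    ll = 0.0
    for (a, b), x in zip(items, responses):
        p = p_correct(theta, a, b)
        ll += math.log(p) if x else math.log(1.0 - p)
    return ll

def estimate_theta(items, responses):
    """Crude maximum-likelihood estimate of theta via grid search
    over [-3, 3] -- for illustration only."""
    grid = [i / 100.0 for i in range(-300, 301)]
    return max(grid, key=lambda t: log_likelihood(t, items, responses))

# Hypothetical item parameters (discrimination, difficulty) --
# made up here, not drawn from any real instrument.
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]

# A respondent who answers the three easier items correctly
# gets a theta estimate somewhere in the middle of the scale.
theta_hat = estimate_theta(items, [1, 1, 1, 0])
```

The point of scoring this way rather than counting correct answers is that items are weighted by how sharply they discriminate among levels of the underlying disposition, which is exactly what a "fact inventory" tally cannot do.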
Since at least 1910 (my memory is hazy for events earlier than that), when Dewey published his famous "Science as Subject-Matter and as Method," the idea that science pedagogy should be focused on cultivating the distinctive reasoning proficiencies associated with making valid inferences from reliable observations has exerted a powerful force on the imaginations and motivations of a good number of educators and scholars (today I think of Jon Baron (1993, 2008) as the foremost champion of this view).
One thing they've learned is that imparting this sort of capacity is easier said than done!
But in any event, they are right -- as is Teller -- that this kind of thinking disposition is the proper object of science education.
The much more pedestrian point I find myself making now & again is that we really don't have a good general public measure of this capacity -- and so aren't even in a good position to figure out how well or poorly we are doing in equipping citizens with it.
Necessarily, too, without such a good measure, we won't be as smart as we ought to be about what contribution defects in science comprehension are making, if any, to public controversies over climate change, nuclear power, the HPV vaccine, and other issues that turn on decision-relevant science.
Teller cites the 2012 CCP study that found that higher science literacy is associated with greater polarization, not less, on climate change risks (nuclear power ones too).
I think that study helps to show that this sort of conflict is not plausibly attributed to defects in science comprehension. Precisely b/c I and my collaborators agree with Teller that a "fact inventory" conception of "science literacy" is defective, we used a science comprehension measure-- OSI_1.0-- that combined certain NSF Indicator "basic fact" items with a Numeracy battery, which has been shown to be highly effective in measuring the capacity of ordinary members of the public & others to reason well with quantitative information.
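The basic shape of that finding -- an interaction between political outlook and science comprehension in predicting risk perception -- can be illustrated with a toy simulation. Everything here (coefficients, coding, sample size) is invented for illustration; this is not the CCP data or the study's actual model:

```python
import random
random.seed(0)

def simulate_risk_perception(n=2000, interaction=1.0):
    """Simulate respondents whose climate-risk perception depends on
    political outlook, science comprehension (osi), and -- crucially --
    their interaction. All coefficients are made up for illustration."""
    rows = []
    for _ in range(n):
        outlook = random.choice([-1, 1])   # two cultural/political groups
        osi = random.gauss(0, 1)           # science comprehension score
        risk = 0.5 * outlook + interaction * outlook * osi \
               + random.gauss(0, 0.5)      # noise
        rows.append((outlook, osi, risk))
    return rows

def left_right_gap(rows):
    """Mean difference in risk perception between the two groups."""
    left = [risk for outlook, osi, risk in rows if outlook == -1]
    right = [risk for outlook, osi, risk in rows if outlook == 1]
    return abs(sum(left) / len(left) - sum(right) / len(right))

rows = simulate_risk_perception()
low_osi = [r for r in rows if r[1] < 0]
high_osi = [r for r in rows if r[1] >= 0]
# In data built with an outlook x comprehension interaction, the
# between-group gap widens among high scorers rather than shrinking.
```

If better comprehension produced convergence on the evidence, the gap would shrink among high scorers; the interaction term is what makes it grow instead, which is the pattern the 2012 study reported.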
And the same is true of people who score the highest even on the reasoning-proficiency-centered OSI_2.0:
But the few who actually can reliably identify its causes and consequences (as measured by version 1.0 of the "Ordinary Climate Science Intelligence" test, an assessment based on "climate science literacy" items drawn from NASA, NOAA, and the IPCC) are also the most politically polarized on the question of whether human activity is the principal cause of climate change -- or indeed on whether climate change is happening at all (Kahan 2015a).
That evidence has led me to conclude that the conflict over climate change (not to mention numerous other disputed issues of science) isn't about what people know. It is about who they are: the "beliefs" people form on these issues are ones suited to helping them form affective orientations toward these issues that effectively signal their membership in & loyalty to groups embroiled in a nasty form of cultural status competition....
That problem isn't being caused by any deficiency in science education in this country.
On the contrary, that problem is preventing our democracy from getting the benefit of whatever scientific knowledge & reasoning capacity we have managed to impart to our citizens.
If we want enlightened democracy, we better figure out how to extricate science from these sorts of ugly, illiberal, reason-eviscerating forms of cultural conflict (Kahan 2015b).
Of course, these are provisional conclusions, informed by what I regard as the best available evidence.
But the best evidence available definitely isn't as good as it should be for exactly the reason that Teller describes so articulately: we don't possess as good a measure of public science comprehension as we ought to have.
The scale development exercise that generated OSI_2.0 is offered as an admittedly modest contribution to an objective of grand dimensions. How ordinary citizens come to know what is collectively known by science is simultaneously a mystery that excites deep scholarly curiosity and a practical problem that motivates urgent attention by those charged with assuring democratic societies make effective use of the collective knowledge at their disposal. An appropriately discerning and focused instrument for measuring individual differences in the cognitive capacities essential to recognizing what is known to science is essential to progress in these convergent inquiries.
The claim made on behalf of OSI_2.0 is not that it fully satisfies this need. It is presented instead to show the large degree of progress that can be made toward creating such an instrument, and the likely advances in insight that can be realized in the interim, if scholars studying risk perception and science communication make adapting and refining admittedly imperfect existing measures, rather than passively employing them as they are, a routine component of their ongoing explorations.
Not as articulate as Teller-- but the best I can do!
And hey-- if my best motivates others who can do a better job still, then I figure I'm doing my part.
Baron, J. Why Teach Thinking?‐An Essay. Applied Psychology 42, 191-214 (1993).
Dewey, J. Science as Subject-matter and as Method. Science 31, 121-127 (1910).
Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Study of Risk and Science Communication, with Notes on Evolution and Climate Change. J. Risk Res. (in press).
Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).