
Sunday
Feb 28, 2016

Weekend update: Disentanglement Principle, A Lesson from SE Fla Climate Political Science Lecture (& slides)

A presentation I gave at a meeting of the Institute for Sustainable Communities, a major partner of the Southeast Florida Regional Climate Compact. Synthesizes research CCP (with generous support from the Skoll Global Threats Foundation) has done to support Compact science communication.

If the Compact members have learned from our research even 10^-2 of what they've taught us about what "climate change" means and what it takes to have the right conversation & banish the wrong conversation about it, then I'll feel we've done something pretty important. Even more important will be to help others learn the lessons of Southeast Florida Climate Political Science . . . . 

Slides here.

Thanks, too, to Diet Coke, official beverage of CCP Lab

Thursday
Feb 25, 2016

Science curiosity and identity-protective cognition ... a glimpse at a possible (negative) relationship 

So here is a curious phenomenon: unlike pretty much every other science-related reasoning disposition, science curiosity seems to avoid magnifying identity-protective cognition!

Let's start with a bunch of culturally contested societal risks, ones on which political polarization can be assessed with the ever-handy Industrial Strength Risk Perception Measure:

Now consider:


For each risk, the paired panels chart the risk-perception impact of greater science comprehension and greater science curiosity (in each case “controlling for” the influence of the other), respectively.  They estimate those effects separately, moreover, for a "liberal Democrat" and for a "conservative Republican," designations determined by reference to the study subjects' scores on a composite political ideology and party-identification scale.
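(For the statistically curious, here is a minimal sketch of the kind of model behind estimates like these. It runs on synthetic data, and all the variable names--pid7, ideo5, osi, scs, risk--are hypothetical stand-ins, not the actual CCP analysis code.)

```python
# Minimal sketch, not the actual CCP analysis. Synthetic data throughout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
z = lambda x: (x - x.mean()) / x.std()

df = pd.DataFrame({
    "pid7":  rng.integers(1, 8, n),   # 7-point party identification
    "ideo5": rng.integers(1, 6, n),   # 5-point liberal-conservative ideology
    "osi":   rng.normal(size=n),      # science comprehension (standardized)
    "scs":   rng.normal(size=n),      # science curiosity (standardized)
})
# Composite right-left political outlook scale from the two political items
df["right_left"] = z(z(df["pid7"]) + z(df["ideo5"]))
# Fake ISRPM rating built to mimic the reported pattern: comprehension
# widens the partisan gap; curiosity moves everyone in the same direction.
df["risk"] = 5 - 2 * df.right_left * (1 + df.osi) + 0.5 * df.scs + rng.normal(size=n)

m = smf.ols("risk ~ right_left * (osi + scs)", data=df).fit()

# Predicted risk perceptions for a "liberal Democrat" (-1 SD) and a
# "conservative Republican" (+1 SD) at low vs. high science curiosity,
# holding science comprehension at its mean
profiles = pd.DataFrame({
    "right_left": [-1, -1, 1, 1],
    "scs":        [-1,  1, -1, 1],
    "osi":        [ 0,  0,  0, 0],
})
print(m.predict(profiles))
```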

As science comprehension (measured with the Ordinary Science Intelligence assessment) increases, so too does the degree of polarization on politically contested risks involving climate change, gun possession, fracking, marijuana legalization, pornography, and the like. 

That’s not a surprise.  The warping effect of identity-protective cognition on cognitive reflection, numeracy, science comprehension and all other manner of critical reasoning proficiency has been exhaustively chronicled, and lamented, in this blog.

But that’s not what happens as science curiosity increases.  On the contrary, in all cases, greater science curiosity has the same general risk-perception impact—in some cases enhancing concern, in some blunting it, and in others having no directional effect—for study respondents of politically diverse outlooks.

Science curiosity is being measured for these purposes with the CCP/Annenberg Public Policy Center “Science Curiosity Scale,” or SCS_1.0.  

SCS_1.0 was developed for use in the CCP/APPC “Evidence-based Science Filmmaking Initiative.” Previous posts have discussed the development and properties of this measure, including its ability to predict engagement with science documentaries and other forms of science information among diverse groups.

So have posts on its relationship to random other non-science-related activities, such as taking a peek at what goes on at gun shows and even cracking open a book on religion now & again.

But this feature of SCS_1.0—its apparent ability to defy the gravitational pull of identity-protective cognition on perceptions of disputed risks—is something I didn’t anticipate. . . .

Indeed, I really don’t want to give the impression that I “get” this, it makes “perfect sense,” etc. Or even that there’s necessarily a “there” there.

An observation like this is just corroboration of the fundamental law of the conservation of perplexity, which refers to the inevitable tendency of valid empirical research to generate one new profound mystery (at least one!) for every mystery that it helps to make less perplexing (anyone who thinks “mysteries” are ever solved by empirical inquiry has a boring conception of “mystery” or, more likely, a misconception of how empirical research works).

But here are some thoughts:

1. It does in fact make sense to think of curiosity as the cognitive negation of motivated reasoning.  The latter disposition consists in the unconscious impulse to conform evidence to beliefs that serve some goal (like cultural identity protection) unrelated to figuring out the truth about some uncertain factual matter.  Curiosity, in contrast, is an appetite not only to know the truth but to be surprised by it: it consists in a sense of anticipated pleasure in being shown that the world works in a manner that is astonishingly different from what one had thought, and in being able to marvel at the process that made it possible for one to see that. 

When one is in that state, the sentries of identity-protective cognition are necessarily standing down.  The path is clear for truth to march in and enlighten . . . .

2. At the same time, these data are pretty baffling to me.  No way did I expect to see this.

The affinity between identity-protective cognition and critical reasoning, I’m convinced, reflects the role the former plays in the successful negotiation of social interactions.  Where positions on disputed issues of risk become entangled in social meanings that transform them into badges of membership in and loyalty to opposing cultural groups, it is perfectly rational, at the individual level, for people to adopt styles of information processing that conduce to formation of beliefs that express their tribal allegiances.

Indeed, not to attend to information in this manner would put normal people—ones whose personal beliefs about climate change or fracking or gun control don’t have any material impact on the risks they or anyone else face—in serious peril of ostracism and ridicule within their communities.

I’d essentially come to the bleak, depressing, spirit-enervating conclusion, then, that the only reasoning disposition likely to blunt the force of identity-protective cognition was a social disability in the nature of autism.  

But now, for the 14 billionth time, I will have to rethink and reconsider. 

Because clearly the appetite to seek out and consume information about the insights human beings have acquired through the use of science’s signature methods of disciplined observation and inference is no reasoning disability.  And those who are most impelled to satisfy this appetite are clearly not using what they learn to forge even stronger links between their perceptions of how the world works and the views that express membership in their identity-defining affinity groups.

Or at least that’s one way to understand evidence like this.  Pending more investigation.

3.  All sorts of qualifications are in order.

a. For one thing, SCS_1.0 is a work in progress.  Additional tests to refine and validate it are in the works.

b. For another, science comprehension and science curiosity are not wholly unrelated! Actually, they aren’t strongly related; in the data set from which these observations come, the correlation is about 0.3. But that's not zero!

I actually tested for “interactions” between science comprehension (as measured by OSI_2.0) and science curiosity (as measured by SCS_1.0), and between the two of them and political outlooks.  The interactions were all pretty close to zero; they wouldn’t affect the  basic picture I’ve shown you above (but I am happy to show more pictures—just tell me what you want to see).
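(Again purely illustrative, reusing the synthetic data frame from the sketch above: the three-way interaction check looks something like this.)

```python
# Sketch of the interaction check described above (synthetic `df` from the
# earlier sketch; illustrative only).
import statsmodels.formula.api as smf

m = smf.ols("risk ~ osi * scs * right_left", data=df).fit()

# If curiosity amplified or damped the comprehension-by-politics pattern,
# the interaction terms (anything with a ":") would be sizable; here we
# just inspect them.
print(m.params.filter(like=":"))
print(m.pvalues.filter(like=":"))
```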

Still I don’t think the effect of science curiosity on identity-protective cognition can be made sense of without closer, more fine-grained examination of how much it alters the trajectory of polarization at different levels of science comprehension.

c.  Also, the impact of science curiosity is interesting only because it doesn’t magnify polarization. It doesn’t make it go away, as far as I can tell.  That’s important—for the reasons stated.  But a reasoning disposition that generated convergence among individuals of diverse cultural outlooks on culturally contested risks (as science comprehension does on culturally uncontested ones) would be much more remarkable—and important. 

We should be looking for that.  I’d say, though, that looking even harder at curiosity might help us detect whether there is such a reasoning quality—the “Ludwick factor” is the technical term, for those who’ve speculated on its possible existence . . .—and how it might be disseminated and stimulated.

For surely, that is a reasoning disposition that should be cultivated in the citizens of the Liberal Republic of Science.

But in the meantime, this unexpected, intriguing relationship can be contemplated by curious people with excitement and perplexity and with a desire to figure out even more about it.

So what do you think?

Monday
Feb 22, 2016

Believing as doing . . . evolution & climate change

Will be at Binghamton University this evening to talk about cognitive dualism & the disentanglement principle. The Pakistani Dr., the Kentucky Farmer, Krista, Manny, & Kant will be there, too. . . .

 

 

Friday
Feb 19, 2016

Replication indeed

 Where have I seen this before?...

 


 

Oh, right ...

 Click here to see for yourself.

Thursday
Feb 18, 2016

America's two "climate changes"

From something I'm working on.  Any one of the 14 billion regular readers of this blog could fill in the rest. But if you are one of the 1.3 billion people who on any given day visit this site for the first time, there's more on the "'Two climate changes' thesis" here & here, among other places. . . .

America’s two “climate changes”

There are two climate changes in America: the one people “believe” or “disbelieve” in in order to express their cultural identities; and the one people ("believers" & "disbelievers" alike) acquire and use  scientific knowledge about  in order to make decisions of consequence, individual and collective.  I will present various forms of empirical evidence—including standardized science literacy tests, lab experiments, and real-world field studies in Southeast Florida—to support the “two climate changes” thesis.  I will also examine what this position implies about the forms of deliberative engagement necessary to rid the science communication environment of the toxic effects of the first climate change and to make it habitable for enlightened democratic engagement with the second.

Wednesday
Feb 17, 2016

Do science curious evolution believers and science curious nonbelievers both like to go to the science museum? How about to gun shows?

As the 14 billion readers of this blog know, CCP and Annenberg Public Policy Center have teamed up with Tangled Bank Studios in an ongoing "Evidence based science filmmaking initiative."

I've described highlights from the first study (a more complete report on which can be downloaded here) in some earlier posts.  They include the development of a behaviorally validated "science curiosity" scale (one that itself involves performance and behavioral measures and not just self-reported interest ones), and the successful use of that scale to predict "engagement"--measured behaviorally, and not just with self-reported interest--in the cool Tangled Bank Studios documentary on evolution, Your Inner Fish.

Stay tuned for more reports about our findings in this ongoing project.

But for now, consider these interesting findings about the power of "SCS_1.0," the science curiosity scale we constructed, to predict one or another type of behavior.

The graphic shows, not surprisingly, that those who are more science curious are way more likely to do things like read science books and attend science museums.

Probably not that surprisingly, they might be slightly more likely to do other things, too, like go to an amusement park--or even a gun show--than science-uncurious people.  But they really aren't much more likely to do those things than the average member of the population.

In addition to estimating the predicted mean probabilities for these activities conditional on science curiosity for the entire sample (a large, nationally representative one), I've also estimated the predicted mean probabilities for individuals who say they "do" and "don't believe in" human evolution:
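(The graphic isn't reproduced here, but here is a minimal sketch of how such "predicted mean probabilities" can be estimated for one activity, say visiting a science museum. Synthetic data, hypothetical variable names; not the ESFI analysis code.)

```python
# Illustrative only: synthetic data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "scs": rng.normal(size=n),                    # science curiosity score
    "believes_evolution": rng.integers(0, 2, n),  # 1 = "believes in" evolution
})
# Fake outcome mimicking the reported pattern: curiosity predicts
# museum-going for believers and nonbelievers alike.
p = 1 / (1 + np.exp(-(-0.5 + 1.2 * df.scs)))
df["museum"] = (rng.random(n) < p).astype(int)

m = smf.logit("museum ~ scs * believes_evolution", data=df).fit(disp=False)

# Predicted mean probabilities at low/average/high curiosity, by belief
grid = pd.DataFrame({
    "scs": np.tile([-1.0, 0.0, 1.0], 2),
    "believes_evolution": np.repeat([0, 1], 3),
})
print(grid.assign(pr_museum=m.predict(grid).round(2)))
```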

One of the coolest things we found in ESFI Study No. 1 was that science curious individuals who "disbelieve in" evolution were just as engaged as science curious individuals who do believe in evolution.  In addition, they were both substantially more engaged than their science-noncurious counterparts, most of whom yawned and turned the show off after a couple of minutes, no doubt hoping that the survey would resume its focus on Honey Boo Boo, "Inflate-gate," and other non-science related topics used to winnow out those less interested in science than in other interesting things.

Individuals who "disbelieve" in evolution but who were high in science curiosity also indicated that they found the information in the documentary clip valid and convincing as an account of the origins of human color vision.

Of course, that didn't "change their minds" on evolution.  Their beliefs on that measure who they are, not what they know about science or what more they’d like to know about what human beings have discovered using science's signature methods of disciplined observation and inference.  The experience of watching the cool Your Inner Fish clip satisfied their appetite to know what science knows but it didn't make them into different people!

Indeed, I think it likely succeeded in the former precisely because it didn't evince any interest in accomplishing the latter.  It didn't put science curious people who have an identity associated with disbelief in evolution in the position of having to choose between being who they are and knowing what science knows.

Satisfying this criterion, which I've taken to calling the "disentanglement principle," is, I believe, a key element of successful science communication in pluralistic liberal society (Kahan 2015a, 2015b).

Anyway, check out what evolution believers & disbelievers do in their free time conditional on having the same level of science curiosity.  

Many of the same things -- but not all! 

I have ideas about what this means.  But I'm out of time for today!  So how about you tell me what you make of this?

References

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015a).

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm. 14, 1-12 (2015b).

 

 

Monday
Feb 15, 2016

Plata's Republic: Justice Scalia and the subversive normality of politically motivated reasoning . . . .

From Kahan, D.M. The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law, Harv. L. Rev. 126, 1-77 (2011):

 

. . . Plata's Republic . . .

Civis: It is “fanciful,” you say, to think that three district court judges “relied solely on the credibility of the testifying expert witnesses”[1] in finding that release of the prisoners would not harm the community?

Cognoscere Illiber: Yes, because “of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.”[2]

Civis: “Of course” judges with “different policy views” would have formed different beliefs about the consequences if they had evaluated the same expert evidence? Why? Surely the judges, like all nonspecialists, would agree that these are matters outside their personal experience. Are you saying the judges would ignore the experts and decide on partisan grounds?

Cognoscere Illiber: No. “I am not saying that the District Judges rendered their factual findings in bad faith. I am saying that it is impossible for judges to make ‘factual findings’ without inserting their own policy judgments” on such matters.[3] The “expert witnesses” here were of the sort trained to make “broad empirical predictions”—like whether “deficit spending will . . . lower the unemployment rate” or “the continued occupation of Iraq will decrease the risk of terrorism.”[4]

Civis: But people normally assert that their policy positions on criminal justice, economic policy, and national security are based on empirical evidence. It almost sounds as if you are saying things are really the other way around—that what they understand the empirical evidence to show is “necessarily based in large part upon policy views.”[5]

Cognoscere Illiber: Exactly what I am saying! Those sorts of “factual findings are policy judgments.”[6] Thus, empirical evidence relating to the consequences of law should be directed to “legislators and executive officials”—not “the Third Branch”[7]—since in a democracy it is the people’s “policy preferences,” not ours, that should be “dress[ed] [up] as factual findings.”[8]

Civis: Ah. Thanks for telling me—I had been naively taking all the empirical arguments in politics at face value. Silly me! Now I see, too, that those naughty judges were just trying to exploit my gullibility about policy empiricism. Shame on them!


[1] Plata, 131 S. Ct. at 1954 (Scalia, J., dissenting).

[2] Id.

[3] Id.

[4] Id. at 1954-55.

[5] Id. at 1954.

[6] Id.

[7] Id.

[8] Id. at 1955.

 

*  *  *

Brown v. Plata was among the most consequential decisions of the 2010 Term—in multiple senses. In Plata, California attacked an order, issued by a three-judge federal district court, directing the state to release more than 40,000 inmates from its prisons. It was not disputed that California prisons had for over a decade been made to store double their intended capacity of 80,000 inmates. The stifling density of the population inside—“200 prisoners . . . liv[ing] in a gymnasium,” sleeping in shifts and “monitored by two or three guards”; “54 prisoners . . . shar[ing] a single toilet”; “50 sick inmates . . . held together in a 12- by 20-foot” cell; “suicidal inmates . . . held for prolonged periods in telephone-booth sized cages” ankle deep in their own wastes—was amply documented (with photographs, appended to the Court’s opinion, among other things). The awful effect on the prisoners’ mental and physical health was indisputable, too (“it is an uncontested fact that, on average, an inmate in one of California’s prisons needlessly dies every six to seven days”). These conditions, the district court concluded, violated the Eighth Amendment. The district court also saw that there was no prospect whatsoever that the state, having repeatedly rejected prison-expansion proposals and now in a budget crisis, would undertake the massive expenditures necessary to increase prison capacity and staffing. Accordingly, it ordered the only relief that, to it, seemed possible: the release of the number of inmates that the court deemed sufficient to bring the prisons into compliance with minimally acceptable constitutional standards.

The Supreme Court, in a five to four decision, affirmed. The major issue of contention between the majority and dissenting Justices was what consequence the ordered prisoner release would have on the public safety, a consideration to which the district court was obliged to give “substantial weight” by the Prison Litigation Reform Act of 1995. The district court devoted 10 days of the 14-day trial to receiving evidence on this issue, and concluded that use of careful screening protocols would permit the state to release the necessary number of inmates “in a manner that preserves public safety and the operation of the criminal justice system.”

The determinations underlying this finding, Justice Kennedy noted in his majority opinion, “are difficult and sensitive, but they are factual questions and should be treated as such.” The district court had “rel[ied] on relevant and informed expert testimony” by criminologists and prison officials, who based their opinion on “empirical evidence and extensive experience in the field of prison administration.” Indeed, some of that evidence, Justice Kennedy observed, had “indicated that reducing overcrowding in California’s prisons could even improve public safety” by abating prison conditions associated with recidivism. Like its other findings of fact, the district court’s determination that the State could fashion a reasonably safe release plan was not “clearly erroneous.”

The idea that the district court’s public-safety determination was a finding of “fact” entitled to deferential review caused Justice Scalia to suffer an uncharacteristic loss of composure. Deference is due factfinders because they make “determination[s] of past or present facts” based on evidence such as live eyewitness testimony, the quality of which they are “in a better position to evaluate” than are appellate judges confined to a “cold record,” he explained. The public-safety finding of the three-judge district court, in contrast, consisted of “broad empirical predictions necessarily based in large part upon policy views.” “The idea that the three District Judges in this case relied solely on the credibility of the testifying expert witnesses is fanciful,” Scalia thundered.

Justice Scalia’s reaction to the majority’s reasoning in Plata is reminiscent of Wechsler’s to the Court’s in Brown. Like Scalia, Wechsler had questioned whether the finding in question—that segregated schools “retard the[] educational and mental development” of African American children—could bear the decisional weight that the Court was putting on it. But whereas Wechsler had only implied that the Court was hiding its moral-judgment light under an empirical basket—“I find it hard to think the judgment really turned upon the facts [of the case]”—Scalia was unwilling to bury his policymaking accusation in a rhetorical question. “Of course they [the members of the three-judge district court] were relying largely on their own beliefs about penology and recidivism” when they found that release was consistent with—indeed, might even enhance—public safety, Scalia intoned. “And of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.” “[I]t is impossible for judges to make ‘factual findings’ without inserting their own policy judgments, when the factual findings are policy judgments.”

Justice Scalia’s dissent is also akin to the reaction to “empirical factfinding” in the Supreme Court’s abortion jurisprudence. Justice Blackmun’s majority opinion in Roe v. Wade cited “medical data” supplied by “various amici” to demonstrate that “[m]odern medical techniques” had dissolved the state’s historic interest in protecting women’s health. “[T]he now-established medical fact . . . that until the end of the first trimester mortality in abortion may be less than mortality in normal childbirth” supported recognition of an unqualified right to abortion in that period. Ely, among others, challenged the Court’s empirics: “This [the medical safety of abortions relative to childbirth] is not in fact agreed to by all doctors—the data are of course severely limited—and the Court's view of the matter is plainly not the only one that is ‘rational’ under the usual standards.” In any case, “it has become commonplace for a drug or food additive to be universally regarded as harmless on one day and to be condemned as perilous on the next”—so how could “present consensus” among medical experts plausibly ground a durable constitutional right?

It can’t. “[T]ime has overtaken some of Roe’s factual assumptions,” the Court noted in Planned Parenthood of Southeastern Pennsylvania v. Casey. “[A]dvances in maternal health care allow for abortions safe to the mother later in pregnancy than was true in 1973, and advances in neonatal care have advanced viability to a point somewhat earlier.” Accordingly, culturally fueled enactments of and challenges to abortion laws continue—repeatedly confronting the Justices with new empirical questions to which their answers are denounced as motivated by “personal values.” * * *

The only citizens who are likely to see the Court’s decision as more authoritative and legitimate when it resorts to empirical fact-finding in culturally charged cases are the ones whose cultural values are affirmed by the outcome. * * *

This factionalized environment incubates collective cynicism—about both the political neutrality of courts and about the motivations behind empirical arguments in policy discourse generally. Indeed, Justice Scalia’s extraordinary dissent in Plata synthesizes these two forms of skepticism.

It was “fanciful,” Scalia asserted, to think that the three district court judges “relied solely on the credibility of the testifying expert witnesses.” One might, at first glance, see him as merely rehearsing his standard diatribe against “judicial activism.” But this is actually a conclusion that Scalia deduces from premises—ones that don’t enter into his standard harangue—about the nature of empirical evidence and policymaking. The experts’ testimony, he explains, dealt with “broad empirical predictions”—ones akin to whether “deficit spending will . . . lower the unemployment rate,” or whether “the continued occupation of Iraq will decrease the risk of terrorism.” For Scalia, the beliefs one forms on the basis of that sort of evidence are “inevitably . . . based in large part upon policy views.” It follows that “of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.” “I am not saying,” Justice Scalia stresses, “that the District Judges rendered their factual findings in bad faith.” “I am saying that it is impossible for judges to make ‘factual findings’ without inserting their own policy judgments” when assessing empirical evidence relating to the consequences of governmental action, “when the factual findings are policy judgments.”

In effect, Scalia is telling us to wise up, not to be snookered by the Court. Sure, people claim that their “policy positions” on matters such as crime control, fiscal policy, and national security are based on empirical evidence. But we all know that things are in fact the other way around: what one makes of empirical evidence is “inevitably” and “necessarily based . . . upon policy views.” At one point, Scalia describes the district court judges as having “dress[ed]-up” their “policy judgments” as “factual findings.” But those judges weren’t, in his mind, doing anything different from what anyone “inevitably” does when making “broad empirical predictions”: those sorts of “factual findings are policy judgments.” Empirical evidence on the consequences of public policy should be directed to “legislators and executive officials” rather than “the Third Branch,” Scalia insists. The reason, though, isn’t that the former are better situated to draw reliable inferences from the best available data. On the contrary, it is that it is a conceit to think that reliable inferences can possibly be drawn from empirical evidence on policy consequences—and so “of course” it is the “policy preferences” of the majority, rather than those of unelected judges, that should control.

It is hard to say what is more extraordinary: the substance of Scalia’s position or the knowing tone with which he invites us to credit it. One might think it would be shocking to see a Justice of the Supreme Court so brazenly deny the intention (capacity even) of democratically accountable officials to make rational use of science to promote the common good. But Scalia could not expect his logic to persuade unless he anticipated that readers would readily concur (“of course”) that empirical arguments in policy debate are a kind of charade.

Scalia, of course, had good reason to expect such assent. His argument reflects the perspective of someone inside the cognitively illiberal state—who senses that motivated reasoning is shaping everyone else’s perceptions, and who accepts that it must also be shaping his, even if at any given moment he is unaware of its influence. We have all experienced this frame of mind. The critical question, though, is whether we really believe that what we are experiencing when we feel this way is inevitable and normal—a style of collective engagement with empirical evidence that should in fact be treated as normative, as Scalia asserts, for the performance of our institutions. I don’t think that we do . . . .

Sunday
Feb 14, 2016

Weekend update: I want this book -- right now! ...

Friday
Feb 12, 2016

Will people who are culturally predisposed to reject human-caused climate change *believe* "97% consensus" social marketing campaign messages? Nope.

I’ve done a couple of posts recently on the latest CCP/APPC study on climate-science literacy. 

The goal of the study was to contribute to development of a successor to “OCSI_1.0,” the “Ordinary Climate Science Intelligence” assessment (Kahan 2015). Like OCSI_1.0, OCSI_2.0 is intended to disentangle what ordinary members of the public “know” about climate science from their identity-expressive cultural predispositions, which is what items relating to “belief” in human-caused climate change measure.

In previous posts, I shared data, first, on the relationship between perceptions of scientific consensus, partisanship, and science comprehension; and second on the specific beliefs that members of the public, regardless of partisanship, hold about what climate scientists have established.

Well, another thing we did was see how individuals with opposing cultural predispositions toward climate change react when “messaged” on “97% scientific consensus.”

As pointed out in the last post, people with opposing cultural outlooks overwhelmingly agree about what “climate scientists think” on numerous specific propositions relating to the causes and consequences of human-caused climate change. 

E.g., ordinary Americans—“liberal” and “conservative”—overwhelmingly agree that “climate scientists” have concluded that “human-caused global warming will result in flooding of many coastal regions.” True enough.

But they also agree, overwhelmingly, that climate scientists have concluded that “the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will increase the risk of skin cancer in human beings” and stifle “photosynthesis by plants.” Um, no.

These responses suggest that ordinary members of the public (again, regardless of their political orientation and regardless of whether they “believe” in climate change) get the basic gist of the weight of the evidence on human-caused global warming—viz., that our situation is dire—but have a pretty weak grasp of the details.

These items are patterned on science-literacy ones used to unconfound knowledge of evolutionary science from the identity-expressive answers people give to survey items on “belief” in human evolution. By attributing propositions to “climate scientists,” these questions don’t connote the sort of personal assent or agreement implied by “climate change belief” items.

Such questions thus avoid forcing respondents to choose between revealing what they “know” and expressing “who they are” as members of cultural groups whose identity is associated with pro- or con- attitudes toward assertions that human-caused climate change is putting society at risk. 

The question “is there scientific consensus on climate change,” in contrast, doesn’t avoid forcing respondents to choose between revealing what they know and expressing who they are.

Whatever their more particular group affinities, Americans are overwhelmingly pro-science.

Accordingly, being perceived to hold beliefs at odds with the best available scientific evidence marks one out as an idiot. A familiar idiom in the discourse of contempt, the accusation that one’s cultural group (defined in terms of political outlooks, religiosity, etc.) is “anti-science” is a profound insult.

Thus, for someone who holds a cultural identity expressed by climate skepticism, a survey item equivalent to “true or false—there’s expert scientific consensus that human beings are causing global warming” is tantamount to the statement “well, you and everyone you respect are genuine morons—isn’t that so?”

People with that identity predictably answer no, there isn’t scientific consensus on global warming—because that question, unlike more particular ones relating to what “climate scientists believe,” measures who they are, not what they know (or think they know) about science’s understanding of the impact of human activity on climate change. 

Messaging "scientific consensus" actually reinforces the partisan branding of positions on climate change, and thus frustrates efforts to promote public engagement with the best available evidence on how climate change is threatening their well-being.

Or that’s how I understood the best available evidence before conducting this study. 

But maybe I’m wrong.  If I am, I’d want to know that; and I’d want others to know it, too, particularly insofar as I’ve made my findings in the past known and have reason to think that people making practical decisions—important ones—might well be relying on them.

So in addition to collecting data on what people “believe” about human-caused global warming and on what they perceive climate scientists to believe, we showed study subjects (members of a large, nationally representative sample) an example of the kind of materials featured in “97% consensus” social-marketing campaigns.

Specifically, we showed them this graphic, which was prepared for the AAAS by researchers who advised them that disseminating it would help to “increase acceptance of human caused climate change.”

We then simply asked those who had been shown the AAAS message “do you believe the statement '97% of climate scientists have concluded that human activity is causing global climate change' ”?

Overall, only 55% of the subjects said “yes.” 

That would be a great showing for a candidate in the New Hampshire presidential primary.  But my guess is that AAAS, the nation’s premier membership association for scientists, would not be very happy to learn that 45% of those who were told what the organization has to say about the weight of scientific opinion on one of the most consequential science issues of our day indicated that they thought AAAS wasn't giving them the straight facts.

What’s more, we know that the percentage of people who already believe in human-caused climate change is about 55%, and that the issue is one characterized by extreme political polarization.

So it's pretty obvious that if one is genuinely trying to gauge the potential effectiveness of this “messaging strategy,” one should assess what impact it will have on people whose political outlooks predispose them not to believe in human-caused climate change.

Here’s the answer:

Basically, the more conservative a person is, the less likely that individual is to believe the AAAS's magical "science communication" pie chart.
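(For concreteness, here's a minimal sketch of the kind of model such a summary reflects. Synthetic data, hypothetical variable names; the actual study used a fuller right-left composite.)

```python
# Illustrative only: synthetic data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1500
df = pd.DataFrame({"conserv_repub": rng.normal(size=n)})
# Fake responses mimicking the reported pattern: roughly 55% overall belief
# in the "97%" statement, declining with conservatism.
p = 1 / (1 + np.exp(-(0.2 - 1.0 * df.conserv_repub)))
df["believes_msg"] = (rng.random(n) < p).astype(int)

m = smf.logit("believes_msg ~ conserv_repub", data=df).fit(disp=False)
# Predicted probability of crediting the AAAS graphic at -1, 0, +1 SD
print(m.predict(pd.DataFrame({"conserv_repub": [-1.0, 0.0, 1.0]})))
```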

Unsurprisingly, this resistance to accepting the AAAS “message” is most intense among white male conservatives, the group in which denial of climate change is strongest (McCright & Dunlap 2012).

Or really just to make things simple, the only people inclined to believe the science communication being "socially marketed" in this way are those who are already inclined to believe (and almost certainly already do believe) in human-caused climate change.

Could this really be a surprise? By now, nearly a decade after the first $300 million "consensus" marketing campaign, those who reject climate change are surely very experienced at discounting the credibility of those who are "marketing" this "message."

Now, remember, these are the same respondents who, regardless of their political outlooks, overwhelmingly agree with propositions attributing to “climate scientists” all manner of dire prediction, true or false, about the impact of human-caused climate change.

There's a straightforward explanation for these opposing reactions.

People understand agreeing with fine-grained, particular test items to convey their familiarity with what climate scientists are saying.

They understand accepting “97% consensus messaging” as assenting to the charge that they and others who share their cultural identity are cretins, morons—socially incompetent actors worthy of ridicule.

Far from promoting acceptance of scientific consensus by persons with this identity, the contempt exuded by this form of "messaging" reinforces the resonances that make climate skepticism such a potent symbol of commitment to their group.

It’s patently ridiculous to think that “97% messaging” will change the minds of, rather than antagonize, these individuals, who make up the bulk of the climate-skeptical population.

Indeed, the probability that a conservative Republican who rejects human-caused climate change will believe the AAAS message is lower than the probability that he or she will already believe  that there’s scientific consensus on climate change. 

So even in the unlikely (very unlikely!) event that such a person credited the AAAS statement, the chance that he or she will profess “belief in” human-caused global warming is even less likely.

This “message” was one designed by social marketers who produced research that they characterize as showing that 97% consensus messaging “increased belief in climate change” in a U.S. general population sample.

Except that’s not what the researchers’ studies found.  The "97% message" increased  study subjects' estimates of the precise numerical percentage of climate scientists who subscribe to the consensus position. But the researchers did not find an increase in the proportion of study subjects who said they themselves "believe" human activity is causing climate change.

Empirical research is indeed essential to promoting constructive public engagement with scientific consensus on climate change. 

But studies can do that only if researchers report all of their findings, and describe their results in a straightforward and non-misleading way.

When, in contrast, science communication researchers treat their own studies as a form of “messaging,” they only mislead and confuse people who need their help.

References


McCright, A.M. & Dunlap, R.E. Bringing ideology in: the conservative white male effect on worry about environmental problems in the USA. J Risk Res, 1-16 (2012).


 

 

Thursday
Feb 11, 2016

C'mon down! Let's talk about culture, rationality & the tragedy of the #scicomm commons today at Mizzou

 

If you can't make it, this will probably give you a decent idea of what I'm thinking of saying.

 

Tuesday
Feb 9, 2016

"They already got the memo" part 2: More data on the *public consensus* on what "climate scientists think" about human-caused global warming

So as I said, CCP/Annenberg PPC has just conducted a humongous study on climate science literacy.

Yesterday I shared some data on the extent to which ordinary members of the public are politically polarized both on human-caused global warming and on the nature of scientific consensus relating to the same.

I said I was surprised b/c  there was less division over whether “expert climate scientists” agree that human behavior is causing the earth’s temperature to rise. 

Because Americans-- particularly those who display the greatest proficiency in science comprehension-- are less likely to disagree on whether there's scientific consensus than on whether human beings are causing global warming, it's not very compelling to think confusion about the former proposition is the "cause" of the latter.

But there is still a huge amount of polarization on whether there is scientific consensus on human-caused climate change.

Answers to these two questions -- are humans causing climate change? do scientists believe that? -- are still most plausibly viewed as being caused by a single, unobserved or latent disposition: namely, a general pro- or con- affective orientation toward "climate change" that reflects the social meaning positions on this issue have within a person's identity-defining affinity groups.

Or in other words, the questions "is human climate change happening" and "is there scientific consensus on human-caused climate change" both measure who a person is, politically speaking.
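(One way to probe that "single latent disposition" interpretation: treat the two items, along with political outlook, as indicators of one construct and check their scale reliability. A minimal sketch, with hypothetical standardized variables:)

```python
# Illustrative sketch: Cronbach's alpha for items hypothesized to measure
# one latent pro-/con- "climate change" disposition.
import numpy as np

def cronbach_alpha(item_matrix):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    X = np.asarray(item_matrix, dtype=float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

# e.g., with standardized belief-in-AGW, perceived-consensus, and (reversed)
# right-left outlook scores as columns (hypothetical variables):
# alpha = cronbach_alpha(np.column_stack([belief_z, consensus_z, -right_left_z]))
```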

That's a different thing from what members of the public know about climate science. To measure that requires a valid climate-science comprehension instrument.

The study in which we collected these data was a follow up of an earlier CCP-APPC one that featured the “Ordinary Climate Science Intelligence” assessment, or OCSI_1.0. 

The goal of OCSI_1.0 was to disentangle the measurement of “who people are”—the responses toward climate change that evince the affective stance toward climate change characteristic of their cultural group—from “what they know” about climate science. 

OCSI_1.0 met that goal. 

The current study is part of the effort to develop OCSI_2.0, the aim of which is to discern differences across a larger portion of the range of knowledge levels within the general population.

Here is how 600 subjects (U.S. adults drawn from a nationally representative panel) responded to some of the OCSI_2.0 candidate items. 

For me, these are the key points:

First, there’s barely any partisan disagreement over what climate scientists believe about the specific causes and consequences of human-caused climate change.

Sure, there’s some daylight between the response of the left-leaning and right-leaning respondents. But the differences are trivial compared to the ones in these same respondents’ beliefs about both the existence of climate change and the nature of scientific consensus.

There is “bipartisan” public consensus in perceptions of what climate scientists “know,” with minor differences only in the intensity with which respondents of opposing outlooks hold those particular impressions.

Second, ordinary members of the public, regardless of what they "believe" about human-caused climate change, know pitifully little about the basic causes and consequences of global warming.

Yes, a substantial majority of respondents, of diverse political views, know that climate scientists understand fossil-fuel CO2 emissions to be warming the planet, and that climate scientists expect rising temperatures to result in flooding in many regions.

But they also mistakenly believe that, “according to climate scientists, the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will increase the risk of leukemia” and “skin cancer in human beings,” and “reduce photosynthesis by plants.”

They think, incorrectly, that climate scientists have determined that “a warmer climate over the next few decades will increase water evaporation, which will lead to an overall decrease of global sea levels.”

“Republican” and “Democrat” alike also mistakenly attribute to “climate scientists” the proposition that “human-caused global warming has increased the number of tornadoes in recent decades,” a claim that Bill Nye “the science guy” believes but that actual climate scientists don’t, and in fact regularly criticize advocates for asserting every time a tornado kills dozens of people in one of the plains states.

Third, the overwhelming majority of ordinary citizens, regardless of their political persuasions, agree that climate scientists have concluded that global warming is putting human beings in grave danger.

The candidate OCSI_2.0 items (only a portion of which are featured here) form two scales.

When one counts up the number of correct responses, OCSI_2.0 measures how much people genuinely know about the basic causes and consequences of human-caused global warming.

Alternatively, when one counts up the number of responses, correct or incorrect, that evince a perception of the risks that human-caused climate change poses, OCSI_2.0 measures how dreaded climate change is as a societal risk.

No matter what they “believe” about human-caused climate change, very few people do well on the first, knowledge-based scale.

And no matter what they “believe” about human-caused climate change, the vast majority of them  score extremely high on the second, dreadedness scale.
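(In code terms, the two scoring rules look something like this. The item keys below are hypothetical stand-ins, not the actual OCSI_2.0 battery.)

```python
# Two scoring rules over the same true/false items (hypothetical keys).
def score_knowledge(responses, correct_key):
    """Count answers matching the scientifically correct response."""
    return sum(responses[i] == correct_key[i] for i in correct_key)

def score_dread(responses, dread_key):
    """Count answers evincing perceived risk, correct or not."""
    return sum(responses[i] == dread_key[i] for i in dread_key)

correct_key = {"co2_warms": True, "skin_cancer": False, "photosynthesis_down": False}
dread_key   = {"co2_warms": True, "skin_cancer": True,  "photosynthesis_down": True}

resp = {"co2_warms": True, "skin_cancer": True, "photosynthesis_down": True}
print(score_knowledge(resp, correct_key), score_dread(resp, dread_key))
# -> 1 3: low knowledge, maximal dread -- the modal pattern described above
```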

None of this should come as a surprise. This is exactly the state of affairs revealed by OCSI_1.0.

Now in fact, one might think that it’s perfectly fine that ordinary citizens score higher on the “climate change dreadedness” scale than they do on the “climate change science comprehension” one.  Ordinary citizens only need to know the essential gist of what climate scientists are telling them--that global warming poses serious risks that threaten things of value to them, including the health and prosperity of themselves and others; it’s those whom ordinary citizens charge with crafting effective solutions (ones consistent with the democratic aggregation of diverse citizens' values) who have to get all the details straight.

The problem though is that democratic political discourse over climate change (in most but not all places) doesn’t measure either what ordinary people know or what they feel about climate change.

It measures what the item on “belief in” climate change does: who they are, whose side they are on, in an ugly, pointless, cultural status competition being orchestrated by professional conflict entrepreneurs.

The “science communication problem” for climate change is how to steer the national discussion away from the myriad actors-- all of them--whose style of discourse creates these antagonistic social meanings.

“97% consensus” social marketing campaigns (studies with only partially and misleadingly reported results notwithstanding)  aren’t telling ordinary Americans on either side of the “climate change debate” anything they haven't already heard & indeed accepted: that climate scientists believe human-caused global warming is putting them in a position of extreme peril.

All the "social marketing" of "scientific consensus" does is augment the toxic idioms of contempt that are poisoning our science communication environment. 

The unmistakable social meaning of the material featuring this "message" (not to mention the cultural conflict bottom-feeders who make a living "debating" this issue on talk shows) is that "you and people who share your identity are morons." It's not "science communication"; it's a clownish bumper sticker that says, "fuck you."

It is precisely because of the assaultive, culturally partisan resonances that this "message" conveys that people respond to the question "is there scientific consensus on global warming?" by expressing who they are  rather than what they know about climate change risks. 

More on that “tomorrow.”

 

Monday
Feb 8, 2016

As their science comprehension increases, do members of the public (a) become more likely to recognize scientific consensus exists on human-caused climate change; (b) become more politically polarized on whether human-caused climate change is happening; or (c) both?!

So CCP and the Annenberg Public Policy Center just conducted a humongous and humongously cool study on climate science literacy. There’s shitloads of cool stuff in the data!

The study is a follow up to an earlier CCP/APPC study, which investigated whether it is possible to disentangle what people know about climate science from who they are.

“Beliefs” about human-caused global warming are an expression of the latter, and are in fact wholly unconnected to the former.  People who say they “don’t believe” in human-caused climate change are as likely (which is to say, extremely likely) to know that human-generated CO2 warms the earth’s atmosphere as are those who say they do “believe in” human-caused climate change.

They are also both as likely-- which is to say again, extremely likely--to harbor comically absurd misunderstandings of climate science: e.g., that human-generated CO2 emissions stifle photosynthesis in plants, and that human-caused global warming is expected to cause epidemics of skin cancer.

In other words, no matter what they say they “believe” about climate change, most Americans don’t really know anything about the rudiments of climate science.  They just know -- pretty much every last one of them--that climate scientists believe we are screwed.

The small fraction of those who do know a lot—who can consistently identify what the best available evidence suggests about the causes and consequences of human-caused climate change—are also the most polarized in their professed “beliefs” about climate change.

Interesting.

The central goal of this study was to see what “belief in scientific consensus” measures—to see how it relates to both knowledge of climate science and cultural identity.

I’ll get to what we learned about that "tomorrow."

But today I want to show everybody something else that surprised the bejeebers out of me.

Usually when I & my collaborators do a study, we try to pit two plausible but mutually inconsistent hypotheses against each other. I might expect one to be more likely than the other, but I don’t expect anyone including myself to be really “surprised” by the study outcome, no matter what it is. 

Many more things are plausible than are true, and in my view, extricating the latter from the sea of the former—lest we drown in a sea of “just so” stories—is the primary mission of empirical studies.

But still, now and then I get whapped in the face by something I really didn’t see coming!

This finding is like that.

But to set it up, here's a related finding that's  interesting but not totally shocking.

It’s that the association between identity and perceptions of scientific consensus on climate change, while plenty strong, is not as strong as the association between identity and “beliefs” in human-caused climate change.

This means that  “left-leaning” individuals—the ones predisposed to believe in human-caused climate change—are more likely to believe in human caused climate change than to believe there is scientific consensus, while the right-leaning ones—the ones who are predisposed to be skeptical—are more likely to believe that there is scientific consensus that humans are causing climate change than to actually “believe in” it themselves.

Interesting, but still not mind-blowing.

Here’s the truly shocking part:

Got that?

First, as science comprehension goes up, people become more polarized on climate change.

Still not surprising; that’s old, old, old,  old news.

But second, as science comprehension goes up, so does the perception that there is scientific consensus on climate change—no matter what people’s political outlooks are!

Accordingly, as relatively “right-leaning” individuals become progressively more proficient in making sense of scientific information (a facility reflected in their scores on the Ordinary Science Intelligence assessment, which puts a heavy emphasis on critical reasoning skills), they become simultaneously more likely to believe there is “scientific consensus” on human-caused climate change but less likely to “believe” in it themselves! 
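(Schematically, the finding is a contrast between two interactions. A minimal sketch on synthetic data, with hypothetical variable names, built to mimic the reported signature:)

```python
# Illustrative only: synthetic data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "osi": rng.normal(size=n),         # Ordinary Science Intelligence score
    "right_left": rng.normal(size=n),  # political outlook composite
})
# "Belief" polarizes with comprehension; perceived consensus rises with
# comprehension for left- and right-leaning respondents alike.
p_belief = 1 / (1 + np.exp(-(0.2 - df.right_left * (1 + df.osi))))
p_consen = 1 / (1 + np.exp(-(0.5 + 0.8 * df.osi - 0.6 * df.right_left)))
df["believes_agw"] = (rng.random(n) < p_belief).astype(int)
df["sees_consensus"] = (rng.random(n) < p_consen).astype(int)

m1 = smf.logit("believes_agw ~ osi * right_left", data=df).fit(disp=False)
m2 = smf.logit("sees_consensus ~ osi * right_left", data=df).fit(disp=False)
# Signature: large negative osi:right_left term for belief; near-zero for consensus
print(m1.params["osi:right_left"], m2.params["osi:right_left"])
```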

Whoa!!! What gives??

I dunno.

One thing that is clear from these data is that it’s ridiculous to claim that “unfamiliarity” with scientific consensus on climate change “causes” non-acceptance of human-caused global warming.

But that shouldn’t surprise anyone. The idea that public conflict over climate change persists because, even after years and years of “consensus messaging” (including a $300 million social-marketing campaign by Al Gore’s “Alliance for Climate Protection”), ordinary Americans still just “haven’t heard” yet that an overwhelming majority of climate scientists believe in AGW is patently absurd. 

(Are you under the impression that there are studies showing that telling someone who doesn't believe in climate change that “97% of scientists accept AGW” will cause him or her to change positions?  No study has ever found that, at least with a US general public sample.  All that the studies in question show -- once the mystifying cloud of meaningless path models & 0-100 "certainty level" measures has been dispelled -- is that immediately after being told that “97% of climate scientists believe in human-caused climate change,” study subjects will compliantly spit back a higher estimate of the percentage of climate scientists who accept AGW.  You wouldn't know it from reading the published papers, but the experiments actually didn’t find that the “message” changed the proportion of subjects who said they “believe in" human-caused climate change....)

These new data, though, show that acceptance of “scientific consensus” in fact has a weaker relationship to beliefs in climate change in right-leaning members of the public than it does in left-leaning ones. 

That I just didn’t see coming.

I can come up w/ various “explanations,” but really, I don’t know what to make of this! 

Actually, in any good study the ratio of “weird new puzzles created” to “existing puzzles (provisionally) solved” is always about 5:1. 

That’s great, because it would be really boring to run out of things to puzzle over.

And it should go without saying that learning the truth and conveying it (all of it) accurately are the only way to enable free, reasoning people to use science to improve their lives.

Friday
Feb 5, 2016

Is the controversy over climate change a "science communication problem?" Jon Miller's views & mine too (postcard from NAS session on "science literacy & public attitudes toward science")

Gave presentation yesterday before the National Academy of Sciences Committee that is examining the relationship between "public science literacy" & "public attitudes toward science."  It's really great that NAS is looking at these questions & they've assembled a real '27 Yankees quality lineup of experts to do it.

Really cool thing was that Jon Miller spoke before me & gave a masterful account of the rationale and historical development of the "civic science literacy" philosophy that has animated the NSF Indicators battery.

There was zero disagreement among the presenters-- me & Miller, plus Philip Kitcher, who advanced an inspiring Deweyan conception of science literacy --that the public controversy over climate science is not grounded in a deficit in public science comprehension.

It's true that the public doesn't know very much (to put it mildly) about the rudiments of climate science. But that's true of those on both sides, and true too in the myriad areas of important, decision-relevant science in which there is no controversy and in which the vast majority of ordinary citizens nevertheless recognize and make effective use of the best available evidence.

Strikingly, Miller stated "the climate change controversy is not a 'science communication' problem; it's a political problem."

I think I agree but would put matters differently.  

Miller was arguing that enduring conflict is not a result of the failure of scientists or anyone else to communicate the underlying information clearly but a result of the success of political actors in attaching identity-defining meanings to competing positions, thereby creating social & psychological dynamics that predictably motivate ordinary citizens to fit their beliefs to those that predominate within their political groups.

That's the right explanation, I'd say, but for me this state of affairs is still a science communication problem.  Indeed, the entanglement of facts that admit of scientific inquiry & antagonistic social meanings --ones that turn positions on them into badges of group membership & identity-- is the "science communication problem" for liberal democratic societies.  Those meanings, I've argued, are a form of "science communication environment pollution," the effective avoidance and remediation of which is one of the central objects of the "science of science communication."

I think the only thing at stake in this "disagreement" is how broadly to conceive of "science communication." Miller, understandably, was using the term to describe a discrete form of transaction in which a speaker imparts information about science to a message recipient; I have in mind the less familiar notion of "science communication" as the sum total of processes, many of which involve the tacit, orienting influence of social norms, that serve to align individual decisionmaking with the best available evidence, the volume of which exceeds the capacity of ordinary individuals to even articulate, much less deeply comprehend. 

But that doesn't mean it exceeds their capacity to use that evidence, & in a rational way by effectively exercising appropriately calibrated faculties of recognition that help them to discern who knows what about what.  It's that capacity that is disrupted, degraded, rendered unreliable, by the science-communication environment pollution of antagonistic social meanings.

I doubt Miller would disagree with this.  But I wish we'd had even more time so that I could have put the matter to him this way to see what he'd say! Kitcher too, since in fact the relationship of public science comprehension to democracy is the focus of much of his writing.

Maybe I can entice one or the other or both into doing a guest blog, although in fact the 14 billion member audience for this blog might be slightly smaller than the ones they are used to addressing on a daily basis. 

Thursday
Jan282016

CCP/Annenberg PPC Science of Science Communication Lab, Session 2: Measuring relative curiosity

During my stay here at APPC, we'll be having weekly "science of science communication lab" meetings to discuss our ongoing research projects.  I've decided to post a highlight or two from each meeting.

We just had the 2nd, which means I'm one behind.  I'll post the "session 1 highlight" "tomorrow."

One of the major projects for the spring is "Study 2" in the CCP/APPC Evidence-based Science Filmmaking Initiative.  For this session, we hosted two of our key science filmmaker collaborators, Katie Carpenter & Laura Helft, who helped us reflect on the design of the study.

One thing that came up during the session was the distribution of “science curiosity” in the general population.

The development of a reliable and valid measure of science curiosity—the “Science Curiosity Scale” (SCS_1.0)—was one of the principal objectives of Study 1.  As discussed previously, SCS worked great, not only displaying very healthy psychometric properties but also predicting with an admirable degree of accuracy engagement with a clip from Your Inner Fish, ESFI collaborator Tangled Bank Studios’ award-winning film on evolution.

Indeed, one of the coolest findings was that individuals who were comparably high in science curiosity (as measured by SCS) were comparably engaged by the clip (as measured by view time, request for the full documentary, and other indicators) irrespective of whether they said they “believed in” evolution.

Evolution disbelievers who were high in science curiosity also reported finding the clip to be an accurate and convincing account of the origins of human color vision.

But it’s natural to wonder: how likely is someone who disbelieves in evolution to be high in science curiosity?

The report addresses the distribution of science curiosity among various population subgroups.  The information is presented in a graphic that displays the mean SCS scores for opposing subgroups (men and women, whites and nonwhites, etc).

Scores on SCS (computed using Item Response Theory) are standardized. That is, the scale has a mean of 0, and units are measured in standard deviations.

The graphic, then, shows that in no case was any subgroup’s mean SCS score higher or lower than 1/4 of a standard deviation from the sample mean on the scale. The Report suggested that this was a reason to treat the differences as so small as to lack any practical importance.

Indeed, the graphic display was consciously selected to help communicate that.  Had the Report merely characterized the scores of subgroups as “significantly different” from one another, it would have risked provoking the Pavlovian form of inferential illiteracy that consists in treating “statistically significant” as in itself supporting a meaningful inference about how the world works -- a reaction that is very, very hard to deter no matter how hard one tries.

By representing the scores of the opposing groups in relation to the scale's standard-deviation units on the y-axis, it was hoped that reflective readers would discern that the differences among the groups were indeed far too small to get worked up over—that all the groups, including the one whose members were above average in science comprehension (as measured by the Ordinary Science Intelligence assessment), had science curiosity scores that differed only trivially from the population mean (“less than 1/4 of a standard deviation--SEE???”).

But as came up at the session, this graphic is pretty lame.

Even most reflective people don’t have good intuitions about the practical import of differences expressed in fractions of a standard deviation.   Aside from being able to see that there's not even a trace of difference between whites & nonwhites, readers can still see that there are differences in science curiosity levels & still wonder exactly what they mean in practical terms.

So what might work better?

Why—likelihood ratios, of course! Indeed, when Katy Barnhart from APPC spontaneously (and adamantly) insisted that this would be a superior way to graph this data, I was really jazzed!

I’ve written several posts in the last yr or so on how useful likelihood ratios are for characterizing the practical or inferential weight of data.  In the previous posts, I stressed that LRs, unlike “p-values,” convey information on how much more consistent the observed data is with one rather than another competing study hypothesis.

Here LRs can aid practical comprehension by telling us the relative probabilities of observing members of opposing groups at any particular level of SCS.

In the graphics below, the distribution of science curiosity within opposing groups is represented by probability density distributions derived from the means and standard deviations of the groups’ SCS scores. 

As discussed in previous posts, study hypotheses can be represented this way: because any study is subject to measurement error, a study hypothesis can be converted into a probability density distribution of "predicted study outcomes," in which the mean is the predicted result and the standard deviation is the standard error associated with the measurement precision of the study instrument. 

If one does this, one can determine the “weight of the evidence” that a study furnishes for one hypothesis relative to another by comparing how likely the observed study result was under each of the probability-density distributions of “predicted outcomes” associated with the competing hypotheses.

This value—which is simply the relative “heights” of the points on which the observed value falls on the opposing curves—is the logical equivalent of the Bayesian likelihood ratio, or the factor in proportion to which one should update one’s existing assessment of the probability of some hypothesis or proposition.

Here, we can do the same thing.  We know the mean and standard deviations for the SCS scores of opposing groups.  Accordingly, we can determine the relative likelihoods of members of opposing groups attaining any particular SCS score. 
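For readers who want to try this at home, here's a minimal sketch, in Python, of the "relative heights" computation just described. The group means & SDs below are purely illustrative stand-ins -- the real ones come from the study data, not from anything here:

```python
# Minimal sketch: likelihood ratio as the ratio of two normal densities'
# heights at a given SCS score. Group parameters are HYPOTHETICAL
# stand-ins, not the study's actual estimates.
from scipy.stats import norm

mu_a, sd_a = 0.15, 1.0    # e.g., evolution "believers" (illustrative)
mu_b, sd_b = -0.15, 1.0   # e.g., evolution "nonbelievers" (illustrative)

score = norm.ppf(0.90)    # SCS score at the population 90th percentile
                          # (SCS is standardized: mean 0, SD 1)

# Relative probability of observing that score in group A vs. group B:
lr = norm.pdf(score, mu_a, sd_a) / norm.pdf(score, mu_b, sd_b)
print(f"LR at the 90th percentile: {lr:.2f}")
```

Swap in a pair of estimated group means & SDs and the printed value is the likelihood ratio for that comparison.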

An SCS score that places a person at the 90th percentile is about 1.7x more likely if someone is “above average” in science comprehension (measured by the OSI assessment) than if someone is below average. 

There is a 1.4x greater chance that a person will score at the 90th percentile if that person is male rather than female, and a 1.5x greater chance that the person will do so if he or she has political outlooks to the "left" of center rather than the "right" on a scale that aggregates responses to a 5-point liberal-conservative ideology item and a 7-point party-identification item.

There is a comparable relative probability (1.3x) that a person will score in the 90th percentile of SCS if he or she is below average rather than above average in religiosity (as measured by a composite scale that combines response to items on frequency of prayer, frequency of church attendance, and importance of religion in one’s life).

A 90th-percentile score is about 2x as likely to be achieved by an “evolution believer” than by an “evolution nonbeliever.” 

Accordingly, if we started with two large, equally sized groups of believers and nonbelievers and it just so turned out that there were 100 total from the two groups who had SCS scores in the 90th percentile for the general population, then we’d expect 66 to be evolution believers and 33 of them to be nonbelievers (1 would be a Pakistani Dr).

When I put things this way, it should be clear that knowing how much more likely any particular SCS score is for members of one group than members of another doesn’t tell us either how likely any group's members are to attain that score or how likely a person with a particular score is to belong to any group!

You can figure that out, though, with Bayes’s Theorem. 

If I picked out a person at random from the general population, I'd put the odds at about 11:9 that he or she "believes in" evolution, since about 45% of the population answers "false" when responding to the survey item "Human beings, as we know them, evolved from another species of animal," the evolution-belief item we used.

If you told me the person was in the 90th percentile of SCS, I'd then revise upward my estimate by a factor of 2, putting the odds that he or she believes in evolution at 22:9, or about 70%.

Or if I picked someone out at random from the population, I’d expect the odds to be 9:1 against that person scoring in the 90th percentile or higher. If I learned the individual was above average in science comprehension, I’d adjust my estimate of the odds upwards to 9:1.7 (about 16%); similarly, if I learned the individual was below average in science comprehension, I’d adjust my estimate downwards to 15.3:1 (about 6%).

Actually, I’d do something slightly more complicated than this if I wanted to figure out whether the person was in the 90th percentile or above.  In that case, I’d in fact start by calculating not the relative probability of members of the two groups scoring in the 90th percentile but the relative probability of them scoring in the top 10% on SCS, and use that as my likelihood ratio, or the factor by which I update my prior of 9:1. But you get the idea -- give it a try!
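Here's a sketch of that updating arithmetic in Python -- both the simple "factor of 2" update and the tail-probability refinement just described. Again, the group means are illustrative stand-ins, so don't treat the outputs as the study's estimates:

```python
# Bayesian updating with likelihood ratios. Group parameters below are
# HYPOTHETICAL stand-ins, not the study's actual estimates.
from scipy.stats import norm

# (1) Prior odds that a random person "believes in" evolution: 55:45, per
# the survey item discussed above. Update by the ~2x LR for a
# 90th-percentile SCS score:
prior_odds = 55 / 45
posterior_odds = prior_odds * 2
print(posterior_odds / (1 + posterior_odds))   # ~0.71, i.e., about 70%

# (2) Tail-probability version: chance of a score in the TOP 10% of SCS,
# given above- vs. below-average science comprehension.
mu_hi, mu_lo = 0.15, -0.15                # illustrative group means
cutoff = norm.ppf(0.90)                   # population 90th-percentile score
lr_tail = norm.sf(cutoff, mu_hi, 1) / norm.sf(cutoff, mu_lo, 1)

prior_odds = 0.10 / 0.90                  # 9:1 against, as odds in favor
post_odds = prior_odds * lr_tail          # updated odds of a top-10% score
print(post_odds / (1 + post_odds))        # exact value depends on stand-ins
```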

So, then, what to say?

I think this way of presenting the data does indeed give more guidance to a reflective person to gauge the relative frequency of science curious individuals across different groups than does simply reporting the mean SCS scores of the group members along with some measure of the precision of the estimated means—whether a “p-value” or a standard error or a 0.95 CI.

It also equips a reflective person to draw his or her own inferences as to the practical import of such information.

I myself still think the differences in the science curiosity of members of the indicated groups, including those who do and don’t believe in evolution, are not particularly large and definitely not practically meaningful.

But actually, after looking at the data, I do feel that there's a bigger disparity in science curiosity than there should be among citizens who do & don't "believe in" evolution.  A bigger one than there should be among men & women too.  Those differences, even though small, make me anxious that there's something in the environment--the science communication environment--that might well be stifling development of science curiosity across groups.

No one is obliged to experience the wonder and awe of what human beings have been able to learn through science!

But everyone in the Liberal Republic of Science deserves an equal chance to form and satisfy such a disposition in the free exercise of his or her reason.

Obliterating every obstacle that stands in the way of culturally diverse individuals achieving that good is the ultimate aim of the project of which ESFI is a part.

Tuesday
Jan262016

Is the HPV vaccine still politically "hot"? You tell me....

Some more data from the latest CCP/Annenberg Public Policy Center "science of science communication" study.

I was curious, among other things, about what the current state of political divisions might be on the risk of the HPV vaccine.

At one point—back in 2006-10, I’d say—the efficacy and safety of the vaccine was indeed culturally contested.

The public was polarized; and state legislatures across the nation ended up rejecting the addition of the vaccine to the schedule of mandatory vaccinations for school enrollment, the first (and only) time that has happened (on that scale) for a vaccine that the CDC had added to the schedule of recommended universal childhood immunizations.

I’ve discussed the background at length, including the decisive contribution that foreseeable, avoidable miscues in the advent of the vaccine made to this sad state of affairs.

I was wondering, though, if things had cooled off.

There is still low HPV uptake. But it’s unclear what the cause is.

Maybe the issue is still a polarizing one.

But even without continuing controversy one would expect rates to be lower insofar as the vaccine still isn’t mandatory outside of DC, Virginia and (recently) Rhode Island.

In addition, there’s reason to believe that pediatricians are gun-shy about recommending the vaccine b/c of their recollection of getting burned when the vaccine was introduced.  Their reticence might have outlived the public’s ambivalence, and might now be the source of lower-than-optimal coverage.

So I plotted perceptions of various risks, measured with the Industrial Strength Risk Perception Measure, in relation to right-left political outlooks.

I put the biggies—global warming, and fracking (plus terrorism, since I mentioned that yesterday and the issue generated some discussion)--in for comparison.

Also, childhood vaccinations, which, as I've discussed in the past, do not generate a particularly meaningful degree of polarization.
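For concreteness, here's a minimal sketch of how one might draw such a chart. Everything in it -- the variable names, the simulated responses, the slopes -- is a made-up stand-in for the actual survey data:

```python
# Sketch of plotting ISRPM risk perceptions against a right-left outlook
# scale. Data are SIMULATED; slopes are illustrative stand-ins only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
left_right = rng.normal(0, 1, 500)   # composite ideology/party-id scale

# Hypothetical slopes: more negative = right-leaning perceive less risk
risks = {"GWARMING": -0.6, "FRACKING": -0.4, "HPV": -0.15, "VACRISK": -0.05}

fig, ax = plt.subplots()
xs = np.linspace(-2.5, 2.5, 100)
for label, slope in risks.items():
    y = 3.5 + slope * left_right + rng.normal(0, 1, 500)  # 0-7 ISRPM-like
    ax.plot(xs, np.polyval(np.polyfit(left_right, y, 1), xs), label=label)
ax.set_xlabel("Left-right political outlooks (z-score)")
ax.set_ylabel("ISRPM risk perception")
ax.legend()
plt.show()
```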

So what to make of this?

Obviously HPV is much less polarizing than the “biggies.”

But the degree of division on HPV doesn’t strike me, at least, as trivial.

Political division on the risks posed by other childhood vaccines is less intense, and still trivial or pretty close to it, particularly insofar as risk is perceived as “low” pretty much across the spectrum.  In truth, though, it strikes me as a tad bigger than what I’ve observed in the past (that’s worrisome. . . .).

But that’s all I have to say for now!

What do others think?

Here, btw, are the wordings for the ISRPM items: 

TERROR. Terrorist attacks inside the United States

FRACKING. “Fracking” (extraction of natural gas by hydraulic fracturing)

VACRISK. Vaccination of children against childhood diseases (such as mumps, measles and rubella)

HPV. Vaccinating adolescents against HPV (the human papillomavirus)

GWARMING. Global warming

Sunday
Jan242016

Weekend update: OMG-- we are now as politically polarized over cell phone radiation as over GM food risks!!!

Some "Industrial Strength Risk Perception Measure" readings from CCP/Annenberg Public Policy Center study administered this month: 

Click for bigger, 3d, virtual reality view!

Interesting but not particularly surprising that polarization over the risk associated with unlawful entry of immigrants rivals that on global warming, which has abated recently about as much as the pumping of CO2 into the atmosphere.

Interesting but not surprising to learn (re-learn, actually) that it's nonsense to say Americans are "more afraid of terrorism than climate change b/c the former is more dramatic, emotionally charged" etc. That trope, associated with the "take-heuristics-and-biases-add-water-and-stir" formula of "instant decision science," reflects a false premise: those predisposed to worry about climate change do in fact see the risk it poses as bigger than that posed by domestic terrorism.

And completely boring at this point to learn for the 10^7th time that there is no political division over GM food risk in the general public, despite the constant din in the media and even some academic commentary to this effect.  

Consider this histogram:

The flatness of the distribution is the signature of the sheer noise associated with responses to GM food survey questions, the administration of which, as discussed seven billion times in this blog (once for every two regular blog subscribers!), is an instance of classic "non-opinion" polling. 

Ordinary Americans--the ones who don't spend all day reading and debating politics (99% of them)-- just don't give GM food any thought.  They don't know what GM technology is, that it has been a staple of US agricultural production for decades, and that it is in 80% of the foodstuffs they buy at the market.  

They don't know that the House of Reps passed a bipartisan bill to preempt state-labelling laws, which special-interest groups keep unsuccessfully sponsoring in costly state referenda campaigns, and that the Senate will almost surely follow suit, presenting a bill that University of Chicago Democrat Barack Obama will happily sign w/o more than 1% of the U.S. population noticing (a lot of commentators don't even seem to realize how close this non-issue is to completely disappearing).

Why the professional conflict entrepreneurs have failed in their effort to generate in the U.S. the sort of public division over GM foods that has existed for a long time in Europe is really an interesting puzzle.  It's much more interesting to try to figure out hypotheses for that & test them than to engage in a make-believe debate about why the public is "so worried" about them!

But neither that interesting question nor the boring, faux "public fear of GM foods" question was the focus of the CCP/APPC study.

Some other really cool things were.

Stay tuned!

Sunday
Jan172016

Status report on temporary CCP Lab relocation

We (my chief co-analyst & I) have arrived & resumed operations.  

A short photojournal of our relocation process:

1. Travelling (in custom-designed unit to avoid annoying paparazzi)

2. Wrestling w/ research problem in new work space

3. Taking a short break ....

Friday
Jan152016

"I'm going to Jackson, I'm gonna mess around... " Well, Philly, actually

As of today, and until the end of the academic yr, I will be at the Annenberg Public Policy Center at the University of Pennsylvania as a resident scholar in their amazing & inspiring Science of Science Communication project. 

Promise to write often!

Thursday
Jan142016

"Evidence-based Science Filmmaking Initiative," Rep. No. 1: Overview & Conclusions

In the last couple of posts (one on evolution believers' & nonbelievers' engagement with an evolution-science documentary, and another on measuring "science curiosity") I've summarized some of the findings from Study No. 1 of the Annenberg/CCP ESFI--"Evidence-based Science Filmmaking Initiative."

Those findings are described in more detail in a study Report, which also spells out the motivation for the study and its relation to ESFI overall. 

Indeed, the Report is an unusual document--or at least an unusual sort of document to share. 

It isn't styled as announcing to the world the "corroboration" or "refutation" of some specified set of hypotheses.  It is an internal report prepared for consumption of the investigators in an ongoing research project -- one that is in fact at a very preliminary stage!

Why release something like that?  Well, in part because even at this point in the investigation, we do think there are things to report that will be of interest to other scholars and reflective people generally, many of whom can be counted on to supply us w/ feedback that will itself make what we do next even more useful.

But in addition, one of the aims of the project, beyond generating evidence relevant to questions of interest to professional science filmmakers, is to model the process of using evidence-based methods to answer those very questions.

As explained in the ESFI "main page," the project is itself meant to supply evidence relevant to the hypothesis that the methods distinctive of the science of science communication can make a positive contribution to the craft of science filmmaking by furnishing those engaged in it with the information relevant to the exercise of their professional judgment. 

Of course, those engaged in ESFI, including its professional science communication members, believe (with varying levels of confidence!) that the science of science communication can in fact make such a contribution; but of course, too, others, including other professional science filmmakers, are likely to disagree with this conjecture.

I wouldn't say "no point arguing about it" just b/c reasonable, and informed, people can disagree.

But I would say that these are exactly the conditions in which the argument will proceed in a more satisfactory way with additional information of the sort that can be generated by science's signature methods of disciplined observation, reliable measurement, and valid inference.

Hence ESFI: Let's do it -- and see what a collaboration between professional science filmmakers and allied communicators, on the one hand, and "scientists of science communication," on the other, produces.  Then, on the basis of that evidence, those who are involved in science filmmaking can use their own reason to judge for themselves what that evidence signifies, and update accordingly their assessments of the utility of integrating the science of science communication into the craft of science filmmaking (not to mention related forms of science communication, like science journalism).

Precisely b/c the Report is an internal research document that takes stock of early findings in a multi-stage project, it furnishes a glimpse of the project in action.  It thus gives those who might consider using such methods a chance to form a more concrete picture of what these practices look like, and a chance to use their own experience-informed imaginations to assess what they might do if they could add evidence-based methods to their professional tool kits.

But of course this is only the start-- only the first Report, both of results and of the experience of doing evidence-based filmmaking.

A. Overview and summary conclusions

This report summarizes the preliminary conclusions of Study No. 1 in the Annenberg/CCP “Evidence-based Science Filmmaking Initiative.” The goal of the initiative is to promote the integration of the emerging science of science communication into the craft of science filmmaking. Study No. 1 involved an exploratory investigation of viewer engagement with an excerpt from Your Inner Fish, a documentary on human evolution.

The study had two objectives.

One was to gather evidence relevant to an issue of debate among science filmmakers: what explains the perceived demographic homogeneity of the audience for high-quality documentaries featured on NOVA, Nature, and similar PBS shows? Is the answer the distribution of tastes for learning about scientific discovery in the general population, or instead some feature of those shows collateral to their science content that makes them uncongenial to individuals who subscribe to certain cultural styles?

The other study objective was to model how evidence-based methods could be used by science filmmakers. Hard questions—ones for which the number of plausible answers exceeds the number of correct ones—are endemic to the activity of producing science films. By testing competing conjectures on an issue of consequence to their craft, Study No. 1 illustrates how documentary producers might use empirical methods to enlarge the stock of information pertinent to the exercise of their professional judgment in answering such questions.

Principal conclusions of Study No. 1 include:

1. By combining appropriately subtle self-report items with behavioral and performance-based ones, it is possible to construct a valid scale for measuring individuals’ general motivation to consume information about scientific discovery for personal satisfaction. Desirable properties of the “Science Curiosity Scale” (SCS) include its high degree of measurement precision, its appropriate relationship with science comprehension and other pertinent covariates, and (most importantly) its power to predict meaningful differences in objective manifestations of science curiosity.

2. By similar means, one can construct a satisfactory scale for measuring viewer engagement with material such as that featured in the YIF clip. Such a scale was again formed by combining self-report and objective measures, including duration of viewing time and requested access to the remainder of the documentary. Designated the “Engagement Index” (EI), the scale had the expected relationships with education and general science comprehension. The strongest predictor of EI was the study subjects’ SCS scores.

3. Engagement with the clip did not vary to a meaningful degree among subjects who had comparable SCS scores but opposing “beliefs” about human evolution. Evolution “believers” and “nonbelievers” with high SCS scores formed comparably positive reactions to the YIF clip. The show didn’t “convert” the latter. But like “believers” with high SCS scores, high-scoring “nonbelievers” were very likely to accept the validity of the science featured in the clip. This finding is consistent with research suggesting that professions of “disbelief” in evolution are an indicator of cultural identity that poses no barrier to engagement with scientific information on evolution, so long as that information itself avoids mistaking the exacting of professions of “belief” for the communicating of knowledge.

4. Engagement with the show did vary across culturally identifiable groups. The members of one cultural group -- one distinguished in part by its pro-technology attitudes -- appeared to display less engagement with the clip than was predicted by their SCS scores. This finding furnishes at least some support for the conjecture that some fraction of the potential audience for science documentary programming is discouraged from viewing it by uncongenial cultural meanings collateral to the science content of such programming.

5. But additional, more fine-grained analysis of the data is necessary. In particular, the science-communication-professional members of the research team must formulate concrete, alternative hypotheses about the identity of culturally identifiable groups who might well be responding negatively to collateral cultural meanings in the clip. Those hypotheses can in turn be used by the science-of-science-communication team members to develop more fine-tuned cultural profiles that can be used to probe such conjectures.

6. Depending on the results of these additional analyses, next steps would include experimental testing that seeks to modify collateral meanings or cues in a manner that eliminates any disparity in engagement among individuals of diverse cultural identities who share a high level of curiosity about science.

Wednesday
Jan132016

"SCS_1.0": Measuring science curiosity

 Yesterday, I discussed how evolution "believers" and "nonbelievers" reacted to a cool evolution-science  documentary. The data I described came from Study No. 1 of the Annenberg Public Policy Center/CCP "Evidence-based Science Filmmaking Initiative" (ESFI).

That data suggested that "belief" in evolution wasn't nearly as important to engagement with the documentary (Your Inner Fish, an award-winning film produced by ESFI collaborator Tangled Bank Studios) as was science curiosity.

Today I'll say a bit more about how we measured science curiosity.

Developing a valid and reliable science curiosity scale was one of the principal aims of Study No. 1.  As conceptualized here, science curiosity is not a simple transient state (Loewenstein 1994) but instead a general disposition, variable in intensity across persons, that reflects the motivation to seek out and consume scientific information for personal pleasure.

Obviously, a measure of this disposition would furnish science journalists, science filmmakers, and related science-communication professionals with a useful tool for perfecting the appeal of their work to those individuals who value it the most. But it could also make myriad other contributions to the advancement of knowledge. 

A valid science curiosity measure could be used to improve science education, for example, by facilitating investigation of the forms of pedagogy most likely to promote its development and harness it to promote learning (Blalock, Lichtenstein, Owen & Pruski 2008). Those who study the science of science communication (Fischhoff & Scheufele 2013; Kahan 2015) could also use a science curiosity measure to deepen their understanding of how public interest in science shapes the responsiveness of democratically accountable institutions to policy-relevant evidence.

Indeed, the benefits of measuring science curiosity are so numerous and so substantial that it would be natural to assume researchers must have created such a measure long ago.  But the simple truth is that they have not. 

“Science interest” measures abound. But every serious attempt to assess their performance has concluded that they are psychometrically weak and, more important, not genuinely predictive of what they are supposed to be assessing—namely, the disposition to seek out and consume scientific information for personal satisfaction (Blalock et al. 2008; Osborne, Simon & Collins 2003).

ESFI assumptions: "1. Mathematics is the language of nature. 2. Everything around us can be represented and understood through numbers. 3. If you graph these numbers, patterns emerge. Therefore: ..." click on this, or you are a big fat loser!!!!!

ESFI’s “Science Curiosity Scale 1.0” (SCS_1.0) is an initial step toward filling this gap in the study of science communication.  The items it comprises, and the process used to select (and combine) them, self-consciously address the defects in existing scales.

One of these is the excessive reliance on self-report measures. Existing scales relentlessly interrogate the respondents on the single topic of their own attraction to or aversion toward information on scientific discovery: “I am curious about the world in which we live,” “I find it boring to hear about new ideas,” “I get bored when watching science programs on TV,” etc.  Items like these are well-known to elicit responses that exaggerate respondents’ possession of desirable traits or attributes.

To counteract this dynamic, SCS_1.0 disguises its objectives by presenting itself as a general “marketing” survey.

Individual self-report items relating specifically to science were thus embedded in discrete blocks or modules, each consisting of ten or more items relating to an array of “topics” that “some people are interested in, and some people are not.” Items were presented in random order, each on a separate screen. 

There was thus no reason for subjects to suspect that their motivation to learn about science was of particular interest, nor any opportunity for them to adjust the responses across items in a manner that overstated their interest in it.  A similar strategy was used to gather information on behavior reflecting such an interest, including visits to science museums, attendance at public science lectures, and the reading of books on scientific discovery.

SCS_1.0 also featured an objective performance measure. 

Well into the survey, subjects were advised that we were interested in their reactions to a news story “of interest” to them.  To assure that the story was one that in fact matched their interests, they were furnished with discrete news story sets, the shared subject matter of which was identified by a header and reinforced by the individual story headlines and graphics. One set consisted of science stories; the others, of stories on popular entertainment, sports, and financial news. 

Subjects, we anticipated, were likely to find the prospect of reading a story and answering questions about it burdensome.  Accordingly, the selection of the science set rather than one of the others would be a valid indicator of genuine science interest. Responses to this task were then used to validate the self-reported interest items, helping to furnish assurance of the genuineness of the latter.

When combined, the items displayed the requisite psychometric properties of a valid and reliable scale.  Their unidimensional covariance structure warranted the inference that they were measuring the same latent disposition.  Formed with item response theory, the composite scale weighted particular items in relation to the level of the disposition that responses to them evinced. The result was an index—SCS_1.0—that reflected a high degree of measurement precision along the entire population distribution of that trait (Embretson & Reise 2000).
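For readers curious about the mechanics, here's a rough sketch of this kind of scale construction: simulated item responses tapping a single latent disposition, an internal-consistency check, and a standardized composite score. Genuine IRT scoring weights items by their estimated discrimination and difficulty, which requires specialized estimation this toy version skips:

```python
# Toy sketch of composite-scale construction. NOT the actual SCS_1.0 items
# or its IRT scoring procedure -- everything here is simulated.
import numpy as np

rng = np.random.default_rng(7)
n, k = 2000, 12                        # respondents x items
theta = rng.normal(0, 1, n)            # latent "science curiosity"
loadings = rng.uniform(0.4, 0.8, k)    # how strongly each item taps it
items = np.outer(theta, loadings) + rng.normal(0, 1, (n, k))

# Cronbach's alpha: do the items covary enough to form one scale?
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"alpha = {alpha:.2f}")

# Standardized composite: mean 0, units in standard deviations.
raw = items.sum(axis=1)
scs = (raw - raw.mean()) / raw.std(ddof=1)
```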

Are *you* science curious? If so, you'll click on this to see how SCS predicts behavior evincing a desire to learn of scientific discoveries!

Finally and most importantly, SCS_1.0 was behaviorally validated.

As detailed in ESFI Study Report No. 1, subjects were instructed to watch a 10-minute clip from the science documentary Your Inner Fish.  SCS_1.0 strongly predicted engagement with the clip as reflected not only in self-reported interest but also in objective measures such as duration of viewing time and subjects’ election (or not) to be furnished free access to the documentary as a whole.

SCS_1.0 is by no means understood to be an ideal science curiosity measure.  Additional testing is necessary, both to assure the robustness of the scale and to refine its powers to discern the motivation to seek out and consume science information for pleasure.

Moreover, SCS_1.0 was self-consciously designed to assess this disposition in adult members of the public; variants would be appropriate for specialized populations including elementary or secondary school students.

But what SCS_1.0 does do, we believe, is initiate a process that there's every reason to believe will generate measures of genuine value to researchers interested in assessing science curiosity in the general public and in specialized subpopulations.  The researchers associated with CCP’s ESFI and other evidence-based science communication initiatives are eager to participate in that process.  But they are also eager to stimulate others to participate in it either by building on and extending SCS_1.0 or by developing alternatives that genuinely predict behavior that manifests the motivation to seek out and consume scientific information.

Existing “science interest” measures just don’t do that.  SCS_1.0 shows that it is possible to do much better.

References

Besley, J.C. The state of public opinion research on attitudes and understanding of science and technology. Bulletin of Science, Technology & Society, 0270467613496723 (2013).

Blalock, C.L., Lichtenstein, M.J., Owen, S., Pruski, L., Marshall, C. & Toepperwein, M. In Pursuit of Validity: A comprehensive review of science attitude instruments 1935–2005. International Journal of Science Education 30, 961-977 (2008).

Embretson, S.E. & Reise, S.P. Item Response Theory for Psychologists (L. Erlbaum Associates, Mahwah, N.J., 2000).

Fischhoff, B. & Scheufele, D.A. The science of science communication. Proceedings of the National Academy of Sciences 110, 14031-14032 (2013).

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm, 14, 1-12 (2015).

Loewenstein, G. The psychology of curiosity: A review and reinterpretation. Psychological Bulletin 116, 75 (1994).

National Science Foundation. Science and Engineering Indicators, 2010 (National Science Foundation, Arlington, Va., 2010).

Osborne, J., Simon, S. & Collins, S. Attitudes towards science: A review of the literature and its implications. International Journal of Science Education 25, 1049-1079 (2003).

Reio, T.G., Jr., Petrosko, J.M., Wiswell, A.K. & Thongsukmag, J. The measurement and conceptualization of curiosity. The Journal of Genetic Psychology 167, 117-135 (2006).

Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006).