
Recent blog entries
Friday
Feb 22, 2013

Is A. Gelman trying to provoke me, or is that just my narcissism speaking?


Friday
Feb 22, 2013

The false and tedious "defective brain" meme

I know expressing exasperation doesn't really accomplish much but:

Please stop the nonsense on our “defective brains.”

Frankly, I don’t know why journalists write, much less why newspapers and newsmagazines continue to publish, the same breathless, “OMG! Scientists have determined we’re stupid!!!” story over & over & over. 

Maybe it is because they assume readers are stupid and will find the same simplistic rendering of social psychology research entertaining over & over & over.

Or maybe the writers who keep recycling this comic book account of decision science can't grasp the grownup version of why people become culturally polarized on risk and related facts—although, honestly, it’s really not that complicated!

Look: the source of persistent controversy over risks and related facts of policy significance is our polluted science communication environment, not any defects in our rationality.

People need to (and do) accept as known by science much much much more than they could possibly understand through personal observation and study.  They do this by integrating themselves into social networks—groups of people linked by cultural affinity—that reliably orient their members toward collective knowledge of consequence to their personal and collective well-being.

The networks we rely on are numerous and diverse—because we live in a pluralistic society (as a result, in fact, of the same norms and institutions that make a liberal market society the political regime most congenial to the flourishing of scientific inquiry).  But ordinarily those networks converge on what’s collectively known; cultural affinity groups that failed to reliably steer their members toward the best available evidence on how to survive and live well would themselves die out.  

Polarization occurs only when risks or other facts that admit of scientific inquiry become entangled in antagonistic cultural meanings. In that situation, positions on these issues will come to be understood as markers of loyalty to opposing groups.  The psychic pressure to protect their standing in groups that confer immense material and emotional benefits on them will then motivate individuals to persist in beliefs that signify their group commitments.

They'll do that in part by dismissing as noncredible or otherwise rationalizing away evidence that threatens to drive a wedge between them and their peers. Indeed, the most scientifically literate and analytically adept members of these groups will do this with the greatest consistency and success.  

Once factual issues come to bear antagonistic cultural meanings, it is perfectly rational for an individual to use his or her intelligence this way: being "wrong" on the science of a societal risk like climate change or nuclear power won't affect the level of risk that person (or anyone else that person cares about) faces: nothing that person does as consumer, voter, public-discussion participant, etc., will be consequential enough to matter. Being on the wrong side of the issue within his or her cultural group, in contrast, could spell disaster for that person in everyday life.

So, in that unfortunate situation, the better our "brains" work, the more polarized we'll be. (BTW, what does it add to these boring, formulaic "boy, are humans dumb!" stories to say "scientists have discovered that our brains  are responsible for our inability to agree on facts!!"? Where else could cognition be occurring? Our feet?!)

The number of issues that have that character, though, is minuscule in comparison to the number that don’t. Which side one is on with respect to pasteurized milk, fluoridated water, high-power transmission lines, “mad cow disease,” use of microwave ovens, exposure to Freon gas from refrigerators, treatment of bacterial diseases with antibiotics, the inoculation of children against Hepatitis B, etc., etc., isn't viewed as a badge of group loyalty and commitment for the affinity groups most people belong to. Hence, there's no meaningful amount of cultural polarization on these issues--at least in the US (meaning pathologies are local; in Europe there might be cultural dispute on some of these issues & not on some of the ones that divide people here).

The entanglement of facts that admit of scientific investigation—e.g., “carbon emissions are heating the planet”; “deep geologic isolation of nuclear wastes is safe”—with antagonistic meanings occurs by a mixture of influences, including strategic behavior, poor institutional design, and sheer misadventure. In no such case was the problem inevitable; indeed, in most, such entanglement could easily have been avoided.

These antagonistic meanings, then, are a kind of pollution in the science communication environment.  They disable the normal and normally reliable faculties of rational discernment by which ordinary individuals recognize what is collectively known.

One of the central missions of the science of science communication in a liberal democratic state is to protect the science communication environment from such contamination, and to develop means for detoxifying that environment when preventive or protective measures fail.

This is the account that is best supported by decision science. 

And if you can’t figure out how to make that into an interesting story, then you are falling short in relation to the craft norms of science journalism, the skilled practitioners of which continuously enrich human experience by figuring out how to make the wonder of what's known to science known by ordinary, intelligent, curious people.

Thursday
Feb 21, 2013

Local adaptation & field testing the science of science communication

from Making Climate-Science Communication Evidence-based—All the Way Down:

Consider this paradox. If one is trying to be elected to Congress in either Florida or Arizona, it is not a good idea to make “combating global climate change” the centerpiece of one’s campaign. Yet both of these states are hotbeds of local political activity focusing on climate adaptation. A bill passed by Florida’s Republican-controlled legislature in 2011 and signed into law by its tea-party Governor has initiated city- and county-level proceedings to formulate measures for protecting the state from the impact of projected sea-level rises, which are expected to be aggravated by the increased incidence of hurricanes.

Arizona is the site of similar initiatives. Overseen by that state’s conservative Governor (who once punched a reporter for asking her whether she believed in global warming), the Arizona proceedings are aimed at anticipating expected stresses on regional water supplies.

Climate science—of the highest quality, and supplied by expert governmental and academic sources—is playing a key role in the deliberations of both states.  Florida officials, for example, have insisted that new nuclear power generation facilities being constructed offshore at Turkey Point be raised to a level higher than contemplated by the original design in order to reflect new sea-level rise and storm-activity projections associated with climate change. The basis of these Florida officials’ projections is the same scientific models that Florida Senator Marco Rubio, now considered a likely 2016 presidential candidate, says he still finds insufficiently convincing to justify national regulation of carbon emissions.

The influences that trigger cultural cognition when climate change is addressed at the national level are much weaker at the local one. When they are considering adaptation, citizens engage the issue of climate change not as members of warring cultural factions but as property owners, resource consumers, insurance policy holders, and taxpayers—identities they all share. The people who are furnishing them with pertinent scientific evidence about the risks they face and how to abate them are not the national representatives of competing political brands but rather their municipal representatives, their neighbors, and even their local utility companies.

What’s more, the sorts of issues they are addressing—damage to property and infrastructure from flooding, reduced access to scarce water supplies, diminished farming yields as a result of drought—are matters they deal with all the time. They are the issues they have always dealt with as members of the regions in which they live; they have a natural shared vocabulary for thinking and talking about these issues, the use of which reinforces their sense of linked fate and reassures them they are working with others whose interests are aligned with theirs. Because they are, in effect, all on the same team, citizens at the local level are less likely to react to scientific evidence in the defensive, partisan way that sports fans do to contentious officiating calls.

Nevertheless, it would be a mistake to assume that local engagement with adaptation is impervious to polarizing forms of motivated reasoning. The antagonistic cultural meanings that have contaminated the national science communication environment could easily spill over into the local one as well. Something like this happened—or came close to it—in North Carolina, where the state legislature enacted a law that restricts use of anything but “historical data” on sea-level in state planning. The provision got enacted because proponents of adaptation planning legislation there failed to do what those in the neighboring state of Virginia did in creating a rhetorical separation between the issue of local flood planning and “global climate change.” Polarizing forms of engagement have bogged down municipal planning in some parts of Florida—at the same time as progress is being made elsewhere in the state.

The issue of local adaptation, then, presents a unique but precarious opportunity to promote constructive public engagement with climate science. The prospects for success will turn on how science is communicated—by scientists addressing local officials and the public, certainly, but also by local officials addressing their constituents and by myriad civic entities (chambers of commerce, property owner associations, utility companies) addressing the individuals whom they serve. These climate-science communicators face myriad challenges that admit of informed, evidence-based guidance, and they are eager to get guidance of that kind. Making their needs the focus of field-based science-communication experiments would confer an immense benefit on them.

The social science researchers conducting such experiments would receive an immense benefit in return. Collaborating with these communicators to help them protect their science communication environment from degradation, and to effectively deliver consequential scientific information within it, would generate a wealth of knowledge on how to adapt insights from lab models to the real world.

There are lots of places to do science communication field experiments, of course, because there are lots of settings in which people are making decisions that should be informed by the best available climate science. There is no incompatibility between carrying out programs in support of adaptation-science communication simultaneously with ones focused on communicating science relevant to climate policymaking at the national level.

On the contrary, there are likely to be numerous synergies. For one thing, the knowledge that adaptation-focused field experimentation will likely generate about how to convert laboratory models to field-based strategies will be relevant to science communication in all domains. In addition, by widening the positive exposure to climate science, adaptation-focused communication is likely to create greater public receptivity to open-minded engagement with this science in all contexts in which it is relevant. Finally, by uniting on a local level all manner of groups and interests that currently occupy an adversarial relation on the climate change issue nationally, the experience of constructive public engagement with climate science at the local level has the potential to clear the air of the toxic meanings that have been poisoning climate discourse in our democracy for decades.

Tuesday
Feb 19, 2013

On science communication & the job of the scientist: a thoughtful response from a scientist

Below is an extremely thoughtful comment relating to my 2d post on my experience in giving a presentation to a group of public-spirited citizen scientists at the North American Carbon Program meeting a couple of weeks ago.

Just by way of context: I stressed that it is a mistake to think that the job of the scientist is to communicate as opposed to doing science -- not because scientists shouldn't communicate with the public (the ones who take on that added demand are heroes in my book) but because a democratic society that expects or relies on its scientists to bear the responsibility for making what's known to science known to citizens necessarily doesn't get the central tenets of the science of science communication: (1) that there is a distinction between "doing" and "communicating" valid science; and (2) that the latter demands its own science, its own professional training, and its own reliable implementing institutions and practices. Not getting (1) & (2) is the source of the persistent public conflict on climate science & risks squandering in general what is arguably our society's greatest asset -- the knowledge that science confers on how to secure collective health, safety, and prosperity.

But the one thing I am more confident is correct than this argument is that the surest means for remedying the deficit in our society's science-communication intelligence is through the process of conjecture and refutation that is the signature of science. Let's articulate as many experience-informed hypotheses as we can; and let's test them by doing and modeling them within our universities and within all the other settings in which science and science-informed policymaking are practiced.

So consider this inspired account of what's to be done. If it weren't an "n of 1," I myself would accept it as in itself refuting my claim that it's a mistake to conflate excellence in doing science with excellence in communicating it.

from Paul Shepson:

Dan - you said in your revised post, that "Their job is not to communicate their science to non-experts or members of the public." This did strike me as a weird thing to say. When I am doing science, I try to do it in a scientifically defensible way. When I am communicating to the public about science, I try to do it in a way in which they learn something, and hopefully laugh a few times. But what my job is, that's for me and my employer to negotiate, and hopefully, for me to be creative about. My job is to feel good about what I do, and at the same time hopefully help people, and get to eat. But, as I said in my email to you, it is indeed our responsibility to do exactly this (communicate to members of the public), as I said, especially when the scientific results have large social, ethical, economic, human and ecosystem health impacts. And, it is the case that Federal agencies, e.g. NSF, that fund the scientific community REQUIRE that we communicate our science outside of the scientific community.

For me, doing this is an integral part of who I am as a scientist. I have learned, from a variety of personal experiences, like marriage counseling, and communicating about climate change to Rotarians, etc., that it is very important to "get into the heads of" the members of the audience. But, until your presentation at the NACP meeting, I didn't fully have the jargon about, and the better informed ideas about, the importance and impact of cultural cognition. This has helped me a great deal, and I am sure it will in future presentations; I am already implementing changes (in my head) as a result of your blogs and your presentation. But I don't typically expect scientists to communicate, as you have said, the "validity of valid science". Scientists more often are communicating about the process of science, which can be far more interesting and entertaining, than trying to hammer home the idea that some set of climate science-related conclusions are valid. For me, a quantitative scientist, to discuss the "validity" of my work requires the use of error analysis, and thus, for a general audience, might require them to use stimulants of some sort. People sometimes use the word valid or validate when referring to one of the most important tools of science, the model. But, models are almost never valid, they are a representation and most often simply a test of our understanding of a natural system, such as the Earth. It is hard for me to imagine an Earth System model as ever being valid. But what is fun to tell people about is the process of finding things out, to use a Feynman-ian-like term, since you have referred to Feynman in your blog. People will listen to stories about how hypotheses are developed, e.g. about warming in the Arctic, and then about how you went there to test it, and observed a similar warming, and a similar loss of sea ice, but how that loss of sea ice is occurring faster than the models predicted, and then how that comparison led you to think harder about what is wrong with a model. Models aren't ever valid, they are wrong, and it is learning about the wrongness that leads to scientific progress. The finding things out, and the wrongness is the excitement of science. People love to hear stories about what an Inupiat Eskimo taught you about ice that you never learned from other scientists, and how that helped you rethink your model. Science is a process, not a bunch of end results that are either valid, or not. Ah, but enough ranting.

Regarding making my University bear its share of the burden, I can't really make my University do much of anything. I have tried! But, I can motivate myself to try to inspire young people about the process of science, and to tweak people's minds to think about things in a different way, and hopefully, in a positive, constructive way. So, when I asked you about taking a renewable energy engineer with me to the Rotary Club, I was suggesting that it might be effective for people who value individualism and a hierarchical world to see the unprecedented investment opportunities in renewable energy, which everyone on the planet will likely eventually need. It's a darn big market! And that pursuit of such investment opportunities might "symbolize human resourcefulness", in a way that is fully consistent with the values of the cultural group with which they identify. Shouldn't we try to take Warren Buffet with us to the Rotary Club? I think the climate science community should be communicating that everyone can win, and that includes the cultural groups with which they strongly identify, in the pursuit of the solutions to climate change.

While you might not think that I am, I will take the liberty of saying thank you for helping me to think more clearly.

 

Monday
Feb 18, 2013

The two-channel strategy/model for satisfying the public's appetite to know what is known by science

Below is a summary of my remarks (or what I can remember of them!) at the AAAS panel I participated in on Friday on Engaging Lay Publics in Museums on Provocative Societal Questions Related to Science. My slides are here.  It is part 1 of a 2-part series; in the 2d part, I'll summarize the presentations of co-panelists Lucy Kirschner and Elizabeth Kunz Kollman on a truly astonishing exploratory field-experiment that the Boston Museum of Science conducted in the form of an exhibit designed to promote reflection on the dynamics of public engagement with science relevant to controversial policy issues.

A two-channel strategy (model!) for enlarging satisfaction of the public appetite to know what’s known

1. There are two situations in which professional science communicators get into trouble. The first is when they rely entirely on their intuitions unfortified with evidence. The second is when they ask social scientists what to do based on the evidence and the social scientists actually purport to tell them.

The problem with the evidence-free approach is not that professional communicators don’t have any sound intuitions about what to do; it’s that they have too many of them. Their experience-informed insights are always plausible, but here, as elsewhere with complicated social matters, more things are plausible than are true. Hypothesis, observation, and measurement are needed to cull the latter from the former.

The problem with communicators relying on the social scientists to tell them what to do is that the social scientists don’t have practical, experience-based insights into communication. They have models. The models, if they are well-designed, identify the mechanisms of consequence in particular communication settings. Those mechanisms are important for determining which of the communicators’ plausible intuitions are most likely to work. But the models that produced the mechanisms are not themselves communication materials. Communicators need to turn those models into materials that will produce those effects in the real world. Social scientists can’t do it for them: they don’t have evidence on that, and if they just try to guess what will work, they will say many implausible (also empty, self-contradictory) things because they lack local knowledge.

I certainly don’t have reliable intuitions on how to communicate science in a manner that satisfies the appetite of the public (or the appetite of that portion of it that has one) to enjoy the thrill and wonder of knowing what’s known. I am part of that public, and recognize with admiration and gratitude the special craft sense of those who feed the curiosity of me and others who share my interest. 

Those who have this special professional skill are intent all the same on improving their art.  I have through empirical study acquired knowledge of some of the mechanisms that shape public engagement with science.  Is what I know something that will help these communicators? Once they’ve heard what I said, they should tell me.

2.  The science of science communication can help communicators only through evidence-based experiments based on social scientist/practitioner collaboration. Based on what the social scientist knows about mechanisms, the communicator will be filled with ideas about how to fashion communication strategies that successfully reproduce the effects of the social scientists’ models in the world. So social scientists shouldn’t tell communicators what to do; communicators should tell social scientists what they think will work. Because here too the communicators will have more plausible intuitions than can be true, their proposals should be regarded as hypotheses. The social scientists can then help the communicators to structure their programs as experiments, ones that generate observations that can be measured and that support valid inferences about what does and doesn’t work.  They can use that information. But they should also share it, so others can learn too.

3.  A two-channel strategy. The two-channel strategy is a model of communicating science. It tests a hypothesis about how mechanisms associated with science communication conflicts can be neutralized.  The basic idea is that ordinary members of the public receive science information along two channels. One transmits content. The other transmits meaning: what is the significance, if any, for my standing in my cultural group associated with crediting or discrediting this information?  Conflicts over climate change reflect a conflict between the signals being transmitted along the content channel and the meaning channel; many citizens “push back”—they don’t engage the communication attentively and with an open mind—because the information conveys meanings that threaten their cultural identity.  The CCP experiment on “geoengineering and the science communication environment” is a model of how conscious regulation of the information on the meaning channel can improve engagement with content transmitted along the content channel.

4.  The two-channels model and satisfying the public appetite to know what’s known.  Some professional science communicators—including science documentary producers and science museum directors—subscribe to what might be called the “missing science audience thesis” (MAT): that the number of people who enjoy their materials is smaller than the total who possess an appetite to know what’s known and who would find it satisfied (amply and exhilaratingly) by the work these communicators do.  Could the two-channel model be of value in overcoming MAT?

The reason to surmise it might be is that the demographic characteristics of these communicators’ current audience suggest the underrepresentation of people of the same cultural style as those who react dismissively to climate science. These individuals—many of whom have hierarchical and individualistic worldviews—are not anti-science (no significant portion of the American public actually is): they are science literate and share in the prevailing positive view of scientists in American society; they have admiration for technological innovation, including nuclear power, nanotechnology and geoengineering; and like everyone else, they favor making use of science in public policymaking—indeed, like their opponents in culturally factionalized debates over policy-relevant science, they believe (sometimes correctly, sometimes incorrectly) that the positions that predominate in their group are consistent with scientific consensus.  The two-channel strategy suggests that communicators can tap into the latent receptivity of these citizens to the content of scientific information on climate change by combining that information with cultural meanings that are congenial rather than hostile to their worldviews.

Could MAT originate in an unintended conflict between the information being conveyed along the content and meaning channels? If so, what elements of the information being communicated generate the hostile meanings? How might those be modified to make the signal transmitted along the meaning channel more congenial without changing the one being conveyed along the content channel—since, indeed, the supposition is that the content of these communicators’ materials is exactly what would satisfy the appetite of these citizens to know what’s known?

The communicators at the Boston Museum of Science aren’t asking me those questions; they are showing me and others their own answers, which are the animating conjectures of practical field experiments conducted as part of their own work.  They are also sharing with others in their extraordinary profession the valuable knowledge that their efforts have generated.

To me, the results bear all the signatures of the scientific advancement of knowledge.

And not surprisingly, given that these field experimenters are also expert communicators, their results inspire in me the same thrill and awe that I experience whenever I cross the bridge that their craft supplies between my curiosity and the wondrous discoveries of science.

Saturday
Feb 16, 2013

Is it plausible that higher cognitive reflection (system 2) increases polarization?

This is from correspondence with @Joshua, who says:

I'm having difficulty understanding [your claim that "in a polluted science communication environment, there will be the equivalent of a psychic incentive to form group-congruent beliefs. People who are higher in science comprehension will be even better at doing that."]

When you say "better at doing that," doesn't it mean, essentially, better at being polarized and hence, more polarized? If someone is driven to acquire more data by virtue of a system 2 orientation, and accordingly is better at filtering those increased data to confirm bias, doesn't that necessarily translate into being more polarized?

That doesn't quite fit with my non-empirical assessment of human nature. My guess is that scientific literacy probably has little effect on one's tendency towards polarization (not zero effect - I assume that "literacy" as a general characteristic on a macro-scale is associated with less antagonistic behavior) , but someone who is more unequivocal in their viewpoint is more likely to seek out information to confirm their bias (because their identity is more closely associated with that viewpoint and they have more to lose if they're wrong) - and even more so if they happen to have a system 2 orientation.

My response:

I think you've got it -- "it" being my claim: (1) that in an environment in which positions on risk or facts of policy-significance become suffused with identity-signifying meanings, there will be cultural polarization b/c of the pressure members of diverse communities experience to protect their standing in the group; and (2) such polarization will be greater among individuals who are most disposed and able to engage in conscious, effortful information processing (system 2), because people who are better in general at making use of information to advance their interests will, in this polluted environment, use those abilities to attain a tighter fit between their beliefs and their identities (through motivated search for information, through closer scrutiny of messages that might contain meanings threatening to or affirming of group identity, & through formulation of innovative counterarguments).

You say you have trouble with this claim b/c it doesn’t fit your own observation & sense of human nature?

My guess would be that this position both fits many impressions most people have about how things work, and is at odds with many impressions they have formed that suggest something else could be going on. I certainly feel this way.

This is the situation we are in usually -- possessed of more plausible conjectures about what is going on than can really be (helpfully) true. That's why we should hypothesize, measure, observe, & report; it is why we shouldn't tell stories, that is, confidently present what is imaginative conjecture embroidered w/ bits of psychological research as "scientifically established" accounts that disguise uncertainty and stifle continued investigation.

So I don't offer my account as any sort of "conclusively proven!" show stopper. I offer it as my hypothesis.

And I offer both the "science comprehension & polarization" study and the "cognitive reflection, motivated reasoning, and ideology" experiment as evidence that I think gives us reason to treat this hypothesis as more likely true (or closer to useful truth) than alternatives. Then I wait for others to produce more evidence that we can use to adjust further. But if I have to act in the meantime, I do what seems sensible based on my best current understanding of what's true.

So I am content if people start with the idea, "this expressive rationality thesis (ERT) you keep talking about -- sure, it's plausible, but what's the evidence that that rather than [9 other plausible conjectures] is the source of the problem?"

If someone says, "ERT is not plausible," I'm puzzled; most of us have enough common material in our registers of casual observation to be able to recognize how people could believe one or another of the things that any one of us finds plausible.

But if that person finds ERT implausible, I will simply say to her, "well, still consider my evidence, please. I imagine after you do you will still not be convinced ERT is the source of disputes over climate change & nuclear power & the like, since you are starting w/ prior odds so long against this being so. But my hope is that you'll conclude that the evidence I have collected is sound and supplies a likelihood ratio > 1 in support of ERT, and that you will then at least have posterior odds that are less long against it."

If the person then accepts the invitation, considers the evidence open-mindedly, and gives it the weight that it is due under appropriate criteria for judging the validity of empirical proof, that will make me happy, too.

As long as we both keep iterating & updating, we'll converge eventually. 
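A minimal numerical sketch of the odds-form updating described above (my own illustration, not from any CCP study; the numbers are invented) makes the "converge eventually" point concrete. In odds form, Bayes' rule says that posterior odds equal prior odds multiplied by the likelihood ratio of each new piece of evidence.

```python
# Sketch (invented numbers) of odds-form Bayesian updating:
# posterior odds = prior odds * likelihood ratio.

def update(odds, likelihood_ratio):
    """Update the odds in favor of ERT on one piece of evidence."""
    return odds * likelihood_ratio

skeptic, believer = 0.01, 5.0   # very different prior odds on ERT

# Six independent studies, each supplying a likelihood ratio > 1 for ERT.
for lr in [2.0] * 6:
    skeptic = update(skeptic, lr)
    believer = update(believer, lr)

print(skeptic, believer)  # 0.64 320.0
# Both sets of odds move toward ERT. The ratio between them stays fixed
# (the gap in log-odds is constant), but with enough such evidence even
# the skeptic's posterior odds cross 1:1 -- hence "converge eventually."
```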

Thursday
Feb 14, 2013

Terrorism, climate change, and surprise

In one of the enlightening "drunkard's walks" that the internet enables, I bumped into this fascinating blog post at the site Grow this City. Shouldn't one show one's gratitude for the gratuitous conferral of this sort of benefit by making an effort to enable others to enjoy it too?  So I repost; and then offer a conversational response.

At a recent meeting of a class on climate change policy, my professor led a discussion on the psychology of climate change and why it is so difficult to motivate people to act on the dire warnings published by climate scientists.

The basis of our discussion was a set of three articles published by psychologists on the topic. Two were by Elke U. Weber: “Public Understanding of Climate Change in the United States” and “Why Global Warming Does Not Scare Us (Yet)”. A third was by Dan M. Kahan titled “The Tragedy of the Risk-Perception Commons: Culture Conflict, Rationality Conflict, and Climate Change”. A couple of lines from the abstract of one of Weber’s articles sum up the conclusion that both she and Kahan reach:

“When people fail to be alarmed about a risk or hazard, they do not take precautions… The time-delayed, abstract, and often statistical nature of the risks of global warming does not evoke strong visceral reactions.”

Basically, people do not take action to prevent or prepare for climate change because climate change is not scary enough.

Reading those findings got me thinking – is there a phenomenon similar to climate change that does scare people?

Eureka! There is such a thing! It’s called Terrorism. And, unlike climate change, it scares the shit out of people.

The analogy between climate change and Terrorism holds up for these three reasons:

1. They are diffuse in their causes and in their harms.

2. Preventing them requires large-scale social coercion and massive diversions of resources.

3. They cannot be prevented with total certainty even if we employ all the coercion and resources we can muster.

I brought this idea up in class and might as well have detonated a flash-bang grenade. My peers were shell-shocked. Their ethical circuitry shorted out. A business major blurted, “Terrorism isn’t like climate change. It’s a big danger that we have to fight to defend our country.”

To this I said, “The chances of being injured or killed in an act of terror is very low. You have a better chance being struck by lightning.”

The business major countered, “Look at Oklahoma City, the World Trade Center, the Shoe Bomber. Terrorism happens all the time.”

I then suggested that it may be the case that the US government has responded more decisively and with more resources to the threat of terrorism than to the threat of climate change because the United States is a fossil fuel-based regime. The reason that there was such a thorough (and effective) propaganda campaign to justify the “War on Terror” was that it generated support for the invasion and decade-long wars in Iraq and Afghanistan. Those wars, I said, secured Middle Eastern oil for the United States, strengthening its fossil fuel-based regime. On the other hand, preventing climate change is not as strategically important to the USA, so our government has devoted more resources to fighting Terrorism than to addressing the problem of climate change.

My classmates went pale. My professor stayed silent. And the business major came at me again.

“The wars in Iraq and Afghanistan were about terrorism. They had nothing to do with oil. They made us more safe from terrorism.”

I said, “Come on, the idea that we invaded those countries because of oil is not a crazy one. It’s obvious.”

But my classmates looked at me like I was insane, like I had jumped on the big oval table in the middle of the room and defecated before them.

But the normally quiet girl to my right spoke up. “It might also have something to do with class. 9/11 blew up a skyscraper in Manhattan. Climate change hurts poor people first.”

But my professor, who has a JD from Stanford and an aversion to talking about class or speaking ill of the US government, intervened. He changed the subject, and ‘terrorism’ didn’t enter into the same sentence as ‘climate change’ from then on.

Bonus fact: the Iraq War has been more expensive than the anticipated cost of the Kyoto Protocol to the US.

1. This is a really compelling & cool anecdote that powerfully illustrates how intriguingly & oddly selective perceptions of risk are. Obviously, an element of the phenomenon is how unaware people (we!) normally are of how oddly selective our perceptions are — they just seem so given, obvious, we don’t notice.  The failure of people (like your classmates but everyone else, including you and me at one time or another)  to “get” how oddly selective risk perceptions are — to react in fact w/ incomprehension mixed with irritation — when this is pointed out is obviously bound up with whatever it is in us that makes us form such strange schedules of risk perception in the first place.

Two other cool things in the story: at least for a curious person, the surprise at discovering instances of this odd selectivity & realizing that they beg for explanation is pleasurable; and for the curious person the disappointment of finding out that other people actually resist being made to confront the puzzle is offset by what that teaches her about the shape of the pieces she needs to solve the puzzle.

2. The thesis — we overestimate terrorism risks relative to climate change ones because of the vivid and immediate character of the former and the less emotionally sensational, more remote character of the latter — is very plausible, because it's rooted, as you point out, in real dynamics of risk perception. For a wonderful essay that elaborates on this hypothesis (without presenting it as a hypothesis, unfortunately; conjecture is beautiful, and supplies the motivation for investigation, unless it is disguised as a “scientific, empirical fact,” in which case it risks stifling scientific, empirical engagement; you aren’t doing that, btw!), see Sunstein, C.R. On the Divergent American Reactions to Terrorism and Climate Change. Columbia Law Rev 107, 503-557 (2007).

3. I want to reciprocate the friendly gesture reflected in your sharing this genuinely engaging and thoughtful insight (and the infectious nature of the excitement of your discovery of it) by suggesting that I think that explanation is not quite right!

The paper of mine that you cite — “Tragedy of the Risk Perceptions Commons,” a working paper version of Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G., The polarizing impact of science literacy and numeracy on perceived climate change risks, Nature Climate Change 2, 732-735 (2012) — is actually meant to pit that hypothesis against a rival one.

You surmise — again, quite plausibly, in light of mechanisms of cognition that we know are very important for risk perception — that the public's relative ranking of terrorism and climate change risks is a consequence of the tendency of people to process information about risk heuristically, intuitively, emotionally (Kahneman’s “fast” system 1), as opposed to consciously, deliberately, analytically (“slow” system 2).

Our study presents evidence, though, that the disposition to think consciously, deliberately, analytically (to use system 2) doesn’t uniformly predict more concern about climate change. In fact, it predicts greater cultural polarization over climate change risks and a whole bunch of other ones too! We treat this as evidence that public conflict or confusion over climate change risks is a consequence of “cultural cognition,” a dynamic that unconsciously motivates people to attend selectively to information about risk in patterns that reinforce their commitment to opposing groups. Those who see climate change as higher in risk actually see terrorism risks as less of a concern for society. (Take a look, e.g., at the group variation reflected in this chaotic graphic.) The effect only gets stronger as people's ability to engage in reflective, dispassionate analytical reasoning increases.
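Purely to illustrate the shape of that result (the coefficients below are invented; this is not the study's data or model), a toy calculation shows what it means for polarization to grow with science comprehension: the cultural groups' mean risk perceptions move apart, not together, as comprehension rises.

```python
# Toy illustration (invented coefficients, not CCP data): climate-risk
# perception as a function of cultural group and science comprehension,
# with an interaction term that widens the gap as comprehension rises.

def perceived_risk(group, comprehension):
    # group: +1 = egalitarian-communitarian, -1 = hierarchical-individualist
    # comprehension: 0.0 (low) to 1.0 (high)
    baseline, interaction = 5.0, 3.0
    return baseline + interaction * group * comprehension

for c in (0.0, 0.5, 1.0):
    gap = perceived_risk(+1, c) - perceived_risk(-1, c)
    print(f"comprehension={c}: group gap={gap}")  # 0.0 -> 3.0 -> 6.0
```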

4. As I said, this observation is meant to reciprocate the spirit of your post. My aim is not to “set you straight,” but to deepen if I can your sense of wonder over things that are, as you recognize, filled with surprise!

If you in turn surprise me back by showing me that my solution to this tiny patch of the puzzle is also incomplete — I will be shocked (but not surprised again to find myself surprised), and once again grateful to you.

What a strange world!

But also what a sad situation the citizens of our democracy are in — to be in disagreement over such consequential things, and to feel motivated to react with resentment toward others who see things differently from them.

Maybe by indulging our curiosity, you and I and others will learn things that can be used to help the members of our culturally pluralistic society converge in their understandings of the best available evidence of the dangers we face and how to abate them.

Wednesday
Feb 13, 2013

Evidence-based Climate Science Communication (new paper!)

Here's a new paper. Comments welcome!

There are 2 primary motivations for this essay.

The first might be pretty obvious to people who have been able to observe organized planning and execution of climate-science communication first hand. If not, read between the lines in  the first few pages & you will get a sense.  

Frankly, it frustrates me to see how ad hoc the practice of climate-science communication is.  There's a weird disconnect here. People who are appropriately concerned to make public-policy deliberations reflect the best available scientific evidence don't pursue that goal scientifically.

The implicit philosophy that seems to animate planning and executing climate-science communication is "all opinions are created equal."

Well, sorry, no. All opinions are hypotheses or priors. And they can't all be equally valid. So figure out empirically how to identify the ones that are.

Indeed, take a look & see what's already been tested. It's progress to recognize that yesterday's plausible conjecture is today's dead end or false start. Perpetually recycling imaginative conjectures instead of updating based on evidence condemns the enterprise of informed communication to perpetual wheel-spinning.

My second motivation is to call attention to local adaptation as one of the field "laboratories" in which informed conjectures should be tested.  Engagement with valid science there can help promote engagement with it generally.  Moreover, the need for engagement at the local level is urgent and will be no matter what else happens anyplace else.  We could end carbon emissions today, and people in vulnerable regions in the U.S. would still be facing significant adverse climate impacts for over 100 yrs.  The failure to act now, moreover, will magnify the cost -- in pain & in dollars -- that people in these regions will be needlessly forced to endure.

So let's get the empirical toolkits out, & go local (and national and international, too, just don't leave adaptation out).

Thursday
Feb 7, 2013

The declining authority of science? (Science of Science Communication course, Session 3)

This semester I'm teaching a course entitled the Science of Science Communication. I have posted general information on the course and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the third such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general.

In Session 3, we finished off “science literacy and public attitudes” by looking at “public attitudes” toward science.  The theory for investigating the literature here is that if one wants to understand the mechanisms by which scientific knowledge is transmitted in various settings, it likely is pretty important to consider how much value people attach to being informed of what science knows.

1.  So what are we talking about here? I’m going to refer to the “authority of science” to mean assent to its distinctive understanding of “knowing” as valid and as superior to competing understandings (e.g., a religious one that treats as known matters revealed by the word of God, etc.). The relevant literature on “attitudes toward science” tries to assess the extent of the authority of science, including variation in it among different groups and over time.

Indeed, a dominant theme in this literature is the declining or contested status of the authority of science. “Many scholars and policy makers fear that public trust in organized science has declined or remains inadequate,” summarizes Gauchat, a leading researcher in this field. What accounts for that?

2. Well, what are they talking about? But before examining the explanations for the growing resistance to the authority of science, it’s useful to interrogate the premise: why exactly would anyone worry that the authority of science is seriously in doubt in American society? 

Pew did an amazingly thorough and informative survey in 2009 and concluded “Americans like science.” They “believe overwhelmingly that science has benefited society and has helped make life easier for most people.”

This sentiment, moreover, is pretty widespread. “Partisans largely agree on the beneficial effects of science,” the Pew Report continues, “with 88% of Republicans, 84% of independents and 83% of Democrats saying the impact is mostly positive. There are differences—though not large—tied to race, education, and income.”

“[L]arge percentages,” too, “think that government investments in basic scientific research (73%) and engineering and technology (74%) pay off in the long run.” Again, this is not something that generates meaningful political divisions.

Data collected over three decades' time by the NSF suggests that this 2009 picture from Pew is but a frame in a thirty-year moving picture that shows -- well, a stationary object. Americans love science for all the wonderful things it does for them, want government to keep funding it, and have for decades.

 

Amusingly, the Pew Report seems to feel compelled to pay respect to the “declining authority” perception, even in the course of casting immense doubt on it.  The subtitle of the Report is “Scientific Achievements Less Prominent Than a Decade Ago.” The basis of this representation turns out to be a question that asked subjects to select the “Nation’s greatest achievement” from a specified list.  Whereas 47% picked “Science/medicine/technology” in 1999, only 27% did in 2009.  Most of the difference, though, was reflected in the 12 percentage point increase in “Civil rights/Equal rights,” and nearly all the rest in “Nothing/Don’t Know,” the only option chosen more often than “Science/medicine/technology.”

A better subtitle, then, would have been “After Election of America’s First African-American President, Recognition of Gains in Civil Rights Eats Away at Americans’ Awe of Science.”

3.  Uncritically examined assumptions tend to multiply.... I keep mentioning the bipartisan or nonpartisan aspect of the public’s warm feeling toward science because my guess is that the premise that the authority of science is in “decline” is an inference from the sad spectacle of political polarization on climate change. If so, then this would be a case where the uncritical acceptance of one assumption--that conflict over climate change reflects a decline in the authority of science-- has bred uncritical acceptance of another--that the authority of science is declining.

I could sort of understand why someone might hypothesize that people who are skeptical about climate change don’t accept science’s way of knowing, but not why anyone would persist in this view after examining any reasonable amount of evidence. 

The people who are skeptical about climate change, just like those who believe in it, believe by an overwhelming margin that “scientists contribute to the well-being of society.”  The reason that there is public division on climate change is not that one side rejects scientific consensus but that the two disagree about what the “consensus” on climate change is, a conclusion supported by numerous studies including the Pew Report.

A related mistake is to treat the partisan divide on climate as evidence that “Republicans” are “anti-science.”  Not only do the vast majority of such individuals who identify as Republican view science and its impact on society positively. They also, as the Pew Report notes, hold views on nuclear power more in keeping with those of scientists (who are themselves overwhelmingly Democratic) than the vast majority of ordinary members of the public who call themselves “Democrats.”

Another probable basis for the ill-supported premise that science’s authority is low or in decline etc. is the high proportion of the U.S. population—close to 50%--who say they believe in divine creation.  In fact, the vast majority of those who say they don’t believe in evolution also have highly positive views about the value of science.

I suppose one could treat the failure to “accept” evolution (or to “believe” in climate change)  as “rejection” of the authority of science by definition. But that would be a boring thing to do, and also invite error.

It would be boring because it would foreclose investigation of the extremely interesting question of how people who hold one position they know is rejected by science can nevertheless persist in an extremely positive view of science in general -- and simply live in a manner that so pervasively assumes science’s way of knowing is the best one (I don’t know for sure but am pretty confident that people who don’t believe in evolution are not likely to refuse to rely on a GPS system because its operation reflects Einstein’s theories on relativity, e.g.).

The error that's invited by equating rejection of evolution or climate change with “rejection of the authority of science” is the conclusion that the rejection of the authority of science causes those two beliefs.  Definitions, of course, don’t cause anything. So if we make the awkward choice to analytically equate rejection of evolution or of climate change with rejection of the authority of science, we will have to keep reminding ourselves that “rejection of the authority of science” would then be a fallacious answer to the question what really does cause differences in public beliefs about evolution and about climate change?

4.  But then what are the “public attitude” measures measuring? The public attitude scholars, and in particular Gauchat, report lots of interesting data on the influences on attitudes toward science.  The amount of variance they find, moreover, seems too large to be understood as an account of the difference between the 85% of Americans who seem to think science is great and the 15% or so who seem to have a different view. The question thus becomes: what exactly are they measuring, and what’s its relationship to people’s disposition to be guided by science’s way of knowing on matters of consequence to their decisionmaking?

Literally what these scholars are measuring is variance in a composite scale of attitudinal Likert items that appear in the GSS and the NSF Science Indicators. The items consist of statements (with which respondents indicate their level of disagreement or agreement on a 5- or 7-point scale) like these (a sketch of how items like these are typically combined into a single score follows the list):

  1. Because of science and technology, there will be more opportunities for the next generation.
  2. We depend too much on science and not enough on faith.
  3. Scientific research these days doesn’t pay enough attention to the moral values of society.
  4. Science makes our way of life change too fast.

I think these items are measuring something interesting, because Gauchat has found that they correlate in interesting ways with other individual characteristics.  One of these is an attitudinal disposition that Gauchat calls “institutional alienation,” which measures trust in major institutions of government and civil society. They also correlate highly with science literacy.

But in truth, I’m not really sure what the disposition being measured by this type of “public science attitude” scale is. Because we know that in fact the public reports having high regard for science, a composite “science attitude” scale presumably is picking up something more general than that. I am unaware (maybe a reader of this blog will direct me to relevant literature) of attempts to validate the “science attitude” scale in relation to whether people are willing to rely on science in their lives—for example, in seeking medical treatment from physicians, or making use of safety-related technologies in their work, etc.  I would be surprised if there were, given how unusual it is in the US & other modern, liberal democratic societies to see behavior that reflects genuine distrust for science’s authority. My guess is that the “public science attitudes” scales are measuring something akin to “anti-materialism” or “spiritualism.” Or maybe this is the elusive “fatalism” that haunts Douglas’s group-grid!

Indeed, I think Gauchat is interested in something more general than the “authority of science,” at least if we understand that to mean acceptance of science’s way of knowing as the best one.  He is looking for and likely finding pockets of American society that are unsatisfied with the meaning (or available meanings) of a life in which science’s authority is happily taken for granted by seemingly all cultural communities, even those for whom religion continues to furnish an important sentimental bond. 

For his purpose, though, he probably needs better measures than the ones that figure in the GSS and NSF batteries. I bet he’ll devise them. I suspect when he does, too, he’ll find they explain things that are more general than (& likely wholly unrelated to) partisan political disputes over issues like climate change.

Finally, in a very interesting paper, Gauchat examines variance in a GSS item that asks respondents to indicate how much “confidence” they have “in the people running . . . the Scientific Community”—“a great deal,” “only some,” or “hardly any.”  Gauchat reports finding that the correlation between respondents’ identifying themselves as politically “conservative” and selecting “a great deal” in response to this item has declined in the last 15 years. It’s interesting to note, though, that only about 50% of liberals have over time reported “a great deal” of confidence in “the people running . . . the Scientific Community,” and the individuals historically least likely to report “a great deal” of confidence identify themselves as “moderates.”

I have blogged previously on this paper. I think the finding bears a number of possible interpretations. One is that Republicans have become genuinely less “confident” in the “people running the Scientific Community” during the period in which climate change has become more politically salient and divisive. Another is that climate skepticism is exactly what the GSS “confidence” item—or at least variance in it—is really measuring; it seems reasonable that conservatives might understand the (odd!) notion of “people running the Scientific Community” to be an allusion to climate scientists.  Gauchat’s finding thus points the way for additional interesting investigations.

But whatever this item is measuring, it is not plausibly understood as a measure of a general acceptance of the authority of science, at least if that concept is understood as assent to the superiority of science’s way of knowing over alternative ones.

Republicans continue to go to doctors and use microwave ovens—and continue to say, as they have for decades, that they admire scientists and science, no doubt because it furnishes them with benefits both vital and mundane. 

They don’t (for the most part) believe in climate change, and if they are religious they probably don’t believe in evolution (same for religious Democrats).

But that’s something that needs another, more edifying explanation than “decline in the authority of science.”


Wednesday
Feb062013

Yet another installment of: "I only *study* science communication ..." 

Man, I suck at communicating!

I’ve now received 913 messages (in addition to many many comments) from scientists saying  “I attended your recent presentation, and you did fine—everyone loved you. Seriously. Don’t jump – here’s a number to call for help.  Okay? Okay?”

I see exactly what happened, of course. Despite my intentions, I came across like a whining, self-pitying baby, because I wrote something that made me sound like a whining, self-pitying baby!

Actually, the potential miscommunication I am most anxious to fix is any intimation that the audience at the North American Carbon Program meeting made me feel I wasn’t playing a constructive role in the discussion.  Certainly no one did so in Q&A.  And afterward, the comments from the many people who lingered to discuss consisted of "very interesting!" (n = 3), "thanks for giving us something to think about" (n = 2), & "[really interesting observation/question relating to the data & issues]” (n = 7). (Like I said in the talk, it is essential to collect data, and not just go on introspection, when assessing the impact of science communication strategies.)

The source of the disappointment was wholly internal.  Also—but please don’t take this as reason to console me; I’m fine!—I remain convinced it was warranted.  I have proof: interrogating the feeling has enabled me to learn something.

So let me try this again . . . .

Something astonishing and important happened on  Monday.

I got the opportunity to address a room full of scientists who, by showing up (& not leaving for 2 hrs!), by listening intently, by asking thoughtful questions, by sharing relevant experiences, and by offering reasonable proposals proved that they, like me, see fixing the science communication problem as one of the most pressing and urgent tasks facing our society.

Of course, I stand by my position (subject, forever, to revision in light of new evidence) on what the source of the problem is. Also, I am happy, but hardly surprised, to learn that members of the audience didn’t at all resent my registering disagreement when I felt doing so would serve the goal of steering them—us—clear of what I genuinely believe to be false starts and dead ends.

What disappoints me is not that I felt obliged to say “no,” "I don't think so," and “not that.”

It is that I failed to come fully prepared to identify, for an audience of citizen scientists who afforded me the honor of asking for my views, what I believe they can do as scientists to help create a science communication environment in which diverse citizens can be expected to converge on the best available scientific evidence as they deliberate over how best to secure their common ends.

I said (in my last post), “the scientist’s job is to do science, not communicate it.”  I didn’t convey my meaning as clearly as I wish I had (because, you see, science communication is only a hobby for me; my job is to contribute to scientific understanding of it).

Of course, scientists “communicate” as part of their job in being scientists.  But that communication is professional; it is with other scientists. Their job is not to communicate  their science to nonexperts or members of the public.

This is a critical point to get clear on, so I will risk going on a bit. 

The mistake of thinking that doing valid science is the same as communicating the validity of valid science is what got us into the mess we are in! Communicating and doing are different; and the former is something that admits of and demands its own independent scientific investigation.

In addition, the expert use of the scientific knowledge that the study of science communication creates requires professional training and skill suited to communicating science, not doing science. Expecting the scientist to communicate the validity of her science because she has the professional skill needed to generate it is like expecting the players in a major league baseball game to do radio play-by-play at the same time, and then write up sports-page accounts for the fans who couldn’t tune in.

Yes, yes, there’s Carl Sagan; he’s the Tim McCarver of science communication. For sure be Carl Sagan or better still Richard Feynman if you possibly can be, b/c as I said, if you can help me and other curious citizens to participate in the wonder of knowing what is known to science, you will be conferring an exquisite benefit of immeasurable intrinsic value on us! Still, that won’t solve the climate change impasse either.

But neglecting to add this was my real mistake: just because what you say in or about your job as a scientist won’t dispel controversy over climate change does not mean it isn’t your duty as a citizen scientist to contribute to something only scientists are in a position to do. That something is essential not only to dispelling controversy over climate science but to addressing what caused that controversy and numerous others (nuclear power . . . HPV vaccine), and what will continue to cause us to experience even more of the same (GM foods . . . synthetic biology) if not corrected.

The cause of the science communication problem is the disjunction between the science of science communication and the practice of science and science-informed policymaking.  We must integrate them—so that we can learn as much as we can about how to communicate science, and never fail to use as much as we know about how to make what’s known to science known by those whose well-being it can serve.

Coordinated, purposeful effort by the institutional and individual members of the scientific community is necessary to achieve this integration (not sufficient; but I’ll address what others must do in part 5,922 of this series of posts). That was the message—the meaning—of the National Academy of Sciences’ “Science of Science Communication” Sackler Colloquium last spring.

Universities are where both science and the professional training of those whose skills are informed by science take place. Universities—individually and together—must organize themselves, then, to assure that they contribute to the production of knowledge and skill that our society needs here.

What does that mean? Not necessarily one thing (such as, say, a formal “science of science communication” program or whathaveyou). But any of a large number of efforts that a university can make, if it proceeds in a considered and deliberate way, to make sure that its constituent parts (its various social science graduate departments, its professional schools, its interdisciplinary centers and whatnot) predictably, systematically interact in a manner that advances the integration of the forms of knowledge that must be combined.

So make this happen:

Combine with others within your university and petition, administer, or agitate as necessary to get your institution both to understand and make its contribution to this mission in whatever way intelligent deliberation recommends.

Model it yourself by teaching—or better yet co-teaching with someone in another discipline that also should be integrated—a course called the “Science of Science Communication” that’s cross-listed in multiple relevant programs.

Infect a brilliant student or two or fifty with excitement and passion for contributing to the creation of the knowledge that we need—and do what you can to demonstrate that should they choose this path their scholarly excellence will receive the recognition it deserves (or at least won’t compromise their eligibility for tenure!).

Is that it? No other things that scientists can do? 

I’m sure there are others (to be taken up in later posts, certainly, I promise). But making their universities bear their share of the burden of contributing to the collective project of melding science and science-informed policymaking with the science of science communication is the single most important thing you can do as a scientist to solve the science communication problem.

But don’t stop doing your science, and just keep up the great work (no need to change how you talk) in that regard.

Okay. Next question?  

Tuesday
Feb052013

Another installment of: "I only study science communication -- I didn't say I could do it!" 

Gave a talk yesterday at the North American Carbon Program’s 2013 meeting, “The Next Decade of Carbon Cycle Research: From Understanding to Application.”

Obviously, I would have been qualified to be on any number of panels (best fit would have been “Model-data Fusion: Integrated Data-Model Approaches to Carbon Cycle Research”), but I opted to serve on the “Communicating Our Science” one (slides here).

The highlights for me were the excellent presentations by Jeff Kiehl, an NCAR scientist who has really mastered the art of communicating complicated and controversial science to diverse audiences, and former Rep. Bob Inglis, who now heads up the Energy and Enterprise Initiative, a group that advocates using market mechanisms rather than centralized regulation to manage carbon emissions. I also learned a lot from the question/answer period, in which scientists related their experiences, insights, & concerns.

To be honest, I’m unsure that I played a constructive role at all on the panel, & I’ve been pondering this.

The theme of my talk was “the need for evidence-based science communication.”  I stressed the importance of proceeding scientifically in making use of the knowledge that the science of science communication generates. Don't use that knowledge to construct stories; use it to formulate hypotheses about what sort of communication strategy is likely to work -- and then measure the impact of that strategy, generating information that you & others can use to revise and refine our common understanding of what works and what doesn't.
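
To make “measure the impact” concrete, here is a minimal sketch of the idea: a randomized two-message experiment with a simple difference-in-proportions test. Everything in it (the outcome, the rates, the sample sizes) is a simulated illustration, not data from any real communication study:

    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    rng = np.random.default_rng(0)

    # Simulated binary outcomes (e.g., "agrees with target proposition")
    control = rng.binomial(1, 0.40, size=500)  # status quo message
    treated = rng.binomial(1, 0.46, size=500)  # candidate strategy

    # Two-proportion z-test of the difference between arms
    counts = np.array([treated.sum(), control.sum()])
    nobs = np.array([treated.size, control.size])
    stat, pval = proportions_ztest(counts, nobs)
    print(f"z = {stat:.2f}, p = {pval:.3f}")  # report the effect either way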

I'm happy w/ what I had to say about all of this, but here's why I’m not really sure it was useful:

1.  I don’t think I was telling the audience what they wanted to know. These were climate scientists, and basically they were eager to figure out how they could communicate their science more effectively.

My message was aimed, really, at a different audience: those whom I think of as “science communication practitioners.”  Like Bob Inglis, who is trying to dispel the fog of accumulated ideological resonances that, he believes, obscures from citizens who distrust government regulation the role that market mechanisms can play in reducing climate-change risks. Or Jeff Kiehl, who is trying to figure out how to remove from the science communication environment the toxic partisan meanings that disable the rational faculties citizens typically use to figure out what is known to science.  Or municipal officials and others who are trying to enable parties in stakeholder deliberations on adaptation in Florida and elsewhere to make collective decisions informed by the best available science.

2.  Indeed, I think I told the audience a number of things its members actually didn’t want to hear. One was that it’s almost certainly a mistake to think that how scientists themselves communicate their science will have much impact on the quality of public engagement with climate science.

For the most part, ordinary members of the public don’t learn what is known to science from scientists. They learn it from interactions with lots of other nonscientists (typically, too, ones who share their values) in environments that are rich with cues that identify and certify what’s collectively known.

There’s not any meaningful cultural polarization in the U.S., for example, over pasteurization of milk. That’s not because biologists do a better job explaining their science than climate scientists have done explaining theirs. It’s because the diverse communities in which people learn who knows what about what are reliably steering their members toward the best available scientific evidence on this issue—as they are on a countless number of other ones of consequence to their lives.

Those communities aren’t doing that on climate change because opposing positions on that issue have come to be seen as badges of loyalty to opposing cultural groups. It’s possible, I think, to change that.  But the strategies that might accomplish that goal have nothing to do with the graphic representations (or words) scientists use for conveying the uncertainty associated with climate-model estimates.

I also felt impelled to disagree with the premises of various other genuinely thoughtful questions posed by the audience. E.g., that certain groups in the public are skeptical of climate change because it threatens their “interests” or lifestyle as affluent consumers of goods associated with a fossil-fuel-driven economy. In fact (I pointed out), wealth in itself doesn’t dispose people to downplay climate change risks; it magnifies the polarization of people with different values.

Maybe I was being obnoxious to point this out. But I think scientists should want their views about public understandings of science to accord with empirical evidence.

I also think it is important to remind them that if they make a claim about how the public thinks, they are making an empirical claim. They might be right or they might be wrong. But personal observation and introspection aren’t the best ways to figure that out; the sort of disciplined observation, measurement, and inference that they themselves use in their own domain are.

Shrugging one's shoulders and letting empirically unsupported or contestable claims go by unremarked amounts to accepting that a discussion of science communication will itself proceed in an unscientific way.

Finally, I felt constrained to point out that ordinary citizens who have the cultural identity most strongly associated with climate-change skepticism actually aren’t anti-science.

They love nanotechnology, e.g.

They have views about nuclear power that are more in keeping with “scientific consensus” (using the NAS reports as a benchmark) than those who have a recognizable identity or style associated with climate change concern.

If you want to break the ice, so to speak, in initiating a conversation with one of them about climate science, you might casually toss out that the National Academy of Sciences and the Royal Society have both called for more research on geoengineering. “You don’t say,” he’s likely to respond.

Now why’d I do this? My sense is that the experience with cultural conflict over climate change has given a lot of scientists the view that people are culturally divided about them.  That’s an incorrect view—a non-evidence-based one (more on that soon, when I write up my synthesis of Session 3 of the Science of Science Communication course). 

It’s also a misunderstanding that I’m worried could easily breed a real division between scientists and the public if not corrected. Hostility tends to be reciprocated. 

It's also sad for people who are doing such exciting and worthwhile work to labor under the false impression that they aren't appreciated (revered, in fact).

3.  Finally,  I think I also created the impression that what I was saying was in tension with the great advice they were getting from the one panelist most directly addressing their central interest.

I’d say Jeff Kiehl was addressing the question that members of the audience most wanted to get the answer to: how should a climate scientist communicate with the public in order to promote comprehension and open-minded engagement with climate science?

Jeff talked about the importance of affect in how people form perceptions of risk.  The work of Paul Slovic, on whom Jeff was relying, 100% bears him out.

In my talk, I was critical of the claim that the affect-poor quality of climate risks relative, say, to terrorism risks explains why the public isn’t as concerned about climate change as climate scientists think it should be. 

That’s a plausible conjecture; but I think it isn’t supported by the best evidence. If it were true, then people would generally be apathetic about climate change. They aren’t; they are polarized.

It’s true that affective evaluations of risk sources mediate people’s perceptions of risk. But those affective responses are the ones that their cultural worldviews attach to those risk sources.  Super scientist of science communication Ellen Peters has done a kick-ass study on this!

What’s more, as I pointed out in my talk, people who rely more on “System 2” reasoning (“slow, deliberate, dispassionate”) are more polarized than those who rely predominantly on affect-driven System 1.

But this is a point, again, addressed to communication professionals: the source of public controversy on climate change is the antagonistic cultural meanings that have become attached to it, not a deficit in public rationality; dispelling the conflict requires dissipating those meanings—not identifying some magic-bullet “affective image.”

What Kiehl had to say was the right point to make to a scientist who is going to talk to ordinary people.  If that scientist doesn’t know (and she might well not!) that ordinary members of the public tend to engage scientific information affectively, she will likely come off as obtuse!

What’s more, nothing in what I had to say about the limited consequence of what scientists say for public controversy over climate change implies that scientists shouldn’t be explaining their science to ordinary people, and doing so in the most comprehensible, and engaging way possible.

Lots of ordinary people want to know what the scientists do. In the Liberal Republic of Science, they have a right to have that appetite—that curiosity—satisfied!

For the most part, performing this critical function falls on the science journalist, whose professional craft is to enable ordinary members of the public to participate in the thrill and wonder of knowing what is known to science.

Secondary school science teachers, too: they inculcate exactly that wonder and curiosity, and wilily slip scientific habits of mind in under the cover of enchantment!

The scientist’s job is to do science, not communicate it.

But any one of them who out of public spiritedness contributes to the good of making it possible for curious people to share in the knowledge of what she knows is a virtuous citizen.

Regardless of whether what she's doing when she communicates with the public contributes to dispelling conflict over climate change.

Friday
Feb012013

Cultural cognition & cat-risk perceptions: Who sees what & why?

So like billions of others, I fixated on this news report yesterday:

Obvious fake! These are professional-model animals posing for a staged picture. Shame on you, NYT!

For all the adorable images of cats that play the piano, flush the toilet, mew melodiously and find their way back home over hundreds of miles, scientists have identified a shocking new truth: cats are far deadlier than anyone realized.

In a report that scaled up local surveys and pilot studies to national dimensions, scientists from the Smithsonian Conservation Biology Institute and the Fish and Wildlife Service estimated that domestic cats in the United States — both the pet Fluffies that spend part of the day outdoors and the unnamed strays and ferals that never leave it — kill a median of 2.4 billion birds and 12.3 billion mammals a year, most of them native mammals like shrews, chipmunks and voles rather than introduced pests like the Norway rat.

The estimated kill rates are two to four times higher than mortality figures previously bandied about, and position the domestic cat as one of the single greatest human-linked threats to wildlife in the nation. More birds and mammals die at the mouths of cats, the report said, than from automobile strikes, pesticides and poisons, collisions with skyscrapers and windmills and other so-called anthropogenic causes.

My instant reaction (on G+) was: bull shit!

My confidence that I knew all the facts here -- and that the study, published in Nature Communications, was complete trash and almost surely conducted by researchers in the pocket of the bird-feed industry -- was based on my recollection of some research I’d done on this issue a few yrs ago (I’m sure in response to a rant against cats and bird “genocide” etc.). I recalled that there was "scientific consensus" that domestic cats have no net impact on wildlife populations in the communities that people actually inhabit (yes, if you put them on an island in the middle of the Pacific Ocean, they'll wipe out an indigenous species or two or twelve).  But I figured (after posting, of course) that I should read up and see if there was any more recent research.

What I found, unsurprisingly, is either there is no scientific consensus on the net impact of cats on wildlife populations or there is no possibility any reasonable and intelligent nonexpert could confidently discern what that consensus is through the fog of cultural conflict!


This is definitely a job for the science of science communication!

So what I’d like is some help in forming hypotheses.  E.g.,

1.  What are the most likely mechanisms that explain variance in who perceives what and why about the impact of cats on wildlife populations? Obviously, I suspect motivated reasoning: people (myself included, it appears!) are conforming their perceptions of the evidence (what they read in newspapers or in journals; what they “see with their own eyes,” etc.) to some goal or interest or value extrinsic to forming an accurate judgment. But what are the other plausible mechanisms?  Might people be forming perceptions based on exogenous “biased sampling”—systematically uneven exposure to opposing forms of information arising from some influence that doesn't itself originate in any conscious or unconscious motivation to form or preserve a particular belief (e.g., whether they live in the city or the country)? Something else? What sorts of tests would yield evidence that helps to figure out the relative likelihood of the competing explanations? (I sketch one possible test below, after these questions.)

2.  Assuming motivated reasoning explains the dissensus here, is the motivating influence the dispositions that inform the cultural cognition framework? How might perceptions of the net impact of cats on wildlife populations be distributed across the hierarchy-egalitarian and individualist-communitarian worldview dimensions?  Why would they be distributed that way?

3.  Another way to put the last set of questions: Is there likely to be any relationship between who sees what and why about the impact of cats on wildlife populations and perceptions of climate change risks? Of gun risks? Of whether childhood vaccinations cause autism? Of whether Ray Lewis consumed HGH-laced deer-antler residue?

4.  If the explanation is motivated reasoning of a sort not founded on the dispositions that inform the cultural cognition framework, then what are the motivating dispositions? How would one describe those dispositions, conceptually? How would one measure them (i.e., what would the observable indicators be)?
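
As promised under question 1, here is one hedged sketch of a test that could help pull the mechanisms apart. If “biased sampling” is doing the work, an exposure proxy like urban vs. rural residence should predict risk perceptions even after conditioning on worldviews; if motivated reasoning is, the worldview terms (and, say, their interaction with cat ownership) should dominate. All variable names are hypothetical:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical survey extract: cat_risk (perceived wildlife impact),
    # hierarchy & individualism (cultural worldview scales),
    # rural (0/1 residence proxy), cat_owner (0/1)
    df = pd.read_csv("cat_risk_survey.csv")

    model = smf.ols(
        "cat_risk ~ hierarchy + individualism + rural"
        " + cat_owner + hierarchy:cat_owner",
        data=df,
    ).fit()
    print(model.summary())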

Well? Conjectures, please -- on these or any other interesting questions.

By the way, if you'd like to see a decent literature review, try this:

Barbara Fougere, Cats and wildlife in the urban environment.

 

Thursday
Jan312013

Groping the political economy elephant ... 

I was going to write an indignant post about pollution of the science communication environment by pseudo-scientists who obviously have an unreasoning cultural bias against cats, but then I read a reflective comment on my recent post on “what to advise climate science communicators."

The comment comes from a thoughtful guy named Gene, who says: 

Yes, that’s the “political economy” elephant. You are right, Gene, we can't ignore it.

But let me tell you what I feel as I grope it. For sure, it's the same animal, but the texture and shape seem a bit different from what I think you are sensing. I invite others to have a go and describe what the beast feels like to them.

1.  Science communication's "political economy problem": big but how big?  

Basically, even if we had a perfect scientific understanding of how ordinary citizens make sense of scientific information – including the resources in the “science communication environment” that must be protected to assure that people are able reliably to use their rational faculties for discerning what’s known to science—there’d still be various groups and constituencies (of diverse cultural identities, and across a wide range of issues) with a stake in confusing people.

Indeed, they’d certainly know just as much about the science of science communication as those who want to use it to enhance enlightened self-government.  The science of science communication is nonproprietary, a product of the free and open exchange that is the driving engine of scientific discovery. So the bad guys can help themselves to it (they can also try to gain an edge by doing their own research, which they of course won't share, but proprietary-knowledge producers are so dimwitted compared to open ones that we can safely ignore that detail).

Accordingly, if there is no way to constrain these actors from polluting the science communication environment, then all the knowledge associated with the new science of science communication would be of “academic interest” only.

This is a big big problem. But I think there are a few mistakes that people tend to make that can exaggerate their perception of its magnitude, and thus risk either paralyzing or simply misdirecting those in a position to try to deal with this difficulty.

2. People overestimate the significance of misinformation.

It’s true that groups seeking deliberately to misrepresent scientific evidence contribute to disputes like the ones over climate change, nuclear power, the HPV vaccine, etc.

But the "science miscommunicators" are actually not the cause of the problem; they are a symptom of it.

The cause is a science communication environment polluted by the entanglement of risks and policy-relevant facts with toxic partisan meanings.

In that environment, ordinary people, through dynamics of cultural cognition, will aggressively misinform themselves. Even when given accurate information, they will construe it in biased ways, and thus become even more polarized.

In that environment, it will indeed be very feasible and very profitable to supply people with misinformation, because people will eagerly seek out and latch onto anything that serves their interest in maintaining identity-protective beliefs.  Satisfying this demand for misinformation will certainly make things even worse.

But the problem started earlier: when the issue in question became charged with antagonistic cultural meanings.

3. People underestimate the contribution that accident and misadventure make to polluting the science communication environment & hence the degree to which it can be avoided by a “scicom environmental protection” policy.

If they know what they are doing, the groups who recognize that they can profit from public conflict and confusion over science are going to see misleading people on facts as secondary in importance to manufacturing and disseminating cues that incline people (unconsciously, in most instances) to see particular issues—like genetically modified foods, say—as ones that pit opposing cultural groups against each other. If they can get that impression to take hold, then they can be sure that the dissemination of valid information will never really be effective in countering misinformation.  

But it is also easy to overestimate the contribution that this sort of strategic behavior makes to polluting the science communication environment.  Other factors that can be very very consequential fall into the categories of accident and misadventure.  There was plenty of accident and misadventure on climate change, including forms of communication by climate change advocates that reinforced the public impression that the issue was a cultural “us vs. them” dispute.

Accident and misadventure both contaminate the science communication environment, and make it easier for strategically minded polluters to succeed thereafter.

But we can avoid accidents and misadventures by becoming smart, and by behaving intelligently. That’s what the science of science communication is all about.

Want an example? Check out the HPV vaccine risk case study from my Science of Science Communication course.

4. Taking the “bad political economy” as given foolishly ignores opportunities to create offsetting “good political economy” forces that can restore the quality of the science communicating environment.

This was a point I made in the original post. It’s hard enough to decontaminate a toxic science communication environment, but the prospects for doing so when one has to compete with polluters are bleaker still.

But one response is to find science communication environments that aren’t already filled with pollution—and not only concentrate efforts to communicate there, but also figure out & then do what’s necessary to keep them that way.  I’ve written already about why I believe political activity at the local-level focusing on adaptation makes sense for these reasons.

But another reason it makes sense for science communicators to try to play a constructive role in local adaptation is that the deliberations going on in states like Florida, Arizona, West Virginia, Louisiana, N. & S. Carolina et al. involve a completely different alignment of interests than the national debate over reducing CO2 emissions.  Utility companies, local businesses, ordinary homeowners, municipal actors—all know they have a common stake in making their communities as resilient as they can be.  

What to do—that’s not something they will all agree on, of course. There are different possibilities, each with its own constellation of costs and benefits, the distribution of which also varies.

But all of these actors do want the scientific facts, and they do want their representatives—including their municipal leaders, their state government officials, and their congressional delegations—to get them the resources they need to take smart, cost-effective action based on that scientific evidence.  

This conversation is super important. 

It’s super important not only because it affects the well-being of these communities (which climate scientists believe are likely to face significant climate-impact risks for decades to come no matter what the U.S. or any other nation does to reduce CO2 emissions).

It's also super important because the organized political activity it involves has the potential to produce new, highly influential, intensely interested, and well-organized political constituencies whose stake in sober, informed engagement with evidence can help to counteract the influence of other constituencies (whatever side of the debate they might be on) who have a stake in confusing and distracting reflective citizens.

5. In the Liberal Republic of Science, science journalists will also contribute to containing the elephant through perfection of craft norms that censure members of their profession who aid and abet scicom environment polluters.

Check out what they did on GM Foods during the California Prop. 37 debate.  They are modeling what many other actors—from universities to foundations to scientific associations to government institutions—need to do to organize themselves in a way that takes seriously the obligation they have to protect the quality of science communication environment.

6. But all the same, the political economy problem is a huge one for the quality of the science communication environment; the “New Political Science” for the Liberal Republic of Science desperately needs some intelligence here.

But look, notwithstanding all of this, the elephant really is there, and Gene is right that we can’t ignore it.  That elephant, again, is the constraint that political economy forces will always exert on the enlightened use of the knowledge associated with the science of science communication.

The only way to tame that elephant . . . actually, this has become a bad metaphor; elephants are really nice animals. Let’s try again:

The only way to inoculate the body politic of the Liberal Republic of Science against the virus that these foreseeable political economy dynamics represent is with applied intelligence. 

The science of science communication is the new political science for an age in which democracy faces a challenge that is itself quite new: to protect at one and the same time the interest its citizens have in using the best available scientific knowledge to advance their common good and the right they are guaranteed to meaningfully govern themselves.

That science is going to require perfection of our understanding of how the political economy of democratic states influences science communication every bit as much as it will require us to perfect our understanding of the social psychology of transmitting scientific knowledge.

Wednesday
Jan302013

Respond to commentary day

I've allotted my daily blogging time to reading the many interesting comments addressing yesterday's "What to advise communicators of climate science?" post, and responding to some.  Nothing I could say would be as insightful as those anyway! So read them, and please add your own views.

Tuesday
Jan292013

What would I advise climate science communicators?

This is what I was asked by a thoughtful person who is assisting climate-science communicators to develop strategies for helping the public to recognize the best available evidence--so that those citizens can themselves make meaningful decisions about what policy responses best fit their values.  I thought others might benefit from seeing my responses, and from seeing alternative or supplementary ones that the billions of thoughtful people who read this blog religiously (most, I'm told, before they even get out of bed every day) might contribute. 

So below are the person's questions (more or less) and my responses, and I welcome others to offer their own reactions.

1. What is the most important influence or condition affecting the efficacy of science communication relating to climate change?

In my view, “the quality of the science communication environment” is the single most important factor determining how readily ordinary people will recognize the best available evidence on climate change and what its implications are for policy. That’s the most important factor determining how readily they will recognize the best available scientific evidence relevant to all manner of decisions they make in their capacity as consumers, parents, citizens—you name it.

People are remarkably good at figuring out who knows what about what. That is the special rational capacity that makes it possible for them to make reliable use of so much more scientific knowledge than they could realistically be expected to understand in a technical sense.

The “science communication environment” consists of all the normal, and normally reliable, signs and processes that people use to figure out what is known to science. Most of these signs and processes are bound up with normal interactions inside communities whose members share basic outlooks on life. There are lots of different communities of that sort in our society, but usually they all steer their respective members toward what science knows.

But when positions on a fact that admits of scientific investigation (“is the earth heating up?”; “does the HPV vaccine promote unsafe sex among teenage girls?”) become entangled with the values and outlooks of diverse communities—and become, in effect, symbols of one’s membership in and loyalty to one or another group—then people in those groups will end up in states of persistent disagreement and confusion. These sorts of entanglements (and the influences that cause them) are in effect a form of pollution in the science communication environment, one that disables people from reliably discerning what is known to science.

The science communication environment is filled with these sorts of toxins on climate change. We need to use our intelligence to figure out how to clean our science communication environment up.

For more on these themes:

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

2. If you had three pieces of advice for those who are interested in promoting more constructive engagement with climate change science, what would they be?

A. Information about climate change should be communicated to people in the setting that is most conducive to their open-minded and engaged assessment of it.

How readily and open-mindedly people will engage scientific information depends very decisively on context. A person who hears about the HPV vaccine when she sees Michele Bachmann or Ellen Goodman screaming about it on Fox or MSNBC will engage it as someone who has a political identity and is trying to figure out which position “matches” it; that same person, when she gets the information from her daughter’s pediatrician, will engage it as a parent, whose child’s welfare is the most important thing in the world to her, and who will earnestly try to figure out what those who are experts on health have to say. Most of the contexts in which people are thinking about climate change today are like the first of these two. Find ones that are more like the second. They exist!

B. Science communication should be evidence-based “all the way down.” 

The number of communication strategies that plausibly might work far exceeds the number that actually will.  So don’t just guess or introspect, & don't listen to story-tellers who weave social science mechanisms into ad hoc (and usually uselessly general) "how to" instructions!

Start with existing evidence (including empirical studies) to identify the mechanisms of communication that there is reason to believe are of consequence in the setting in which you are communicating.

But don’t guess on the basis of those, either, about what to do; treat insights about how to harness those mechanisms in concrete contexts as hypotheses that themselves admit of, and demand, testing designed to help corroborate their likely effectiveness and to calibrate them.

Finally, observe, measure, and report the actual effect of the strategies you use. Think how much benefit you would have gotten, in trying to decide what to do now, if you had had access to meaningful data relating to the impact (effective or not) of all the things people have already tried in the area of climate science communication. Think what a shame it would be if you failed to collect and make available to others who will be in your situation usable information about the effects of your efforts.

Aiding and abetting entropy is a crime in the Liberal Republic of Science!

C. Don’t either ignore or take as a given the current political economy surrounding climate change; instead, engage people in ways that will improve it.

Public opinion does not by itself determine what policies are adopted in a democratic system. If “public approval” were all that mattered, we’d have adopted gun control laws in the 1970s stricter than the ones President Obama is now proposing; we’d have a muscular regime of campaign finance regulation; and we wouldn’t have subsidies for agriculture and oil producers, or tax loopholes that enable Fortune 500 companies to pay (literally) zero income tax.

 The “political economy climate” is as complex as the natural climate, and public opinion is only one (small) factor. So if you make “increasing public support” your sole goal, you are making a big mistake.

You also are likely making a mistake if you take as a given the existing political economy dynamics that constrain governmental responsiveness to evidence and simply try to amass some huge counterforce (grounded in public opinion or otherwise) to overcome them. That’s a mistake, in my view, because there are things that can be done to engage people in a way that will make the political economy forces climate-change science communicators have to negotiate more favorable to considered forms of policymaking (whatever they might be).

Where to engage the public, how, and about what in order to improve the political economy surrounding climate change are all matters of debate, of course. So you should consult all the evidence, and all the people who have evidence-informed views, and make the best judgment possible. And anyone who doesn’t tell you that this is the thing to do is someone whose understanding of what needs to be done should be seriously questioned.

Monday
Jan282013

Measuring "Ordinary Science Intelligence" (Science of Science Communication Course, Session 2)

This semester I'm teaching a course entitled the Science of Science Communication. I've posted general information on the course and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the first such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general. 

In Session 2 (i.e., our 2nd class meeting) we started the topic of “science literacy and public attitudes.” We (more or less) got through “science literacy”; “public attitudes” will be our focus in Session 3.

As I conceptualize it, this topic is in the nature of foundation laying. The aim of the course is to form an understanding of the dynamics of science communication distinctive of a variety of discrete domains. In every one of them, however, effective communication will presumably need to be informed by what people know about science, how they come to know it, and what value they attach to science’s distinctive way of knowing. So we start with those.

By way of synthesis of the readings and the “live course” (as opposed not to “dead” but “on line”) discussion of them, I will address these points: (1) measuring “ordinary science intelligence”—what & why; (2) “ordinary science intelligence” & civic competence; (3) “ordinary science intelligence” & evolution; and (4) “ordinary science intelligence” as an intrinsic good.

1. “Ordinary science intelligence” (OSI): what is being measured & why?

There are many strategies that could be, and are, used to measure what people know about science and whether their reasoning conforms to scientific modes of attaining knowledge. To my mind at least, “science literacy” seems to conjure up a picture of only one such strategy—more or less an inventory check against a stock of specified items of factual and conceptual information. To avoid permitting terminology to short circuit reflection about what the best measurement strategy is, I am going to talk instead of ways of measuring ordinary science intelligence (“OSI”), which I will use to signify a nonexpert competence in, and facility with, scientific knowledge.

I anticipate that a thoughtful person (like you; why else would you have read even this much of a post on a topic like this?) will find this formulation question-begging. A “nonexpert competence in, and facility with, scientific knowledge”? What do you mean by that?

Exactly. The question-begging nature of it is another thing I like about OSI. The picture that “science literacy” conjures up not only tends to crowd out consideration of alternative strategies of measurement; it also risks stifling reflection on what it is that we want to measure and why. If we just start off assuming that we are supposed to be taking an inventory, then it seems natural to focus on being sure we start with a complete list of essential facts and methods.  But if we do that without really having formed a clear understanding of what we are measuring and why, then we’ll have no confident basis for evaluating the quality of such a list—because in fact we’ll have no confident basis for believing that any list of essential items can validly measure what we are interested in.

If you are asking “what in the world do you mean by ordinary science intelligence?” then you are in fact putting first things first. Am I--are we--trying to figure out whether someone will engage scientific knowledge in a way that assures the decisions she makes about her personal welfare will be informed by the best available evidence? Or that she’ll be able competently to perform various professional tasks (designing computer software, practicing medicine or law, etc.)? Or maybe to perform civic ones—such as voting in democratic elections? If so, what sort of science intelligence do each of those things really require? What’s the evidence for believing that? And what sort of evidence can we use to be sure that the disposition being measured really is the one we think is necessary?

If those issues are not first resolved, then constructing and assessing measures of ordinary science intelligence will be aimless and unmotivated. Such measures will also, in these circumstances, be vulnerable to entanglement in unspecified normative objectives that really ought to be made explicit, so that their merits and their relationship to science intelligence can be reflectively addressed.

2. Ordinary science intelligence and civic competence

Jon Miller has done the most outstanding work in this area, so we used his self-proclaimed “what and why” to help shape our assessment of alternative measures of OSI.  Miller’s interest is civic competence. The “number and importance of public policy issues involving science or technology,” he forecasts, “will increase, and increase markedly” in coming decades as society confronts the “biotechnology revolution,” the “transition from fossil-based energy systems to renewable energy sources,” and the “continuing deterioration of the Earth’s environment.” The “long-term health of democracy,” he maintains, thus depends on “the proportion of citizens who are sufficiently scientifically literate to participate in the resolution of” such issues.

We appraised two strategies for measuring OSI with regard to this objective. One was Miller’s “civic science literacy” measure. In the style of an inventory, Miller’s measure consists of two scales, the first consisting largely of key fact items (“Antibiotics kill viruses as well as bacteria [true-false]”; “Does the Earth go around the Sun, or the Sun go around the Earth?”), and the second aimed at recognition of signature scientific methods, such as controlled experimentation (he treats the two as separate dimensions, but they are strongly correlated: r = 0.86). Miller’s fact items form the core of the National Science Foundation’s “Science Indicators,” a measure of “science literacy” that is standard among scholars in this field. Based on rough-and-ready cutoffs, Miller estimates that only 12% of U.S. citizens qualify as fully “scientifically literate” and that 63% are “scientifically illiterate”; Europeans do even worse (5% and 73%, respectively).

The second strategy for measuring OSI evaluates what might be called “scientific habits of mind.” The reason to call it that is that it draws inspiration from John Dewey, who famously opposed a style of science education that consists in the “accumulation of ready-made material,” in the form of canonical facts and standard “physical manipulations.” In its place, he proposed a conception of science education that imparts “a mode of intelligent practice, an habitual disposition of mind” that conforms to science’s distinctive understanding of the “ways by which anything is entitled to be called knowledge.”

There is no standard test (as far as I know!) for measuring this disposition. But there are various “reflective reasoning” measures--the "Cognitive Reflection Test" (Frederick), "Numeracy" (Lipkus, Peters), "Actively Open-Minded Thinking" (Baron, & Stanovich & West), "Lawson's Classroom Test of Scientific Reasoning"--that are understood to assess how readily people credit, and how reliably they make active use of, the styles of empirical observation, measurement, and inference (deductive and inductive) that are viewed as scientifically valid.

The measures used for "science literacy" and "scientific habits of mind" strike me as obviously useful for many things. But it’s not obvious to me that either of them is especially suited for assessing civic competence. 

Miller’s superb work is focused on internally validating the “civic scientific literacy” measures, not externally validating them. Neither he nor others (as far as I know; anyone who knows otherwise, please speak up!) has collected any data to determine whether his “cut offs” for classifying people as “literate” or “illiterate” predict how well or poorly they’ll function in any tasks that relate to democratic citizenship, much less whether they do so better than more familiar benchmarks of educational attainment (high-school diplomas and college degrees, standardized test scores, etc.). Here's a nice project for someone to carry out, then.

The various “reflective reasoning” measures that one might view as candidates for Dewey’s “habit of mind” conception of OSI have all been thoroughly vetted—but only as predictors of educational aptitude and reasoning quality generally. They too have not been studied in any systematic way as markers of civic aptitude.

Indeed, there is at least one study that suggests that neither Miller’s “civic science literacy” measures nor the ones associated with the “scientific habits of mind” conception of OSI predict quality of civic engagement with what is arguably the most important science-informed policy issue now confronting our democracy: climate change. Performed by CCP, the study in question examined science comprehension and climate-change risk perceptions. It found that public conflict over the risks posed by climate change does not abate as science literacy, measured with the “NSF science indicator” items at the core of Miller’s “civic science literacy” index, and reflective reasoning skill, as measured with numeracy, increase. On the contrary, such controversy intensifies: cultural polarization among those with the highest OSI measured in this way is significantly greater than polarization among those with the lowest OSI.
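
A hedged sketch of the kind of model that finding implies: if polarization grows with science comprehension, a worldview-by-comprehension interaction term should be significant, with the sign indicating that higher OSI widens the cultural gap. The variable names here are illustrative assumptions, not those of the CCP study:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical extract: climate_risk (perceived risk), worldview
    # (e.g., a hierarchy-individualism score), sci_comp (science
    # literacy plus numeracy scale)
    df = pd.read_csv("climate_survey.csv")

    model = smf.ols("climate_risk ~ worldview * sci_comp", data=df).fit()
    # The interaction term is the polarization test
    print(model.params[["worldview", "sci_comp", "worldview:sci_comp"]])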

We also discussed one more conception of OSI: call it the “science recognition faculty.”  If they want to live good lives—or even just live—people, including scientists, must accept as known by science many more things than they can possibly comprehend in a meaningful way. It follows that their well-being will depend on their capacity to recognize what is known to science independently of being able to verify that, or understand how, science knows what it does. “Science recognition faculty” refers to that capacity.

There are no measures of it, as far as I know. It would be fun to develop some.

But my guess is that it’s unlikely any generalized deficiency in citizens’ science recognition faculty explains political conflicts over climate change, or other policy issues that turn on science, either.  The reason is that most people most of the time recognize without difficulty what is known to science on billions & billions of things of consequence to their lives (e.g., “who knows how to make me better if I’m ill?”; “will flying on an airplane get me where I want to go? How about following a GPS?”; “should parents be required to get their children vaccinated against polio?”).

There is, then, something peculiar about the class of conflicts over policy-relevant science that interferes with people’s science recognition faculty. We should figure out what that thing is & protect ourselves—protect our science communication environment—from it. 

Or at least that is how it appears to me now, based on my assessment of the best available evidence.

3. Ordinary science intelligence and “belief” in evolution

Perhaps one thinks that what should be measured is a disposition to assent to the best scientific understanding of evolution—i.e., the modern synthesis, which consists in the mechanisms of genetic variance, random mutation, and natural selection. If so, then none of the measures of OSI seems to be getting at the right thing either.

The NSF’s “science indicators” battery includes the question “Human beings, as we know them today, developed from earlier species of animals (true or false).” Typically, around 50% select the correct answer (“true,” for those of you playing along at home).

In 2010, a huge controversy erupted when the NSF decided to remove this question and another—“The universe began with a huge explosion”; only around 40% tend to answer this question correctly—from its science literacy scale.  The decision was derided as a “political” cave-in to the “religious right.”

But in fact, whether to include the “evolution” and “big bang” questions in the NSF scale depends on an important conceptual and normative judgment. One can design an OSI scale to be either an “essential knowledge” quiz or a valid and reliable measure of some unobservable disposition or aptitude. In the former case, all one cares about is including the right questions and determining how many a respondent answered correctly. But in the latter case, correct responses must be highly correlated across the various items; items the responses to which don’t cohere with one another necessarily aren’t measuring the same thing.  If one wants to test hypotheses about how OSI affects individuals’ decisions—whether as citizens, consumers, parents or whathaveyou—then a scale that is merely a quiz and not a valid and reliable latent-variable measure will be of no use: if responses to the items are essentially uncorrelated with one another, then necessarily the aggregate “score” will be only randomly connected to anything else respondents do or say.  It is to avoid this result that scholars like Jon Miller have (very appropriately, and with tremendous skill) focused attention on the psychometric properties of the scales formed by varying combinations of science-knowledge items.

Well, if one is trying to form a valid and reliable measure of OSI, the “evolution” and “big bang” questions just don’t belong in the NSF scale. The NSF keeps track of how the top-tier of test-takers—those who score in the top 25% overall—have done on each question. Those top-scoring test takers have answered correctly 97% of the time when responding to “All radioactivity is man-made (true-false)”; 92% of the time when assessing whether “Electrons are smaller than atoms (true-false)”; 90% of the time when assessing whether “Lasers work by focusing sound waves (true-false)”; and 98% of the time when assessing whether “The center of the Earth is very hot (true-false).” But on “evolution” and “big bang,” those same respondents have selected the correct response only 55% and 62% of the time. 

That discrepancy is strong evidence that the latter two questions simply aren’t measuring the same thing as the others. Indeed, scholars who have used the appropriate psychometric tools have concluded that “evolution” and “big bang” are measuring respondents’ religiosity. Moreover, insofar as the respondents who tend to answer the remaining items correctly a very high percentage of the time are highly divided on “evolution” and “big bang,” it can be inferred that OSI, as measured by the remaining items in the NSF scale, just doesn’t predict a disposition to accept the standard scientific accounts of the formation of the universe and the history of life on Earth.
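The coherence check at issue is easy to illustrate. Here is a minimal sketch, assuming a hypothetical item-response file with one 0/1 (incorrect/correct) column per NSF indicator item: compute each item’s correlation with the sum of the remaining items, plus a simple Cronbach’s alpha. On the pattern just described, items like “evolution” and “big bang” should stand out with conspicuously low item-rest correlations:

    import pandas as pd

    # Hypothetical file: one 0/1 column per NSF indicator item
    items = pd.read_csv("nsf_items.csv")

    def cronbach_alpha(df: pd.DataFrame) -> float:
        # Classical alpha: k/(k-1) * (1 - sum of item variances / total variance)
        k = df.shape[1]
        return (k / (k - 1)) * (
            1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1)
        )

    total = items.sum(axis=1)
    for col in items.columns:
        rest = total - items[col]  # score on all *other* items
        print(f"{col}: item-rest r = {items[col].corr(rest):.2f}")
    print(f"alpha = {cronbach_alpha(items):.2f}")
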

The same is true, apparently, for valid measures of the “habits of mind” conception of OSI.  In general, there is no correlation between “believing” in the best scientific account of evolution and understanding it at even a very basic level. That is, those who say they “believe” in evolution are no more likely than those who say they believe in divine “creation” to know what genetic variance, random mutation, and natural selection mean and how they work within the modern-synthesis framework.  How well one scores on a “scientific habits of mind” OSI scale—one that measures one’s disposition to form logical and valid inferences on the basis of observation and measurement—does predict both one’s understanding of the modern synthesis and one’s aptitude for learning it when it is presented in a science course.  But even when they use their highly developed “scientific habits of mind” disposition to gain a correct comprehension of evolution, individuals who commence such a course “believing” in divine creation don’t “change their minds” or abandon their belief.

It is commonplace to cite the relatively high percentage of Americans who say they believe in divine creation as evidence of “low” science literacy or poor science education in the U.S. But ironically, this criticism reflects a poor scientific understanding of the relationship between various measures of science comprehension and beliefs in evolution.

4. Ordinary science intelligence as an intrinsic good

Does all this mean that OSI—or at least the “science literacy” and “habits of mind” strategies for measuring it—is unimportant? It could conceivably mean that only if one thought that the sole point of promoting OSI was to make citizens form a particular view on issues like climate change, or to make them assent to, and not merely comprehend, scientific propositions that offend their religious convictions.

To me, it is inconceivable that the value of promoting the capacity to comprehend and participate in scientific knowledge and thought depends on the contribution doing so makes to those goals. It is far from inconceivable that enhancing the public’s OSI (as defensibly defined and appropriately measured) would improve individual and collective decisionmaking. But I don’t accept that OSI must attain that or any other goal to be worthy of being promoted. It is intrinsically valuable. Its propagation in citizens of a liberal society is self-justifying.

This is the position, I think, that actually motivated Dewey to articulate his “habits of mind” conception of OSI.  True, he dramatically asserted that the “future of our civilization depends upon the widening spread and deepening hold of the scientific habit of mind,” a claim that could (particularly in light of Dewey's admitted attention to the role of liberal education in democracy) reasonably be taken as evidence that he believed this disposition to be instrumental to civic competence. 

But there’s a better reading, I think. “Scientific method,” Dewey wrote, “is not just a method which it has been found profitable to pursue in this or that abstruse subject for purely technical reasons.”

It represents the only method of thinking that has proved fruitful in any subject—that is what we mean when we call it scientific. It is not a peculiar development of thinking for highly specialized ends; it is thinking so far as thought has become conscious of its proper ends and of the equipment indispensable for success in their pursuit.

The advent of science’s way of knowing marks the perfection of a human capacity of singular value. The habits of mind integral to science enable a person “[a]ctively to participate in the making of knowledge,” which Dewey identifies as “the highest prerogative of man and the only warrant of his freedom.”

What in Dewey’s view makes the propagation of scientific habits of mind essential to the “future of our civilization,” then, is that only a life informed by this disposition counts as one “governed by intelligence.” “Mankind,” he writes, “so far has been ruled by things and by words, not by thought, for till the last few moments of history, humanity has not been in possession of the conditions of secure and effective thinking.” “And if this consummation” of human rationality and freedom is to be “achieved, the transformation must occur through education, by bringing home to men’s habitual inclination and attitude the significance of genuine knowledge and the full import of the conditions requisite for its attainment.”

To believe that we must learn to measure the attainment of scientific habits of mind in order to perfect our ability to propagate them honors Dewey’s inspiring vision.  To insist that the value of what we would then be measuring depends on the contribution that cultivating scientific habits of mind would make to resolution of particular political disputes, or to the erasure of every last sentimental vestige of the ways of knowing that science has replaced, does not.

Reading list.

 

Saturday
Jan262013

Intense battle for "I [heart] Popper/Citizen of the Liberal Republic of Science" t-shirt

No blog post today. Don't want to distract from the fierce competition to claim the prize in the first "HFC! CYPHIMU?" contest.

Or even the less (for now) fierce battle being waged for the "Cultural Cognition Lab Cat Scan Experiment" t-shirt (Angie is crushing the field, but it's not over until the fat cat yowls).

Friday
Jan252013

Does the cultural affinity of a group's members contribute to the group's collective intelligence?

Likely the 1,000's of you who have already submitted entries in the pending "HFC! CYPHIMU?" contest, the winner of which will be awarded a beautiful "I am a citizen of the Liberal Republic of Science/I ♥ Popper!" t-shirt (Jon Baron currently sits atop the leader board, btw), are bored and wishing you had something else to do.

Well how about this?

First, read this fascinating study of "c," a measure of intelligence that can be administered to a collective entity.

The study was first published in Science (2 yrs ago; fortunately, one of the authors pulled me from the jaws of entropy and brought the article to my attention only yesterday!).

The authors show that the "collective intelligence" of groups assigned to work on problem tasks admits of reliable measurement by indicators akin to the ones used to measure "individual intelligence." An influential measure of individual intelligence is called the "g factor," or simply g. Thus, the authors call their collective intelligence measure "c factor" or "c."

C is predicted in part by the average intelligence of the group's members and by the intelligence of its smartest (highest-scoring on g) member. That it would be is not so surprising, given existing work on the predictors of group decisionmaking proficiency.

The really cool thing (aside from the proof that it was possible to form a reliable and valid measure of c) was the authors' finding that other interesting individual group-member characteristics also make an important contribution to c. One of these was how many women are in the group (compare with the recent claim by female members of the Senate that part of the reason Congress is so dysfunctional is that there aren't enough female members; maybe, maybe not).

Another was the average score of the group's members on a "social sensitivity" scale. Social sensitivity here measures, in effect, how emotionally perceptive an individual is. The better group members were at "reading" others' intentions, the more cooperatively and productively they engaged one another, the researchers found. This disposition in turn raised the "collective intelligence" of the group -- that is, enabled it to solve more problems more efficiently.
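For readers who want a feel for what extracting "c" involves, here is a toy sketch (my own simulation, not the authors' code or data): score many groups on several different tasks, then check whether a single common factor accounts for performance across them.

    # Toy sketch: does one common factor explain group scores across tasks?
    import numpy as np

    rng = np.random.default_rng(1)

    n_groups, n_tasks = 200, 5
    c = rng.normal(size=n_groups)                   # latent collective intelligence
    loadings = rng.uniform(0.5, 0.9, size=n_tasks)  # each task reflects c imperfectly
    scores = np.outer(c, loadings) + rng.normal(scale=0.6, size=(n_groups, n_tasks))

    # Standardize the task scores, then take the first principal component
    # of their correlation matrix as the estimated "c factor."
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
    explained = eigvals[-1] / eigvals.sum()         # variance share of 1st factor
    c_hat = z @ eigvecs[:, -1]                      # factor score for each group

    print(f"first factor explains {explained:.0%} of task-score variance")
    print("correlation of estimated c with true c:",
          round(abs(np.corrcoef(c_hat, c)[0, 1]), 2))

The study's second step, predicting the extracted factor from group characteristics (average member g, maximum member g, proportion of women, mean social sensitivity), would then be an ordinary regression run on a factor score like c_hat.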

Not mind-blowingly surprising either, I suppose. But if you think that social science is mainly about establishing mind-blowingly counterintuitive things, you are wrong, and will believe lots of invalid studies. Social science is mainly about figuring out which competing plausible conjectures are true.

The conjectures that informed and were supported by this cool study were merely amazingly interesting, amazingly thought provoking, and likely amazingly useful to boot.

Second, now tell me what you think the connection might be between c and cultural cognition.  

As every schoolboy and -girl today knows, "cultural cognition" refers to the tendency of individuals to conform their perceptions of risk and other policy-relevant facts to ones that predominate in their cultural group. CCP studies this phenomenon, using experiments and other empirical methods to identify the mechanisms it comprises.

It is often assumed -- indeed, sometimes I myself and others studying cultural cognition say -- that cultural cognition is a "bias."

In fact, I don't believe this.  I believe instead that cultural cognition is intrinsic, even essential, to human rationality.

The most remarkable feature of human rationality, I'd say, is that individuals are able to recognize what is collectively known.  

In particular, when a society is lucky enough to recognize that science's way of knowing is the most reliable way to know things, collective knowledge can be immense. What's known collectively will inevitably outstrip what any individual member of the society can ever comprehend on his or her own -- even if that individual is a scientist!

Accordingly, as my colleague Frank Keil has emphasized, individuals can participate in collective knowledge -- something that itself is a condition of there being much of it -- only if they can figure out what's known without being able to understand it. In other words, they must become proficient at knowing who knows what.  The faculty of rational perception involved in being able to figure this out reliably is both essential and amazing.

Well, it turns out that people are simply better at exercising this rational faculty -- at reliably determining who knows what about what -- when they are in groups of people with whom they share a cultural affinity. Likely they are just better able to "read" such people -- to figure out who actually knows something & who is just bullshitting.

Likely, too, people are better at figuring out who knows what about what in these sorts of affinity groups because they are less likely to fight with one another. Conflict will interfere with their ability to exchange knowledge with one another.

Actually, there's no reason to think people can exercise the faculty of perception involved in figuring out who knows what about what only within cultural affinity groups.

On the contrary, there is evidence that culturally diverse groups will actually do better than culturally homogeneous ones if they stay at it long enough to get through an initial rough patch and develop ways of interacting that are suited for discerning who knows what within their particular group.

But in the normal run of things, people probably won't, spontaneously, want to make the effort or simply won't (without a central coordination mechanism) be able to get through the initial friction, and so they will, in the main, tend to learn who knows what about what within affinity groups. That's where cultural cognition comes from.

Generally, too, it works -- so long as the science communication environment is kept free of the sorts of contaminants that make culturally diverse groups come to see positions on particular facts -- like whether the earth is heating up or whether the HPV vaccine has health-destroying side effects -- as markers of group membership and loyalty. When that happens, the members of all cultural groups are destined to be collectively as dumb as 12 shy of a dozen, and collectively very badly off.

So now -- my question: do you suppose the cultural affinity of a group's members is a predictor of c? That is, do you suppose c will be higher in groups whose members are more culturally homogeneous?

Or do you suppose that culturally diverse groups might do better -- even without a substantial period of interaction -- if their individual members' "social sensitivity" scores are high enough to offset the lack of cultural affinity?

Wouldn't these be interesting matters to investigate? Can you think of other interesting hypotheses?

What's that? You say you won't offer your views on this unless there is the possibility of winning a prize? ... Okay. Best answer will get this wonderful "Cultural Cognition Lab" t-shirt.

Friday
Jan252013

What is the "political economy forecast" for a carbon tax? What are the benefits of such a policy for containing climate change? ("HFC! CYPHIMU?" Episode No. 1)

In the spirit of CCP’s wildly popular feature, “WSMD? JA!,” I’m introducing a new interactive game for the site called: “Hi, fellow citizen! Can you please help increase my understanding?”—or “HFC! CYPHIMU?” The format will involve posting a question or set of related questions relating to a risk or policy-relevant fact that admits of scientific inquiry & then opening the comment section to answers. The questions might be ones that simply occur to me or ones that any of the 9 billion regular subscribers to this blog are curious about. The best answer, as determined by “Lil Hal,”™ a friendly, artificially intelligent robot being groomed for participation in the Loebner Prize competition, will win a “Citizen of the Liberal Republic of Science/I ♥ Popper!” t-shirt!

I have a couple of questions that I’m simply curious about and hoping people can help me figure out the answers to.

BTW, I’m using “figuring out the answer” as a term of art.

It doesn’t literally mean figuring out the answer! I think questions to which “the answer” can be demonstrably “figured out” tend not to be so interesting as ones that we believe do have answers but that we agree turn on factors that do not admit of direct observation, forcing us to draw inferences from observable, indirect evidence. For those, we have to try to "figure out" the answer in a disciplined empirical way by (1) searching for observable pieces of evidence that we believe are more consistent with one answer than another, (2) combining that evidence with all the other evidence that we have so that we can (3) form a provisional answer (one we might well be willing to act on if necessary) that is itself (4) subject to revision in light of whatever additional evidence of this sort we might encounter.

Accordingly, any response that identifies evidence that furnishes reason for treating potential answers as more or less likely than we would otherwise regard them counts as “figuring out the answer.” Answers don’t have to be presented as definitive; indeed, if they are, that would likely be a sign that they aren’t helping to “figure out” in the indicated sense!
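This evidence-weighing process is, in effect, Bayesian updating by likelihood ratios. A minimal numerical sketch (all numbers invented purely for illustration):

    # Minimal sketch of the evidence-aggregation process described above:
    # start with prior odds on an answer, multiply by the likelihood ratio
    # of each (independent) piece of evidence, and treat the result as a
    # provisional answer, open to revision as more evidence arrives.
    def update_odds(prior_odds, likelihood_ratios):
        posterior = prior_odds
        for lr in likelihood_ratios:
            posterior *= lr  # each LR: P(evidence | answer A) / P(evidence | answer B)
        return posterior

    prior = 1.0            # start indifferent between answers A and B
    lrs = [3.0, 0.5, 2.0]  # some evidence cuts one way, some the other
    posterior = update_odds(prior, lrs)
    prob_A = posterior / (1 + posterior)
    print(f"posterior odds {posterior:.1f}:1, i.e. P(A) = {prob_A:.2f}")

Note that the middle piece of evidence in this toy example cuts against answer A; aggregating it anyway is exactly the kind of response the bonus described below rewards.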

Oh-- answers that identify multiple sources of evidence, some of which make one answer more likely and some less relative to a competing one, will be awarded "I'm not afraid to live in a complex universe!" bonus points.

Okay, here are my “questions”:

a. If one is assessing the prospects for enacting a carbon tax (or some comparable form of national legislation aimed at reducing U.S. CO2 emissions), how big a factor is public opinion in favor of “doing something to address climate change”?

b. How much of a contribution would a carbon tax—or any other U.S. policy aimed at reducing the impact of atmospheric concentrations of CO2—make to mitigating or constraining global temperature increases or adverse impacts therefrom?

Some explanation for the questions will likely help to elicit answers of the sort I am interested in:

a. If one is assessing the prospects for enacting a carbon tax (or some comparable form of national legislation aimed at reducing U.S. CO2 emissions), how big a factor is public opinion in favor of “doing something to address climate change”?

This is essentially a political economy question.

Researchers who have performed opinion surveys often present evidence that there is growing public support—and possibly even “majority” support—in the U.S. for policies that would constructively address the risks posed by climate change. This conclusion—and for this question, please accept it as correct even if you doubt the methods of these researchers—is in turn treated as support for the proposition that efforts to enact a carbon tax or similar legislation aimed at reducing carbon emissions in the U.S. are meaningfully likely to succeed.

Of course, we all know that “majority public support” does not necessarily translate in any straightforward sense into adoption of policies. If it did, the U.S. would have enacted “gun control” measures in the 1970s or 1980s much stricter than the ones President Obama is now proposing. We’d have a muscular regime of campaign-finance regulations. We wouldn’t have massive farm subsidies, and tax loopholes that enable major corporations to pay (literally) no U.S. income tax. Etc.

The “political economy climate” is complex—if not as complex as the natural one, then pretty close! Forecasts of what is likely or possible depend on the interaction of many variables, of which “public support” is only one.

So, can you please help me increase my understanding? What is the political-economy model that informs the judgment of those who do believe increased public support for “action on climate change” meaningfully increases the likelihood of a carbon tax? What are the mechanisms and practical steps that will translate this support into enactment of policy?

b. How much of a contribution would a carbon tax—or any other U.S. policy aimed at reducing the impact of atmospheric concentrations of carbon—make to mitigating or constraining global temperature increases or adverse impacts therefrom?

This, obviously, is a “climate science” question, primarily, although it might also be a political economy question.

The motivation behind the question consists of a couple of premises. One is that the U.S. is not the only contributor to atmospheric CO2; indeed, China has apparently overtaken us as the leader, and developing countries, most importantly India, will generate more and more greenhouse gases (not just CO2, but others, like Freon) as they seek to improve conditions of living for their members.

The second is scientific evidence relating to the climate impact of best-case scenarios on future atmospheric CO2 levels. Such evidence, as I understand it (from studies published in journals like Nature and the Proceedings of the National Academy of Sciences) suggests that earlier scientific projections of the contribution that CO2 reductions and ceilings can make to forestalling major, adverse impacts were too optimistic. Even if the U.S. stopped producing any CO2—even if all nations in the world did—there’d still be catastrophic effects as a result of climate change.

As an editorial in Nature put it,

The fossil fuels burned up so far have already committed the world to a serious amount of climate change, even if carbon emissions were somehow to cease overnight. And given the current economic turmoil, the wherewithal to adapt to these changes is in short supply, especially among the world's poor nations. Adaptation measures will be needed in rich and poor countries alike — but those that have grown wealthy through the past emission of carbon have a moral duty to help those now threatened by that legacy.

The latest scientific research suggests that even a complete halt to carbon pollution would not bring the world's temperatures down substantially for several centuries. If further research reveals that a prolonged period of elevated temperatures would endanger the polar ice sheets, or otherwise destabilize the Earth system, nations may have to contemplate actively removing CO2 from the atmosphere. Indeed, the United Nations Intergovernmental Panel on Climate Change is already developing scenarios for the idea that long-term safety may require sucking up carbon, and various innovators and entrepreneurs are developing technologies that might be able to accomplish that feat. At the moment, those technologies seem ruinously expensive and technically difficult. But if the very steep learning curve can be climbed, then the benefits will be great.

I’m curious, then, what is the practical understanding of how a carbon tax or any other policy to reduce CO2 emissions in the U.S. will contribute to “doing something about climate change.”

Am I incorrect to think that such steps by themselves will not contribute in any material way?

If so, is the idea that U.S. efforts to constrain emissions will spur other nations to limit their output? What is the international political economy model for that expectation?

Even if other nations do enact measures that make comparable contributions to limiting atmospheric CO2 emissions, how much of a difference will that make given, as the Nature editorial puts it, “[t]he latest scientific research suggests that even a complete halt to carbon pollution would not bring the world's temperatures down substantially for several centuries?”

Thanks to anyone who can help make me smarter on these issues!

Monday
Jan212013

A case study: the HPV vaccine disaster (Science of Science Communication Course, Session 1)

This semester I'm teaching a course entitled the Science of Science Communication. I've posted general information on the course and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the first such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general.

 

1. The HPV vaccine disaster

HPV stands for human papillomavirus. It is a sexually transmitted disease.

The infection rate is extremely high: 45% for women in their twenties, and almost certainly just as high for men, in whom the disease cannot reliably be identified by test.

The vast majority of people who get HPV experience no symptoms.

But some get genital warts.

And some get cervical cancer.

Some of them -- over 3,500 women per year in the U.S. -- die.

In 2006, the FDA approved an HPV vaccine, Gardasil, manufactured by the New Jersey pharmaceutical firm Merck. Gardasil is believed to confer immunity to 70% of the HPV strains that cause cervical cancer. The vaccine was approved only for women, because only in women had HPV been linked to a “serious disease” (cervical cancer), a condition of eligibility for the fast-track approval procedures that Merck applied for. Shortly after FDA approval, the Centers for Disease Control and Prevention recommended universal vaccination for adolescent girls and young women.

The initial public response featured intense division. The conflict centered on proposals to add the vaccine—for girls only—to the schedule of mandatory immunizations required for middle school enrollment. Conservative religious groups and other mandate opponents challenged evidence of the effectiveness of Gardasil and raised concerns about unanticipated (or undisclosed) side-effects. They also argued that vaccination would increase teen pregnancy and other STDs by investing teenage girls with a false sense of security that would lull them into engaging in unprotected, promiscuous sex. Led by women’s advocacy groups, mandate proponents dismissed these arguments as pretexts, motivated by animosity toward violation of traditional gender norms.

In 2007, Texas briefly became the first state with a mandatory vaccination requirement when Governor Perry—a conservative Republican aligned with the religious right—enacted one by executive order. When news surfaced that Perry had accepted campaign contributions from Merck (which also had hired one of Perry’s top aides to lobby him), the state legislature angrily overturned the order.

Soon thereafter, additional stories appeared disclosing the major, largely behind-the-scenes operation of the pharmaceutical company in the national campaign to enact mandatory vaccination programs. Many opinion leaders who previously had advocated the vaccine now became critics of the company, which announced that it was “suspending” its “lobbying” activity. Dozens of states rejected mandatory vaccination, which was implemented in only one, Virginia, where Merck had agreed to build a vaccine-manufacturing facility, plus the District of Columbia.

Current public opinion is characterized less by division than by deep ambivalence. Some states have enacted programs subsidizing voluntary vaccination, which in other states is covered by insurance and furnished free of cost to uninsured families by various governmental and private groups. Nevertheless, “uptake” (public health speak for vaccination rate) among adolescent girls and young women is substantially lower here (32%) than it is in nations with inferior public health systems, including ones that likewise have failed to make vaccination compulsory (e.g., Mexico, 67%, and Portugal, 81%). The vaccination rate for boys, for whom the FDA approved Gardasil in 2009, is a dismal 7%.

2. What’s the issue? (What “disaster”?)

The American public tends to have tremendous confidence in the medical profession, and is not hostile to vaccinations, mandatory or otherwise (I’ll say more about the “anti-vaccine movement” another time but for now let’s just say it is quite small). When the CDC recommended vaccination for H1N1 in December 2009, for example, polls showed that a majority of the U.S. population intended to get the vaccine, which ran out before the highest-risk members of the population—children and the elderly—were fully inoculated. In a typical flu season, uptake rates for children usually exceed 50%.

The flu, of course, is not an STD. But Hepatitis B is. The vast majority of states implemented mandatory HBV vaccination programs—without fuss, via administrative directives issued by public health professionals—after the CDC recommended universal immunization of infants in 1995. Like the HPV vaccine, the HBV vaccine involves a course of two to three injections.  National coverage for children is over 90%.

There are (it seems to me!) arguments that a sensible sexually active young adult could understandably, defensibly credit for forgoing the HPV vaccination, and that reasonable parents and reasonable citizens could credit for not having the vaccine administered to their children and mandated for others’. But the arguments are no stronger than—indeed, not at all different from—the ones that could be made against HBV vaccination. They don’t explain, then, why in the case of the HPV vaccine the public didn’t react with its business-as-usual acceptance when public health officials recommended that children and young adults be vaccinated.

What does? That question needs an answer regardless of how one feels about the HPV vaccine or the public reaction to it—indeed, in order even to know how one should feel about those matters.

3. A polluted science communication environment

The answer—or at least one that is both plausible and supported by empirical evidence—is the contamination of the “science communication environment.”  People are generally remarkably proficient at figuring out who knows what; they are experts in identifying who the experts are and reliably discerning what those with expertise counsel them to do. But that capacity—that faculty of reasoning and perception—becomes disabled (confused, unreliable) when an empirical fact that admits of scientific investigation provokes controversy among groups united by shared values and perspectives.

Most of us have witnessed this situation via casual observation; scholars who carefully looked at parents trying to figure out what to think about the HPV vaccine saw that they were in that situation. They saw, for example, the mixture of shame and confusion experienced by an individual mother who acknowledged (admitted; confessed?) in the midst of a luncheon conversation with scandalized friends (also mothers) that she had allowed her middle-school daughter to be vaccinated (“what--why? . . .”; “Well, because that’s what the doctor advised . . . .” “Then, you had better find a new doctor, dear . . . . ”).

Scholars using more stylized but more controlled methods to investigate how people form perceptions of the HPV vaccine report the same thing. In one study, researchers tested how exposure to two versions of a fictional news article affected public support for mandatory HPV vaccination. Both versions described (real) support for mandatory vaccination by public health experts. But one, in addition, adverted without elaboration to “medical and political conflict” surrounding a mandatory-vaccine proposal. The group exposed to the “controversy” version of the report was less likely to support the proposal—indeed, on the whole was inclined to oppose it—than the “no controversy” group. This effect, moreover, was as strong among subjects inclined to support mandatory vaccination policies generally as among those who weren’t.
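For concreteness, here is how one might check that a difference of that sort between the two conditions isn't just sampling noise. The cell counts below are invented purely for illustration; they are not the study's data.

    # Hypothetical two-proportion z-test: did the "controversy" framing
    # lower support for mandatory vaccination?
    from statistics import NormalDist

    def two_prop_z(support_a, n_a, support_b, n_b):
        p_a, p_b = support_a / n_a, support_b / n_b
        pooled = (support_a + support_b) / (n_a + n_b)
        se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (p_a - p_b) / se
        return z, 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

    # Made-up cells: 63% support with no mention of controversy,
    # 48% when "medical and political conflict" is mentioned.
    z, p = two_prop_z(126, 200, 96, 200)
    print(f"z = {z:.2f}, p = {p:.4f}")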

The study result admits (I admit!) of more than one plausible explanation. But one is that being advised the matter was “politically controversial” operated as a cue that generated hesitation to credit evidence of expert opinion among people otherwise disposed to use it as their guide on public health issues.

Another study done by CCP bolsters this interpretation. That one assessed how members of the public with diverse cultural outlooks evaluated information about the risks and benefits of HPV vaccination. Subjects of opposing worldviews were inclined to form opposing beliefs when evaluating information on the risks and benefits of the vaccine. Yet the single most important factor for all subjects, the study found, was the position taken by “public health experts.” Sensibly & not surprisingly, people of diverse values share the disposition to figure out what credible, knowledgeable experts are saying on things that they themselves lack the expertise to understand but that are important for the wellbeing of themselves and others.

Whether the subjects viewed experts as credible and trustworthy, however, was highly sensitive to their tacit perception of the experts’ cultural values. This didn’t actually have much impact on subjects’ risk perceptions—unless they were exposed to alignments of arguments and (culturally identifiable) experts that gave them reason to think the issue was one that pitted members of their group against another in a pattern that reinforced the subjects’ own cultural predispositions toward the HPV vaccine. That’s when the subjects became massively polarized.

That’s the situation, moreover, that people in the world saw, too. From the moment culturally diverse citizens first tuned in, the signal they were getting on the science-communication frequency of their choice was that “they say this; we, on the other hand, really know that.” 

Under these conditions, the manner in which people evaluate risk is psychologically equivalent to the one in which fans of opposing football teams form their impressions of whether the receiver who caught the last-second, hail-Mary pass was out of bounds or in. Anyone who thinks this is the right way for people to engage information of consequence to their collective well-being—or who thinks that people actually want to form their beliefs this way—is a cretin, no matter what he or she believes about the HPV vaccine.

4. An avoidable “accident”

There was nothing necessary about the HPV vaccine disaster.  The HPV vaccine took a path different from the ones travelled by the H1N1 vaccine in 2009, and by the HBV vaccine in 1995 to the present, as a result of foreseeably bad decisions, stemming from a combination of strategic behavior, gullibility, and collective incapacity.

Information about the risks and benefits of HPV vaccine came bundled with facts bearing culturally charged resonances. It was a vaccine for 11-12 year old girls to prevent contraction of a sexually transmitted disease.  There was a proposal to make the vaccine mandatory as a condition of school enrollment.  The opposing stances of iconic cultural antagonists were formed in response to (no doubt to exploit the conflictual energy of) the meanings latent in these facts—and their stances became cues for ordinary, largely apolitical individuals of diverse cultural identities.

These conditions were all an artifact of decisions Merck self-consciously made about how to pursue regulatory approval and subsequent marketing of Gardasil. It sought approval of the vaccine for girls and young women only in order to invoke “fast track” consideration by the FDA. It thereafter funded—orchestrated, in a manner that shielded its own involvement—the campaign to promote adoption of mandatory vaccination programs across the states.  To try to “counterspin” the predictable political opposition to the vaccine, it hired an inept sock puppet—“Oops!”—whose feebly scripted performance itself enriched the cultural resources available to those seeking to block the vaccine.

Had Merck not sought fast-track approval and pushed aggressively for quick adoption of mandatory vaccination programs, the FDA would have approved the vaccine for males and females just a few years later; insurance companies plus nongovernmental providers would have furnished mechanisms for universal vaccination sufficient to fill in any gaps in state mandates, which would have been enacted or not by state public health administrators largely removed from politics. Religious groups—which actually did not oppose FDA approval of the HPV vaccine but only the proposal to mandate it—wouldn’t have had much motivation or basis for opposing such a regime.

As a result, parents would have learned about the risk and benefits of the HPV vaccine from medical experts of their own choosing—ones chosen by them, presumably, because they trusted them—without the disorienting, distracting influence of cultural conflict. They would have learned about it, in other words, in the same conditions as the ones in which they now encounter the same sort of information on the HBV and other vaccines. That would have been good for them.

But it wouldn’t have been good for Merck. For by then, GlaxoSmithKline’s alternative vaccine would have been ready for agency approval, too, and could have competed free of the disadvantage of what Merck hoped would be a nationwide set of contracts to supply Gardasil to state school systems.

Is this 20/20 hindsight? Not really; it is what many members of the nation’s public health community saw at the time. Many who supported approval of Gardasil still opposed mandatory vaccination, both on the grounds that it was not necessary for public health and that it was likely to backfire. Even many supporters of such programs—writing in publications such as the New England Journal of Medicine—conceded that “vaccination mandates are aimed more at protecting the vaccinee than at achieving herd immunity”—the same economic-subsidy rationale that was deemed decisive for mandating HBV vaccination.

These arguments weren’t rejected so much as never meaningfully considered. Those involved in the FDA and CDC approval process weren’t charged with evaluating—and didn’t have the expertise to evaluate—how the science communication environment would be affected by the conditions under which the vaccine was introduced.

So in that sense, the disaster wasn’t their “fault.” It was, instead, just a foreseeable consequence of not having a mechanism in our public health system for making use of the intelligence and judgment at our disposal for dealing with science communication problems that are actually foreseen.

Whose fault will it be if this happens again?

5. Wasted knowledge

The likely “public acceptance” of an HPV vaccine was something that public health researchers had been studying for years before Gardasil was approved. But the risk that public acceptance would be undermined by a poisonous science communication environment was not something that those researchers warned anyone about. 

Instead, they reported (consistently, in scores of studies) that acceptance would turn on parents’ perceptions of the cost of the vaccine, its health benefits, and its risks, all of which would be shaped decisively by parents’ deference to medical expert opinion. 

This advice was worse than banal; it was disarmingly misleading. Public health researchers anticipated that a vaccine would be approved only if effective and not unduly risky, and that it would be covered by insurance and economically subsidized by the government. Those were reasonable assumptions. What wasn’t reasonable was the fallacious conclusion (present in study after study) that therefore all public health officials would have to do to promote “public acceptance” was tell people exactly these things. 

Things don’t work that way. And I’m not announcing any sort of late-breaking, hot-off-the-press-of-Nature-or-Science-or-PNAS news when I say that.

Social psychology and related disciplines are filled with knowledge about the conditions that determine how ordinary, intelligent people make sense of information about risk and identify whom they can trust, & when, to give them expert advice. The public health literature is filled with evidence of the importance of social influences on public perceptions of risks—e.g., those associated with unsafe sex and smoking.

That knowledge could have been used to generate insight that public health officials could have used to forecast the impact of introducing Gardasil in the way it was introduced.

It wasn’t. That scientific knowledge on science communication was wasted. As a result, much of the value associated with the medical science knowledge that generated Gardasil has been wasted too. 

Session reading list.