
Wednesday
Feb 6, 2013

Yet another installment of: "I only *study* science communication ..." 

Man, I suck at communicating!

I’ve now received 913 messages (in addition to many many comments) from scientists saying  “I attended your recent presentation, and you did fine—everyone loved you. Seriously. Don’t jump – here’s a number to call for help.  Okay? Okay?”

I see exactly what happened, of course. Despite my intentions, I came across like a whining, self-pitying baby, because I wrote something that made me sound like a whining, self-pitying baby!

Actually, the potential miscommunication I am most anxious to fix is any intimation that the audience at the North American Carbon Program meeting made me feel I wasn't playing a constructive role in the discussion. No one did any such thing in Q&A. And after, the comments from the many people who lingered to discuss consisted of "very interesting!" (n = 3), "thanks for giving us something to think about" (n = 2), & "[really interesting observation/question relating to the data & issues]” (n = 7). (Like I said in the talk, it is essential to collect data, and not just go on introspection, when assessing the impact of science communication strategies.)

The source of the disappointment was wholly internal.  Also—but please don’t take this as reason to console me; I’m fine!—I remain convinced it was warranted.  I have proof: interrogating the feeling has enabled me to learn something.

So let me try this again . . . .

Something astonishing and important happened on  Monday.

I got the opportunity to address a room full of scientists who, by showing up (& not leaving for 2 hrs!), by listening intently, by asking thoughtful questions, by sharing relevant experiences, and by offering reasonable proposals proved that they, like me, see fixing the science communication problem as one of the most pressing and urgent tasks facing our society.

Of course, I stand by my position (subject, forever, to revision in light of new evidence) on what the source of the problem is. Also, I am happy, but hardly surprised, to learn that members of the audience didn’t at all resent my registering disagreement when I felt doing so would serve the goal of steering them—us—clear of what I genuinely believe to be false starts and deadends.

What disappoints me is not that I felt obliged to say “no,” "I don't think so," and “not that.”

It is that I failed to come fully prepared to identify, for an audience of citizen scientists who afforded me the honor of asking for my views, what I believe they can do as scientists to help create a science communication environment in which diverse citizens can be expected to converge on the best available scientific evidence as they deliberate over how best to secure their common ends.

I said (in my last post), “the scientist’s job is to do science, not communicate it.”  I didn’t convey my meaning as clearly as I wish I had (because, you see, science communication is only a hobby for me; my job is to contribute to scientific understanding of it).

Of course, scientists “communicate” as part of their job in being scientists.  But that communication is professional; it is with other scientists. Their job is not to communicate  their science to nonexperts or members of the public.

This is a critical point to get clear on, so I will risk going on a bit.

The mistake of thinking that doing valid science is the same as communicating the validity of valid science is what got us into the mess we are in! Communicating and doing are different; and the former is something that admits of and demands its own independent scientific investigation.

In addition, the expert use of the scientific knowledge that the study of science communication creates is something that requires professional training and skill suited to communicating science, not doing science. Expecting the scientist to communicate the validity of her science because she has the professional skill needed to generate it is like expecting the players in a major league baseball game to do radio play-by-play at the same time, and then write up sports-page accounts for the fans who couldn’t tune in.

Yes, yes, there’s Carl Sagan; he’s the Tim McCarver of science communication. For sure be Carl Sagan or better still Richard Feynman if you possibly can be, b/c as I said, if you can help me and other curious citizens to participate in the wonder of knowing what is known to science, you will be conferring an exquisite benefit of immeasurable intrinsic value on us! Still, that won’t solve the climate change impasse either.

But neglecting to add this was my real mistake: just because what you say in or about your job as a scientist won’t dispel controversy over climate change does not mean that it isn’t your duty as a citizen scientist to contribute to something only scientists are in a position to do and that is essential not only to dispelling controversy over climate science but to addressing what caused that controversy and numerous others (nuclear power . . . HPV vaccine), and that will continue to cause us to experience even more of the same (GM foods . . . synthetic biology) if not corrected.

The cause of the science communication problem is the disjunction between the science of science communication and the practice of science and science-informed policymaking.  We must integrate them—so that we can learn as much as we can about how to communicate science, and never fail to use as much as we know about how to make what’s known to science known by those whose well-being it can serve.

Coordinated, purposeful effort by the institutional and individual members of the scientific community is necessary to achieve this integration (not sufficient; but I’ll address what others must do in part 5,922 of this series of posts). That was the message—the meaning—of the National Academy of Sciences’ “Science of Science Communication” Sackler Colloquium last spring.

Universities are where both science and professional training of those whose skills are informed by science take place. Universities—individually and together—must organize themselves to assure that they contribute, then, to the production of knowledge and skill that our society needs here.

What does that mean? Not necessarily one thing (such as, say, a formal “science of science communication” program or whathaveyou). But any of a large number of efforts that a university can make, if it proceeds in a considered and deliberate way, to make sure that its constituent parts (its various social science graduate departments, its professional schools, its interdisciplinary centers and whatnot) predictably, systematically interact in a manner that advances the integration of the forms of knowledge that must be combined.

So make this happen:

Combine with others within your university and petition, administer, or agitate as necessary to get your institution both to understand and make its contribution to this mission in whatever way intelligent deliberation recommends.

Model it yourself by teaching—or better yet co-teaching with someone in another discipline that also should be integrated—a course called the “Science of Science Communication” that’s cross-listed in multiple relevant programs.

Infect a brilliant student or two or fifty with excitement and passion for contributing to the creation of the knowledge that we need—and do what you can to demonstrate that should they choose this path their scholarly excellence will receive the recognition it deserves (or at least won’t compromise their eligibility for tenure!).

Is that it? No other things that scientists can do? 

I’m sure there are others (to be taken up in later posts, certainly, I promise). But making your university bear its share of the burden of contributing to the collective project of melding science and science-informed policymaking with the science of science communication is the single most important thing you can do as a scientist to solve the science communication problem.

But don’t stop doing your science, and just keep up the great work (no need to change how you talk) in that regard.

Okay. Next question?  

Tuesday
Feb 5, 2013

Another installment of: "I only study science communication -- I didn't say I could do it!" 

Gave a talk yesterday at the North American Carbon Program’s 2013 meeting, “The Next Decade of Carbon Cycle Research: From Understanding to Application.”

Obviously, I would have been qualified to be on any number of panels (best fit would have been “Model-data Fusion: Integrated Data-Model Approaches to Carbon Cycle Research”), but I opted to serve on the “Communicating Our Science” one (slides here).

The highlights for me were the excellent presentations by Jeff Kiehl, an NCAR scientist who has really mastered the art of communicating complicated and controversial science to diverse audiences, and former Rep. Bob Inglis, who now heads up the Energy and Enterprise Initiative, a group that advocates using market mechanisms rather than centralized regulation to manage carbon emissions. I also learned a lot from the question/answer period, where scientists related their experiences, insights, & concerns.

To be honest, I’m unsure that I played a constructive role at all on the panel, & I’ve been pondering this.

The theme of my talk was “the need for evidence-based science communication.” I stressed the importance of proceeding scientifically in making use of the knowledge that the science of science communication generates. Don't use that knowledge to construct stories; use it to formulate hypotheses about what sort of communication strategy is likely to work -- and then measure the impact of that strategy, generating information that you & others can use to revise and refine our common understanding of what works and what doesn't.
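To make the "formulate, then measure" step concrete, here is a minimal sketch (my own illustration, with placeholder counts, not anything presented at the meeting) of one way to test a single strategy: randomize recipients between the candidate message and the status quo, then compare the proportions who later answer a comprehension item correctly.

```python
# A minimal sketch of a two-arm randomized message test.
# All counts below are hypothetical placeholders, not real data.
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Z-test for a difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a - p_b, z, p_value

# Hypothetical outcome: recipients in each arm who later answered a
# comprehension item correctly.
diff, z, p = two_proportion_ztest(successes_a=132, n_a=400,   # new strategy
                                  successes_b=104, n_b=400)   # status quo
print(f"estimated effect: {diff:+.3f}, z = {z:.2f}, p = {p:.3f}")
```

The particular test matters less than the discipline: the strategy's effect gets observed and recorded rather than assumed.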

I'm happy w/ what I had to say about all of this, but here's why I’m not really sure it was useful:

1.  I don’t think I was telling the audience what they wanted to know. These were climate scientists, and basically they were eager to figure out how they could communicate their science more effectively.

My message was one aimed, really, at a different audience, those whom  I think of as “science communication practitioners.”  Like Bob Inglis, who is trying to dispel the fog of accumulated ideological resonances that he believes obscures from citizens who distrust government regulation the role that  market mechanisms can play in reducing climate-change risks. Or Jeff Kiehl, who is trying to figure out how to remove from the science communication environment the toxic partisan meanings that disable the rational faculties that citizens typically use to figure out what is known to science.  Or municipal officials and others who are trying to enable parties in stakeholder deliberations on adaptation in Florida and elsewhere to make collective decisions informed by the best available science.

2.  Indeed, I think I told the audience a number of things its members actually didn’t want to hear. One was that it’s almost certainly a mistake to think that how scientists themselves communicate their science will have much impact on the quality of public engagement with climate science.

For the most part, ordinary members of the public don’t learn what is known to science from scientists. They learn it from interactions with lots of other nonscientists (typically, too, ones who share their values) in environments that are rich with cues that identify and certify what’s collectively known.

There’s not any meaningful cultural polarization in the U.S., for example, over pasteurization of milk. That’s not because biologists do a better job explaining their science than climate scientists have done explaining theirs. It’s because the diverse communities in which people learn who knows what about what are reliably steering their members toward the best available scientific evidence on this issue—as they are on a countless number of other ones of consequence to their lives.

Those communities aren’t doing that on climate change because opposing positions on that issue have come to be seen as badges of loyalty to opposing cultural groups. It’s possible, I think, to change that. But the strategies that might accomplish that goal have nothing to do with the graphic representations (or words) scientists use for conveying the uncertainty associated with climate-model estimates.

I also felt impelled to disagree with the premises of various other genuinely thoughtful questions posed by the audience. E.g., that certain groups in the public are skeptical of climate change because it threatens their “interests” or lifestyle as affluent consumers of goods associated with a fossil-fuel driven economy. In fact (I pointed out), wealth in itself doesn’t dispose people to downplay climate change risks; it magnifies the polarization of people with different values.

Maybe I was being obnoxious to point this out. But I think scientists should want their views about public understandings of science to accord with empirical evidence.

I also think it is important to remind them that if they make a claim about how the public thinks, they are making an empirical claim. They might be right or they might be wrong. But personal observation and introspection aren’t the best ways to figure that out; the sort of disciplined observation, measurement, and inference that they themselves use in their own domain are.

Shrugging one's shoulders and letting empirically unsupported or contestable claims go by unremarked amounts to accepting that a discussion of science communication will itself proceed in an unscientific way.

Finally, I felt constrained to point out that ordinary citizens who have the cultural identity most strongly associated with climate-change skepticism actually aren’t anti-science.

They love nanotechnology, e.g.

They have views about nuclear power that are more in keeping with “scientific consensus” (using the NAS reports as a benchmark) than those who have a recognizable identity or style associated with climate change concern.

If you want to break the ice, so to speak, in initiating a conversation with one of them about climate science, you might casually toss out that the National Academy of Sciences and the Royal Society have both called for more research on geoengineering. “You don’t say,” he’s likely to respond.

Now why’d I do this? My sense is that the experience with cultural conflict over climate change has given a lot of scientists the view that people are culturally divided about them.  That’s an incorrect view—a non-evidence-based one (more on that soon, when I write up my synthesis of Session 3 of the Science of Science Communication course). 

It’s also a misunderstanding that I’m worried could easily breed a real division between scientists and the public if not corrected. Hostility tends to be reciprocated. 

It's also sad for people who are doing such exciting and worthwhile work to labor under the false impression that they aren't appreciated (revered, in fact).

3.  Finally,  I think I also created the impression that what I was saying was in tension with the great advice they were getting from the one panelist most directly addressing their central interest.

I’d say Jeff Kiehl was addressing the question that members of the audience most wanted to get the answer to: how should a climate scientist communicate with the public in order to promote comprehension and open-minded engagement with climate science?

Jeff talked about the importance of affect in how people form perceptions of risk.  The work of Paul Slovic, on whom Jeff was relying, 100% bears him out.

In my talk, I was critical of the claim that the affect-poor quality of climate risks relative, say, to terrorism risks, explains why the public isn’t as concerned about climate change as climate scientists think they should be. 

That’s a plausible conjecture; but I think it isn’t supported by the best evidence. If it were true, then people would generally be apathetic about climate change. They aren’t; they are polarized.

It’s true that affective evaluations of risk sources mediate people’s perceptions of risk. But those affective responses are the ones that their cultural worldviews attach to those risk sources. Super scientist of science communication Ellen Peters has done a kick ass study on this!

What’s more, as I pointed out in my talk, people who rely more on “System 2” reasoning (“slow, deliberate, dispassionate”) are more polarized than those who rely predominantly on affect-driven System 1.

But this is a point, again, addressed to communication professionals: the source of public controversy on climate change is the antagonistic cultural meanings that have become attached to it, not a deficit in public rationality; dispelling the conflict requires dissipating those meanings—not identifying some magic-bullet “affective image.”

What Kiehl had to say was the right point to make to a scientist who is going to talk to ordinary people.  If that scientist doesn’t know (and she might well not!) that ordinary members of the public tend to engage scientific information affectively, she will likely come off as obtuse!

What’s more, nothing in what I had to say about the limited consequence of what scientists say for public controversy over climate change implies that scientists shouldn’t be explaining their science to ordinary people, and doing so in the most comprehensible, and engaging way possible.

Lots of ordinary people want to know what the scientists do. In the Liberal Republic of Science, they have a right to have that appetite—that curiosity—satisfied!

For the most part, performing this critical function falls on the science journalist, whose professional craft is to enable ordinary members of the public to participate in the thrill and wonder of knowing what is known to science.

Secondary school science teachers, too: they inculcate exactly that wonder and curiosity, and wilily slip scientific habits of mind in under the cover of enchantment!

The scientist’s job is to do science, not communicate it.

But any one of them who out of public spiritedness contributes to the good of making it possible for curious people to share in the knowledge of what she knows is a virtuous citizen.

Regardless of whether what she's doing when she communicates with the public contributes to dispelling conflict over climate change.

Friday
Feb 1, 2013

Cultural cognition & cat-risk perceptions: Who sees what & why?

So like billions of others, I fixated on this news report yesterday:

Obvious fake! These are professional-model animals posing for a staged picture. Shame on you, NYT!

For all the adorable images of cats that play the piano, flush the toilet, mew melodiously and find their way back home over hundreds of miles, scientists have identified a shocking new truth: cats are far deadlier than anyone realized.

In a report that scaled up local surveys and pilot studies to national dimensions, scientists from the Smithsonian Conservation Biology Institute and the Fish and Wildlife Service estimated that domestic cats in the United States — both the pet Fluffies that spend part of the day outdoors and the unnamed strays and ferals that never leave it — kill a median of 2.4 billion birds and 12.3 billion mammals a year, most of them native mammals like shrews, chipmunks and voles rather than introduced pests like the Norway rat.

The estimated kill rates are two to four times higher than mortality figures previously bandied about, and position the domestic cat as one of the single greatest human-linked threats to wildlife in the nation. More birds and mammals die at the mouths of cats, the report said, than from automobile strikes, pesticides and poisons, collisions with skyscrapers and windmills and other so-called anthropogenic causes.

My instant reaction (on G+) was: bull shit!

My confidence that I knew all the facts here -- and that the study, published in Nature Communications, was complete trash and almost surely conducted by researchers in the pocket of the bird-feed industry -- was based on my recollection of some research I’d done on this issue a few yrs ago (I’m sure in response to a rant against cats and bird “genocide” etc.). I recalled that there was "scientific consensus" that domestic cats have no net impact on wildlife populations in the communities that people actually inhabit (yes, if you put them on an island in the middle of the Pacific Ocean, they'll wipe out an indigenous species or two or twelve).  But I figured (after posting, of course) that I should read up and see if there was any more recent research.
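As an aside, the arithmetic behind "scaled up" estimates of this kind is easy to sketch. Here is a toy version in which every input range is my own invented placeholder, not anything from the Nature Communications paper: national kill equals number of cats, times fraction hunting outdoors, times per-cat kill rate, with the uncertainty carried through by simulation.

```python
# Toy scale-up of local predation surveys to a national estimate.
# Every range below is an assumed placeholder, NOT the study's actual inputs.
import random

random.seed(1)

def draw(lo, hi):
    """Draw uniformly from an assumed plausible range."""
    return random.uniform(lo, hi)

sims = []
for _ in range(100_000):
    owned_cats = draw(70e6, 100e6)     # owned cats in the U.S. (assumed)
    outdoor_frac = draw(0.4, 0.7)      # share allowed outdoors (assumed)
    owned_rate = draw(4, 18)           # birds per outdoor pet cat per yr (assumed)
    feral_cats = draw(30e6, 80e6)      # unowned strays/ferals (assumed)
    feral_rate = draw(20, 50)          # birds per feral cat per yr (assumed)
    sims.append(owned_cats * outdoor_frac * owned_rate + feral_cats * feral_rate)

sims.sort()
median = sims[len(sims) // 2]
lo, hi = sims[int(0.025 * len(sims))], sims[int(0.975 * len(sims))]
print(f"median {median/1e9:.1f}B birds/yr (95% interval {lo/1e9:.1f}-{hi/1e9:.1f}B)")
```

The wide interval such a simulation spits out is itself informative: the headline number is only as good as the local surveys feeding those ranges.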

What I found, unsurprisingly, is either there is no scientific consensus on the net impact of cats on wildlife populations or there is no possibility any reasonable and intelligent nonexpert could confidently discern what that consensus is through the fog of cultural conflict!

This is definitely a job for the science of science communication!

So what I’d like is some help in forming hypotheses. E.g.,

1.  What are the most likely mechanisms that explain variance in who perceives what and why about the impact of cats on wildlife populations? Obviously, I suspect motivated reasoning: people (myself included, it appears!) are conforming their perceptions of the evidence (what they read in newspapers or in journals; what they “see with their own eyes,” etc.) to some goal or interest or value extrinsic to forming an accurate judgment. But what are the other plausible mechanisms?  Might people be forming perceptions based on exogenous “biased sampling”—systematically uneven exposure to opposing forms of information arising from some influence that doesn't itself originate in any conscious or unconscious motivation to form or preserve a particular belief (e.g., whether they live in the city or country)? Something else? What sorts of tests would yield evidence that helps to figure out the relative likelihood of the competing explanations? (For one toy way of thinking about that last question, see the simulation sketch after this list.)

2.  Assuming motivated reasoning explains the dissensus here, is the motivating influence the dispositions that inform the cultural cognition framework? How might perceptions of the net impact of cats on wildlife populations be distributed across the hierarchy-egalitarian and individualist-communitarian worldview dimensions? Why would they be distributed that way?

3.  Another way to put the last set of questions: Is there likely to be any relationship between who sees what and why about the impact of cats on wildlife populations and perceptions of climate change risks? Of gun risks? Of whether childhood vaccinations cause autism? Of whether Ray Lewis consumed HGH-laced deer antler residue?

4.  If the explanation is motivated reasoning of a sort not founded on the dispositions that inform the cultural cognition framework, then what are the motivating dispositions? How would one describe those dispositions, conceptually? How would one measure them (i.e., what would the observable indicators be)?
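On question 1, a toy simulation (my own sketch, not a CCP design, with invented parameters throughout) can show why randomized exposure to identical evidence helps discriminate the two mechanisms: if biased sampling were the whole story, equalizing what people see should close the belief gap; if motivated reasoning is at work, the gap survives identical exposure.

```python
# Toy model: two agents with opposing priors receive the *same* randomized
# evidence stream. All parameters are invented for illustration.
import random

random.seed(7)

def update(belief, evidence, motivated):
    """One belief update on a 0-1 'outdoor cats decimate wildlife' scale."""
    congenial = (evidence - 0.5) * (belief - 0.5) > 0
    # a motivated reasoner heavily discounts identity-threatening evidence
    weight = 1.0 if (not motivated or congenial) else 0.05
    return belief + weight * 0.3 * (evidence - belief)

for label, motivated in (("biased sampling only", False),
                         ("motivated reasoning", True)):
    a, b = 0.2, 0.8                      # cat lover vs. birder, say
    for _ in range(50):                  # 50 rounds of identical evidence
        e = random.gauss(0.55, 0.1)      # truth sits mildly on the "harm" side
        a, b = update(a, e, motivated), update(b, e, motivated)
    print(f"{label}: final beliefs {a:.2f} vs {b:.2f}")
```

Under the first regime the agents converge; under the second they stay apart even though they saw exactly the same information, which is the signature an experiment would look for.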

Well? Conjectures, please -- on these or any other interesting questions.

By the way, if you'd like to see a decent literature review, try this:

Barbara Fougere, Cats and wildlife in the urban environment.

 

Thursday
Jan 31, 2013

Groping the political economy elephant ... 

I was going to write an indignant post about pollution of the science communication environment by pseudo-scientists who obviously have an unreasoning cultural bias against cats, but then I read a reflective comment on my recent post on “what to advise climate science communicators.”

The comment comes from a thoughtful guy named Gene.

Yes, that’s the “political economy” elephant. You are right, Gene, we can't ignore it.

But let me tell you what I feel as I grope it. For sure, it's the same animal, but the texture and shape seem a bit different from what I think you are sensing. I invite others to have a go and describe what the beast feels like to them.

1.  Science communication's "political economy problem": big but how big?  

Basically, even if we had a perfect scientific understanding of how ordinary citizens make sense of scientific information – including the resources in the “science communication environment” that must be protected to assure that people are able reliably to use their rational faculties for discerning what’s known to science—there’d still be various groups and constituencies (of diverse cultural identities, and across a wide range of issues) with a stake in confusing people.

Indeed, they’d certainly know just as much about the science of science communication as those who want to use it to enhance enlightened self-government.  The science of science communication is nonproprietary, a product of the free and open exchange that is the driving engine of scientific discovery. So the bad guys can help themselves to it (they can also try to gain an edge by doing their own research, which they of course won't share, but the proprietary-knowledge producers are so dimwitted compared to the open-knowledge ones that we can safely ignore that detail).

Accordingly, if there is no way to constrain these actors from polluting the science communication environment, then all the knowledge associated with the new science of science communication would be of “academic interest” only.

This is a big big problem. But I think there are a few mistakes that people tend to make that can exaggerate their perception of its magnitude, and thus risk either paralyzing or simply misdirecting those in a position to try to deal with this difficulty.

2. People overestimate the significance of misinformation.

It’s true that groups seeking deliberately to misrepresent scientific evidence contribute to disputes like the ones over climate change, nuclear power, the HPV vaccine, etc.

But the "science miscommunicators" are actually not the cause of the problem; they are a symptom of it.

The cause is a science communication environment polluted by the entanglement of risks and policy-relevant facts with toxic partisan meanings.

In that environment, ordinary people, through dynamics of cultural cognition, will aggressively misinform themselves. Even when given accurate information, they will construe it in biased ways, and thus become even more polarized.

In that environment, it will indeed be very feasible and very profitable to supply people with misinformation, because people will eagerly seek out and latch onto anything that serves their interest in maintaining identity-protective beliefs.  Satisfying this demand for misinformation will certainly make things even worse.

But the problem started earlier: when the issue in question became charged with antagonistic cultural meanings.

3. People underestimate the contribution that accident and misadventure make to polluting the science communication environment & hence the degree to which it can be avoided by a “scicom environmental protection” policy.

If they know what they are doing, the groups who recognize that they can profit from public conflict and confusion over science are going to see misleading people on facts as secondary in importance to manufacturing and disseminating cues that incline people (unconsciously, in most instances) to see particular issues—like genetically modified foods, say—as ones that pit opposing cultural groups against each other. If they can get that impression to take hold, then they can be sure that the dissemination of valid information will never really be effective in countering misinformation.  

But it is also easy to overestimate the contribution that this sort of strategic behavior makes to polluting the science communication environment.  Other factors that can be very very consequential fall into the categories of accident and misadventure.  There was plenty of accident and misadventure on climate change, including forms of communication by climate change advocates that reinforced the public impression that the issue was a cultural “us vs. them” dispute.

Accident and misadventure both contaminate the science communication environment, and make it easier for strategically minded polluters to succeed thereafter.

But we can avoid accidents and misadventures by becoming smart, and by behaving intelligently. That’s what the science of science communication is all about.

Want an example? Check out the HPV vaccine risk case study from my Science of Science Communication course.

4. Taking the “bad political economy” as given foolishly ignores opportunities to create offsetting “good political economy” forces that can restore the quality of the science communication environment.

This was a point I made in the original post. It’s hard enough to decontaminate a toxic science communication environment, but the prospects for doing so when one has to compete with polluters is even more bleak.

But one response is to find science communication environments that aren’t already filled with pollution—and not only concentrate efforts to communicate there, but also figure out & then do what’s necessary to keep them that way.  I’ve written already about why I believe political activity at the local-level focusing on adaptation makes sense for these reasons.

But another reason it makes sense for science communicators to try to play a constructive role in local adaptation is that the deliberations going on in states like Florida, Arizona, West Virginia, Louisiana, N. & S. Carolina et al. involve a completely different alignment of interests than the national debate over reducing CO2 emissions.  Utility companies, local businesses, ordinary homeowners, municipal actors—all know they have a common stake in making their communities as resilient as they can be.  

What to do—that’s not something they will all agree on, of course. There are different possibilities, each with its own constellation of costs and benefits, the distribution of which also varies.

But all of these actors do want the scientific facts and do want their representatives—including their municipal leaders, their state government officials, and their congressional delegations—to get them the resources they need to take smart, cost-effective action based on that scientific evidence.

This conversation is super important. 

It’s super important not only because it affects the well-being of these communities (which climate scientists believe are likely to face significant climate-impact risks for decades to come no matter what the U.S. or any other nation does to reduce CO2 emissions).

It's also super important because the organized political activity that it involves has the potential to produce new, highly influential, intensely interested and well-organized political constituencies whose stake in sober, informed engagement with evidence can help to counteract the influence of other constituencies (whatever side of the debate they might be on) who have a stake in confusing and distracting reflective citizens.

5. In the Liberal Republic of Science, science journalists will also contribute to containing the elephant through perfection of craft norms that censure members of their profession who aid and abet scicom environment polluters.

Check out what they did on GM Foods during the California Prop. 37 debate.  They are modeling what many other actors—from universities to foundations to scientific associations to government institutions—need to do to organize themselves in a way that takes seriously the obligation they have to protect the quality of science communication environment.

6. But all the same, the political economy problem is a huge one for the quality of the science communication environment; the “New Political Science” for the Liberal Republic of Science desperately needs some intelligence here.

But look, notwithstanding all of this, the elephant really is there, and Gene is right that we can’t ignore it.  That elephant, again, is the constraint that political economy forces will always exert on the enlightened use of the knowledge associated with the science of science communication.

The only way to tame that elephant . . . actually, this has become a bad metaphor; elephants are really nice animals. Let’s try again:

The only way to inoculate the body politic of the Liberal Republic of Science against the virus that these foreseeable political economy dynamics represent is with applied intelligence. 

The science of science communication is the new political science for an age in which democracy faces a challenge that is itself quite new: to protect at one and the same time the interest its citizens have in using the best available scientific knowledge to advance their common good and the right they are guaranteed to meaningfully govern themselves.

That science is going to require perfection of our understanding of how the political economy of democratic states influences science communication every bit as much as it will require us to perfect our understanding of the social psychology of transmitting scientific knowledge.

Wednesday
Jan 30, 2013

Respond to commentary day

I've allotted my daily blogging time to reading the many interesting comments addressing yesterday's "What to advise communicators of climate science?" post, and responding to some.  Nothing I could say would be as insightful as those anyway! So read them, and please add your own views.

Tuesday
Jan 29, 2013

What would I advise climate science communicators?

This is what I was asked by a thoughtful person who is assisting climate-science communicators to develop strategies for helping the public to recognize the best available evidence--so that those citizens can themselves make meaningful decisions about what policy responses best fit their values.  I thought others might benefit from seeing my responses, and from seeing alternative or supplementary ones that the billions of thoughtful people who read this blog religiously (most, I'm told, before they even get out of bed every day) might contribute.

So below are the person's questions (more or less) and my responses, and I welcome others to offer their own reactions.

1. What is the most important influence or condition affecting the efficacy of science communication relating to climate change?

In my view, “the quality of the science communication environment” is the single most important factor determining how readily ordinary people will recognize the best available evidence on climate change and what its implications are for policy. That’s the most important factor determining how readily they will recognize the best available scientific evidence relevant to all manner of decisions they make in their capacity as consumers, parents, citizens—you name it.

People are remarkably good at figuring out who knows what about what. That is the special rational capacity that makes it possible for them to make reliable use of so much more scientific knowledge than they could realistically be expected to understand in a technical sense.

The “science communication environment” consists of all the normal, and normally reliable, signs and processes that people use to figure out what is known to science. Most of these signs and processes are bound up with normal interactions inside communities whose members share basic outlooks on life. There are lots of different communities of that sort in our society, but usually they all steer their respective members toward what science knows.

But when positions on a fact that admits of scientific investigation (“is the earth heating up?”; “does the HPV vaccine promote unsafe sex among teenage girls?”) become entangled with the values and outlooks of diverse communities—and become, in effect, symbols of one’s membership and loyalty in one or another group—then people in those groups will end up in states of persistent disagreement and confusion. These sorts of entanglements (and the influences that cause them) are in effect a form of pollution in the science communication environment, one that disables people from reliably discerning what is known to science.

The science communication environment is filled with these sorts of toxins on climate change. We need to use our intelligence to figure out how to clean our science communication environment up.

For more on these themes:

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

2. If you had three pieces of advice for those who are interested in promoting more constructive engagement with climate change science, what would they be?

A. Information about climate change should be communicated to people in the setting that is most conducive to their open-minded and engaged assessment of it.

How readily and open-mindedly people will engage scientific information depends very decisively on context. A person who hears about the HPV vaccine when she sees Michele Bachmann or Ellen Goodman screaming about it on Fox or MSNBC will engage it as someone who has a political identity and is trying to figure out which position “matches” it; that same person, when she gets the information from her daughter’s pediatrician, will engage it as a parent, whose child’s welfare is the most important thing in the world to her, and who will earnestly try to figure out what those who are experts on health have to say. Most of the contexts in which people are thinking about climate change today are like the first of these two. Find ones that are more like the second. They exist!

B. Science communication should be evidence-based “all the way down.” 

The number of communication strategies that plausibly might work far exceeds the number that actually will.  So don’t just guess or introspect, & don't listen to story-tellers who weave social science mechanisms into ad hoc (and usually uselessly general) "how to" instructions!

Start with existing evidence (including empirical studies) to identify the mechanisms of communication that there is reason to believe are of consequence in the setting in which you are communicating.

But don’t guess on the basis of those, either, about what to do; treat insights about how to harness those mechanisms in concrete contexts as hypotheses that themselves admit of, and demand, testing designed to help corroborate their likely effectiveness and to calibrate them.

Finally, observe, measure, and report the actual effect of strategies you use. Think how much benefit you would have gotten, in trying to decide what to do now, if you had had access to meaningful data relating to the impact (effective or not) of all things people have already tried in the area of climate science communication. Think what a shame it would be if you fail to collect and make available to others who will be in your situation usable information about the effects of your efforts.
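Here is a sketch of the payoff, with entirely hypothetical numbers: had earlier communicators logged even a few effect estimates with standard errors, a later one could pool them by inverse-variance weighting (a basic fixed-effect meta-analysis) instead of starting from introspection.

```python
# Pooling previously reported strategy effects by inverse-variance weighting.
# The (effect estimate, standard error) pairs below are invented placeholders.
import math

past_trials = [(0.08, 0.04), (0.02, 0.05), (0.11, 0.06)]

weights = [1 / se**2 for _, se in past_trials]
pooled = sum(w * est for (est, _), w in zip(past_trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"pooled effect: {pooled:+.3f} (SE {pooled_se:.3f})")
```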

Aiding and abetting entropy is a crime in the Liberal Republic of Science!

C. Don’t either ignore or take as a given the current political economy surrounding climate change; instead, engage people in ways that will improve it.

Public opinion does not by itself determine what policies are adopted in a democratic system. If “public approval” were all that mattered, we’d have adopted gun control laws in the 1970s stricter than the ones President Obama is now proposing; we’d have a muscular regime of campaign finance regulation; and we wouldn’t have subsidies for agriculture and oil producers, or tax loopholes that enable Fortune 500 companies to pay (literally) zero income tax.

 The “political economy climate” is as complex as the natural climate, and public opinion is only one (small) factor. So if you make “increasing public support” your sole goal, you are making a big mistake.

You also are likely making a mistake if you take as a given the existing political economy dynamics that constrain governmental responsiveness to evidence and simply try to amass some huge counterforce (grounded in public opinion or otherwise) to overcome them. That’s a mistake, in my view, because there are things that can be done to engage people in a way that will make the political economy forces climate-change science communicators have to negotiate more favorable to considered forms of policymaking (whatever they might be).

Where to engage the public, how, and about what in order to improve the political economy surrounding climate change are all matters of debate, of course. So you should consult all the evidence, and all the people who have evidence-informed views, and make the best judgment possible. And anyone who doesn’t tell you that this is the thing to do is someone whose understanding of what needs to be done should be seriously questioned.

Monday
Jan 28, 2013

Measuring "Ordinary Science Intelligence" (Science of Science Communication Course, Session 2)

This semester I'm teaching a course entitled the Science of Science Communication. I've posted general information on the course and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the first such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general.

In Session 2 (i.e., our 2nd class meeting) we started the topic of “science literacy and public attitudes.” We (more or less) got through “science literacy”; “public attitudes” will be our focus in Session 3.

As I conceptualize it, this topic is in the nature of foundation laying. The aim of the course is to form an understanding of the dynamics of science communication distinctive of a variety of discrete domains. In every one of them, however, effective communication will presumably need to be informed by what people know about science, how they come to know it, and by what value they attach to science’s distinctive way of knowing. So we start with those.

By way of synthesis of the readings and the “live course” (as opposed not to “dead” but “on line”) discussion of them, I will address these points: (1) measuring “ordinary science intelligence”—what & why; (2) “ordinary science intelligence” & civic competence; (3) “ordinary science intelligence” & evolution; and (4) “ordinary science intelligence” as an intrinsic good.

1. “Ordinary science intelligence” (OSI): what is being measured & why?

There are many strategies that could be, and are, used to measure what people know about science and whether their reasoning conforms to scientific modes of attaining knowledge. To my mind at least, “science literacy” seems to conjure up a picture of only one such strategy—more or less an inventory check against a stock of specified items of factual and conceptual information. To avoid permitting terminology to short circuit reflection about what the best measurement strategy is, I am going to talk instead of ways of measuring ordinary science intelligence (“OSI”), which I will use to signify a nonexpert competence in, and facility with, scientific knowledge.

I anticipate that a thoughtful person (like you; why else would you have read even this much of a post on a topic like this?) will find this formulation question-begging. A “nonexpert competence in, and facility with, scientific knowledge”? What do you mean by that?

Exactly. The question-begging nature of it is another thing I like about OSI. The picture that “science literacy” conjures up not only tends to crowd out consideration of alternative strategies of measurement; it also risks stifling reflection on what it is that we want to measure and why. If we just start off assuming that we are supposed to be taking an inventory, then it seems natural to focus on being sure we start with a complete list of essential facts and methods.  But if we do that without really having formed a clear understanding of what we are measuring and why, then we’ll have no confident basis for evaluating the quality of such a list—because in fact we’ll have no confident basis for believing that any list of essential items can validly measure what we are interested in.

If you are asking “what in the world do you mean by ordinary science intelligence?” then you are in fact putting first things first. Am I--are we--trying to figure out whether someone will engage scientific knowledge in a way that assures the decisions she makes about her personal welfare will be informed by the best available evidence? Or that she’ll be able competently to perform various professional tasks (designing computer software, practicing medicine or law, etc.)? Or maybe to perform civic ones—such as voting in democratic elections? If so, what sort of science intelligence do each of those things really require? What’s the evidence for believing that? And what sort of evidence can we use to be sure that the disposition being measured really is the one we think is necessary?

If those issues are not first resolved, then constructing and assessing measures of ordinary science intelligence will be aimless and unmotivated. They will also, in these circumstances, be vulnerable to entanglement in unspecified normative objectives that really ought to be made explicit, so that their merits and their relationship to science intelligence can be reflectively addressed.

2. Ordinary science intelligence and civic competence

Jon Miller has done the most outstanding work in this area, so we used his self-proclaimed “what and why” to help shape our assessment of alternative measures of OSI.  Miller’s interest is civic competence. The “number and importance of public policy issues involving science or technology,” he forecasts, “will increase, and increase markedly” in coming decades as society confronts the “biotechnology revolution,” the “transition from fossil-based energy systems to renewable energy sources,” and the “continuing deterioration of the Earth’s environment.” The “long-term health of democracy,” he maintains, thus depends on “the proportion of citizens who are sufficiently scientifically literate to participate in the resolution of” such issues.

We appraised two strategies for measuring OSI with regard to this objective. One was Miller’s “civic science literacy” measure. In the style of an inventory, Miller’s measure consists of two scales, the first consisting largely of key fact items (“Antibiotics kill viruses as well as bacteria [true-false]”; “Does the Earth go around the Sun, or the Sun go around the Earth?”), and the second aimed at recognition of signature scientific methods, such as controlled experimentation (he treats the two as separate dimensions, but they are strongly correlated: r = 0.86). Miller’s fact items form the core of the National Science Foundation’s “Science Indicators,” a measure of “science literacy” that is standard among scholars in this field. Based on rough-and-ready cutoffs, Miller estimates that only 12% of U.S. citizens qualify as fully “scientifically literate” and that 63% are “scientifically illiterate”; Europeans do even worse (5%, and 73%, respectively).

The second strategy for measuring OSI evaluates what might be called “scientific habits of mind.” The reason to call it that is that it draws inspiration from John Dewey, who famously opposed a style of science education that consists in the “accumulation of ready-made material,” in the form of canonical facts and standard “physical manipulations.” In its place, he proposed a conception of science education that imparts “a mode of intelligent practice, an habitual disposition of mind” that conforms to science’s distinctive understanding of the “ways by which anything is entitled to be called knowledge.”

There is no standard test (as far as I know!) for measuring this disposition. But there are various “reflective reasoning” measures -- "Cognitive Reflection Test" (Frederick), "Numeracy" (Lipkus, Peters), "Actively Open Minded Thinking" (Baron; Stanovich & West), "Lawson's Classroom Test of Scientific Reasoning" -- that are understood to assess how readily people credit, and how reliably they make active use of, the styles of empirical observation, measurement, and inference (deductive and inductive) that are viewed as scientifically valid.

The measures used for "science literacy" and "scientific habits of mind" strike me as obviously useful for many things. But it’s not obvious to me that either of them is especially suited for assessing civic competence. 

Miller’s superb work is focused on internally validating the “civic scientific literacy” measures, not externally validating them. Neither he nor others (as far as I know; anyone who knows otherwise, please speak up!) has collected any data to determine whether his “cut offs” for classifying people as “literate” or “illiterate” predict how well or poorly they’ll function in any tasks that relate to democratic citizenship, much less that they do so better than more familiar benchmarks of educational attainment (high-school diploma and college degrees, standardized test scores, etc.). Here's a nice project for someone to carry out, then.

The various “reflective reasoning” measures that one might view as candidates for Dewey’s “habit of mind” conception of OSI have all been thoroughly vetted—but only as predictors of educational aptitude and reasoning quality generally. They have not been studied in any systematic way as markers of civic aptitude.

Indeed, there is at least one study that suggests that neither Miller’s “civic science literacy” measures nor the ones associated with the “scientific habits of mind” conception of OSI predict quality of civic engagement with what is arguably the most important science-informed policy issue now confronting our democracy: climate change. Performed by CCP, the study in question examined science comprehension and climate-change risk perceptions. It found that public conflict over the risks posed by climate change does not abate as science literacy, measured with the “NSF science indicator” items at the core of Miller’s “civic science literacy” index, and reflective reasoning skill, as measured with numeracy, increase. On the contrary, such controversy intensifies: cultural polarization among those with the highest OSI measured in this way is significantly greater than polarization among those with the lowest OSI.
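For concreteness, the reported pattern can be expressed as a regression on simulated data (my sketch, not CCP's actual model or data): "polarization increases with OSI" is a claim about the worldview-by-comprehension interaction term, not about the main effect of comprehension.

```python
# Simulated illustration: risk perception regressed on worldview, science
# comprehension (OSI), and their interaction. All data are generated here.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
worldview = rng.choice([-1, 1], n)   # -1 egalitarian, +1 hierarchical (toy coding)
osi = rng.normal(0, 1, n)            # standardized science comprehension
# simulated outcome: the worldview gap widens as OSI rises
risk = -0.4 * worldview - 0.3 * worldview * osi + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), worldview, osi, worldview * osi])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
for name, b in zip(["intercept", "worldview", "osi", "worldview x osi"], beta):
    print(f"{name:>16}: {b:+.3f}")
```

With worldview coded -1/+1, the fitted gap between the two groups at a given OSI level is twice (the worldview coefficient plus the interaction coefficient times OSI), so a nonzero interaction is exactly what "more polarized at higher comprehension" looks like.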

We also discussed one more conception of OSI: call it the “science recognition faculty”.  If they want to live good lives—or even just live—people, including scientists, must accept as known by science many more things than they can possibly comprehend in a meaningful way. It follows that their well-being will depend on their capacity to recognize what is known to science independently of being able to verify that, or understand how, science knows what it does. “Science recognition faculty” refers to that capacity.

There are no measures of it, as far as I know. It would be fun to develop some.

But my guess is that it’s unlikely any generalized deficiency in citizens’ science recognition faculty explains political conflicts over climate change, or other policy issues that turn on science, either.  The reason is that most people most of the time recognize without difficulty what is known to science on billions & billions of things of consequence to their lives (e.g., “who knows how to make me better if I’m ill?”; “will flying on an airplane get me where I want to go? How about following a GPS?”; “should parents be required to get their children vaccinated against polio?”).

There is, then, something peculiar about the class of conflicts over policy-relevant science that interferes with people’s science recognition faculty. We should figure out what that thing is & protect ourselves—protect our science communication environment—from it. 

Or at least that is how it appears to me now, based on my assessment of the best available evidence.

3. Ordinary science intelligence and “belief” in evolution

Perhaps one thinks that what should be measured is a disposition to assent to the best scientific understanding of evolution—i.e., the modern synthesis, which consists in the mechanisms of genetic variance, random mutation, and natural selection. If so, then none of the measures of OSI seems to be getting at the right thing either.

The NSF’s “science indicators” battery includes the question “Human beings, as we know them today, developed from earlier species of animals (true or false).” Typically, around 50% select the correct answer (“true,” for those of you playing along at home).

In 2010, a huge controversy erupted when the NSF decided to remove this question and another—“The universe began with a huge explosion”; only around 40% tend to answer this question correctly—from its science literacy scale.  The decision was derided as a “political” cave-in to the “religious right.”

But in fact, whether to include the “evolution” and “big bang” questions in the NSF scale depends on an important conceptual and normative judgment. One can design an OSI scale to be either an “essential knowledge” quiz or a valid and reliable measurement of some unobservable disposition or aptitude. In the former case, all one cares about is including the right questions and determining how many a respondent answered correctly. But in the latter case, correct responses must be highly correlated across the various items; items the responses to which don’t cohere with one another necessarily aren’t measuring the same thing.  If one wants to test hypotheses about how OSI affects individuals’ decisions—whether as citizens, consumers, parents or whathaveyou—then a scale that is merely a quiz and not a valid and reliable latent-variable measure will be of no use: if responses are randomly correlated, then necessarily the aggregate “score” will be randomly connected to anything else respondents do or say.  It is to avoid this result that scholars like Jon Miller have (very appropriately, and with tremendous skill) focused attention on the psychometric properties of the scales formed by varying combinations of science-knowledge items.

Well, if one is trying to form a valid and reliable measure of OSI, the “evolution” and “big bang” questions just don’t belong in the NSF scale. The NSF keeps track of how the top-tier of test-takers—those who score in the top 25% overall—have done on each question. Those top-scoring test takers have answered correctly 97% of the time when responding to “All radioactivity is man-made (true-false)”; 92% of the time when assessing whether “Electrons are smaller than atoms (true-false)”; 90% of the time when assessing whether “Lasers work by focusing sound waves (true-false)”; and 98% of the time when assessing whether “The center of the Earth is very hot (true-false).” But on “evolution” and “big bang,” those same respondents have selected the correct response only 55% and 62% of the time. 

That discrepancy is strong evidence that the latter two questions simply aren’t measuring the same thing as the others. Indeed, scholars who have used the appropriate psychometric tools have concluded that “evolution” and “big bang” are measuring respondents’ religiosity. Moreover, insofar as the respondents who tend to answer the remaining items correctly a very high percentage of the time are highly divided on “evolution” and “big bang,” it can be inferred that OSI, as measured by the remaining items in the NSF scale, just doesn’t predict a disposition to accept the standard scientific accounts of the formation of the universe and the history of life on Earth.
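The psychometric logic is easy to see in a toy example (simulated responses, not NSF data): an item whose answers are driven by a different latent trait shows a near-zero item-rest correlation with the rest of the scale.

```python
# Toy illustration of item-rest correlations flagging an incoherent item.
# All responses are simulated; item names merely echo the NSF examples above.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
science = rng.normal(0, 1, n)        # latent science comprehension
religiosity = rng.normal(0, 1, n)    # a different latent trait

def item(latent, difficulty=0.0, noise=1.0):
    """1 = correct answer, driven by a latent trait plus noise."""
    return (latent + rng.normal(0, noise, n) > difficulty).astype(float)

items = {
    "radioactivity": item(science),
    "electrons": item(science),
    "lasers": item(science),
    "evolution": item(-religiosity),   # tracks religiosity, not comprehension
}

for name, responses in items.items():
    rest = sum(v for k, v in items.items() if k != name)  # score on other items
    r = np.corrcoef(responses, rest)[0, 1]
    print(f"item-rest correlation, {name:>13}: {r:+.2f}")
```

The three comprehension-driven items correlate with the rest of the scale; the trait-swapped item hovers near zero, which is the statistical face of "not measuring the same thing."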

The same is true, apparently, for valid measures of the “habit of mind” conception of OSI.  In general, there is no correlation between “believing” in the best scientific account of evolution and understanding it at even a very basic level. That is, those who say they “believe” in evolution are no more likely than those who say they believe in divine “creation” to know what genetic variance, random mutation, and natural selection mean and how they work within the modern synthesis framework.  How well one scores on a “scientific habit of mind” OSI scale—one that measures one’s disposition to form logical and valid inferences on the basis of observation and measurement—does predict both one’s understanding of the modern synthesis and one’s aptitude for being able to learn it when it is presented in a science course.  But even when they use their highly developed “scientific habits of mind” disposition to gain a correct comprehension of evolution, individuals who commence such a course “believing” in divine creation don’t “change their mind” or abandon their belief.

It is commonplace to cite the relatively high percentage of Americans who say they believe in divine creation as evidence of “low” science literacy or poor science education in the U.S. But ironically, this criticism reflects a poor scientific understanding of the relationship between various measures of science comprehension and beliefs in evolution.

4. Ordinary science intelligence as an intrinsic good

Does all this mean OSI—or at least the “science literacy” and “habits of mind” strategies for measuring it—are unimportant? It could only conceivably mean that if one thought that the sole point of promoting OSI was to make citizens form a particular view on issues like climate change or to make them assent to and not merely comprehend scientific propositions that offend their religious convictions.

To me, it is inconceivable that the value of promoting the capacity to comprehend and participate in scientific knowledge and thought depends on the contribution doing so makes to those goals. It is far from inconceivable that enhancing the public’s OSI (as defensibly defined and appropriately measured) would improve individual and collective decisionmaking.  But I don’t accept that OSI must attain that or any other goal to be worthy of being promoted. It is intrinsically valuable. Its propagation in citizens of a liberal society is self-justifying.

This is the position, I think, that actually motivated Dewey to articulate his “habits of mind” conception of OSI.  True, he dramatically asserted that the “future of our civilization depends upon the widening spread and deepening hold of the scientific habit of mind,” a claim that could (particularly in light of Dewey's acknowledged attention to the role of liberal education in democracy) reasonably be taken as evidence that he believed this disposition to be instrumental to civic competence. 

But there’s a better reading, I think. “Scientific method,” Dewey wrote, “is not just a method which it has been found profitable to pursue in this or that abstruse subject for purely technical reasons.”

It represents the only method of thinking that has proved fruitful in any subject—that is what we mean when we call it scientific. It is not a peculiar development of thinking for highly specialized ends; it is thinking so far as thought has become conscious of its proper ends and of the equipment indispensable for success in their pursuit.

The advent of science’s way of knowing marks the perfection of a human capacity of singular value.  The habits of mind integral to science enable a person “[a]ctively to participate in the making of knowledge,” which Dewey identifies as “the highest prerogative of man and the only warrant of his freedom.”

What in Dewey’s view makes the propagation of scientific habits of mind essential to the “future of our civilization,” then, is that only a life informed by this disposition counts as one “governed by intelligence.”  “Mankind,” he writes “so far has been ruled by things and by words, not by thought, for till the last few moments of history, humanity has not been in possession of the conditions of secure and effective thinking.” “And if this consummation” of human rationality and freedom is to be “achieved, the transformation must occur through education, by bringing home to men’s habitual inclination and attitude the significance of genuine knowledge and the full import of the conditions requisite for its attainment.”

To believe that we must learn to measure the attainment of scientific habits of mind in order to perfect our ability to propagate them honors Dewey’s inspiring vision.  To insist that the value of what we would then be measuring depends on the contribution that cultivating scientific habits of mind would make to resolution of particular political disputes, or to the erasure of every last sentimental vestige of the ways of knowing that science has replaced, does not.

Reading list.

 

Saturday
Jan262013

Intense battle for "I [heart] Popper/Citizen of the Liberal Republic of Science" t-shirt

No blog post today. Don't want to distract from the fierce competition to claim the prize in the first "HFC! CYPHIMU?" contest.

Or even the less (for now) fierce battle being waged for the "Cultural Cognition Lab Cat Scan Experiment" t-shirt (Angie is crushing the field, but it's not over until the fat cat yowls).

Friday
Jan252013

Does the cultural affinity of a group's members contribute to the group's collective intelligence?

Likely the 1,000's of you who have already submitted entries into the pending "HFC! CYPHIMU?" contest, the winner of which will be awarded a beautiful  "I am a citizen of the Liberal Republic of Science/I ♥ Popper!”  t-shirt (Jon Baron currently sits atop the leader board, btw), are bored and wishing you had something else to do.  

Well how about this?

First, read this fascinating study of "c," a measure of intelligence that can be administered to a collective entity.

 The study was first published in Science (2 yrs ago; fortunately, one of the authors pulled me from the jaws of entropy and  brought the article to my attention only yesterday!).

The authors show that the "collective intelligence" of groups assigned to work on problem tasks admits of reliable measurement by indicators akin to the ones used to measure "individual intelligence." An influential measure of individual intelligence is called the "g factor," or simply g. Thus, the authors call their collective intelligence measure "c factor" or "c."
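
To make the measurement logic concrete, here is a toy illustration (simulated data; not the authors' actual procedure) of extracting a single dominant factor from a battery of group task scores, the same way g is extracted from a battery of individual tests:

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_tasks = 200, 5
c_true = rng.normal(size=n_groups)               # latent collective ability
loadings = rng.uniform(0.5, 0.9, size=n_tasks)   # each task taps c to some degree
scores = np.outer(c_true, loadings) + rng.normal(scale=0.6, size=(n_groups, n_tasks))

Z = (scores - scores.mean(axis=0)) / scores.std(axis=0)   # standardize each task
evals, evecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
share = evals[-1] / evals.sum()       # variance explained by the first factor
c_hat = Z @ evecs[:, -1]              # estimated "c" score for each group

print(f"first factor explains {share:.0%} of task-score variance")
print(f"|corr(c_hat, c_true)| = {abs(np.corrcoef(c_hat, c_true)[0, 1]):.2f}")
```

A first factor that soaks up a large share of the variance across tasks is the signature of a general ability, which is the sort of pattern the authors report.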

C is predicted in part by the average intelligence of the group's members and by the intelligence of its smartest (highest-scoring on g) member. That it would be is not so surprising, given existing work on the predictors of group decisionmaking proficiency.

The really cool thing (aside from the proof that it was possible to form a reliable and valid measure of c) was the authors' finding that other interesting individual group-member characteristics also make an important contribution to c. One of these was how many women are in the group (compare with the recent claim by female members of the Senate that part of the reason Congress is so dysfunctional is that there aren't enough female members; maybe, maybe not).

Another was the average score of the groups' members on a "social sensitivity" scale. Social sensitivity here measures, in effect, how emotionally perceptive an individual is. The better group members were at "reading" others' intentions, the more cooperatively and productively they engaged one another, the researchers found. This disposition in turn raised the "collective intelligence" of the group -- that is, enabled it to solve more problems more efficiently.  

Not mind-blowingly surprising, either, I suppose. But if you think that social science is mainly about establishing mind-blowingly counterintuitive things, you are wrong, and will believe lots of invalid studies. Social science is mainly about figuring out which competing plausible conjectures are true.

The conjectures that informed and were supported by this cool study were merely amazingly interesting, amazingly thought provoking, and likely amazingly useful to boot.

Second, now tell me what you think the connection might be between c and cultural cognition.  

As every schoolboy and -girl today knows, "cultural cognition" refers to the tendency of individuals to conform their perceptions of risk and other policy-relevant facts to ones that predominate in their cultural group. CCP studies this phenomenon, using experiments and other empirical methods to identify the mechanisms it comprises.

It is often assumed -- indeed, sometimes I myself and others studying cultural cognition say -- that cultural cognition is a "bias."

In fact, I don't believe this.  I believe instead that cultural cognition is intrinsic, even essential, to human rationality.

The most remarkable feature of human rationality, I'd say, is that individuals are able to recognize what is collectively known.  

Particularly, when a society is lucky enough to recognize that science's way of knowing is the most reliable way to know things, collective knowledge can be immense.  What's known collectively will inevitably outstrip what any individual member of the society can ever comprehend on his or her own--even if that individual is a scientist!

Accordingly, as my colleague Frank Keil has emphasized, individuals can participate in collective knowledge -- something that itself is a condition of there being much of it -- only if they can figure out what's known without being able to understand it. In other words, they must become proficient at knowing who knows what.  The faculty of rational perception involved in being able to figure this out reliably is both essential and amazing.

Well, it turns out that people are simply better at exercising this rational faculty -- of being able to reliably determine who knows what about what -- when they are in groups of people with whom they share a cultural affinity.  Likely they are just better able to "read" such people -- to figure out who actually knows something & who is just bullshitting.

Likely, too, people are better at figuring who knows what about what in these sorts of affinity groups because they are less likely to fight with one another. Conflict will interfere with their ability to exchange knowledge with one another.

Actually, there's no reason to think people can exercise the faculty of perception involved in figuring out who knows what about what only within cultural affinity groups.

On the contrary, there is evidence that culturally diverse groups will actually do better than culturally homogeneous ones if they stay at it long enough to get through an initial rough patch and develop ways of interacting that are suited for discerning who knows what within their particular group.

But in the normal run of things, people probably won't, spontaneously, want to make the effort or simply won't (without a central coordination mechanism) be able to get through the initial friction, and so they will, in the main, tend to learn who knows what about what within affinity groups. That's where cultural cognition comes from.

Generally, too, it works --so long as the science communication environment is kept free of the sorts of contaminants that make culturally diverse groups come to see positions on particular facts -- like whether the earth is heating up or whether the HPV vaccine has health-destroying side effects -- as markers of group membership and loyalty. When that happens, the members of all cultural groups are destined to be collectively dumb as 12 shy of a dozen, and collectively very unwell off.

So now -- my question: do you suppose the cultural affinity of a group's members is a predictor of c? That is, do you suppose c will be higher in groups whose members are more culturally homogeneous?

Or do you suppose that culturally diverse groups might do better -- even without a substantial period of interaction -- if their individual members' "social sensitivity" scores are high enough to offset lack of cultural affinity?

Wouldn't these be interesting matters to investigate? Can you think of other interesting hypotheses?

What's that? You say you won't offer your views on this unless there is the possibility of winning a prize? ... Okay. Best answer will get this wonderful "Cultural Cognition Lab" t-shirt.

Friday
Jan252013

What is the "political economy forecast" for a carbon tax? What are the benefits of such a policy for containing climate change? ("HFC! CYPHIMU?" Episode No. 1)

In the spirit of CCP’s wildly popular feature, “WSMD? JA!,” I’m introducing a new interactive game for the site called: “Hi, fellow citizen! Can you please help increase my understanding?”—or “HFC! CYPHIMU?” The format will involve posting a question or set of related questions relating to a risk or policy-relevant fact that admits of scientific inquiry & then opening the comment section to answers. The questions might be ones that simply occur to me or ones that any of the 9 billion regular subscribers to this blog are curious about. The best answer, as determined by “Lil Hal,”™ a friendly, artificially intelligent robot being groomed for participation in the Loebner Prize competition, will win a “Citizen of the Liberal Republic of Science/I ♥ Popper!” t-shirt!

I have a couple of questions  that I’m simply curious about and hoping people can help me to figure out the answers to.

BTW, I’m using “figuring out the answer” as a term of art.

It doesn’t literally mean figuring out the answer! I think questions to which “the answer” can be demonstrably “figured out” tend not to be so interesting as ones that we believe do have answers but that we agree turn on factors that do not admit of direct observation, forcing us to draw inferences from observable, indirect evidence. For those, we have to try to "figure out" the answer in a disciplined empirical way by (1) searching for observable pieces of evidence that we believe are more consistent with one answer than another, (2) combining that evidence with all the other evidence that we have so that we can (3) form a provisional answer (one we might well be willing to act on if necessary) that is itself (4) subject to revision in light of whatever additional evidence of this sort we might encounter.

Accordingly, any response that identifies evidence that furnishes reason for treating potential answers as more likely or less than we might regard them without such evidence counts as “figuring out the answer.” Answers don’t have to be presented as definitive; indeed, if they are, that would likely be a sign that they aren’t helping to “figure out” in the indicated sense!

Oh-- answers that identify multiple sources of evidence, some of which make one answer more likely and some less relative to a competing one, will be awarded "I'm not afraid to live in a complex universe!" bonus points.
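
If it helps, here is a toy rendering of steps (1)-(4) in odds form (the numbers are invented): treat each piece of evidence as a likelihood ratio, multiply, and hold the result provisionally, subject to the next piece of evidence:

```python
prior_odds = 1.0                 # start indifferent between answer A and not-A

likelihood_ratios = [3.0,        # one piece of evidence mildly favoring A
                     0.5,        # another cutting the other way
                     2.0]        # a third favoring A again

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr         # Bayes' rule in odds form

prob_A = posterior_odds / (1 + posterior_odds)
print(f"provisional probability of A: {prob_A:.2f}")  # ~0.75, and still revisable
```

Note that the second piece of evidence cuts against A; answers that honestly fold in evidence of that sort are exactly the ones eligible for the bonus points.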

Okay, here are my “questions”:

a. If one is assessing the prospects for enacting a carbon tax (or some comparable form of national legislation aimed at reducing U.S. CO2 emissions), how big a factor is public opinion in favor of “doing something to address climate change”?

b. How much of a contribution would a carbon tax—or any other U.S. policy aimed at reducing the impact of atmospheric concentrations of CO2—make to mitigating or constraining global temperature increases or adverse impacts therefrom?

Some explanation for the questions will likely help to elicit answers of the sort I am interested in:

a. If one is assessing the prospects for enacting a carbon tax (or some comparable form of national legislation aimed at reducing U.S. CO2 emissions), how big a factor is public opinion in favor of “doing something to address climate change”?

This is essentially a political economy question.

Researchers who have performed opinion surveys often present evidence that there is growing public support—and possibly even “majority” support—in the U.S. for policies that would constructively address the risks posed by climate change. This conclusion—and for this question, please accept it as correct even if you doubt the methods of these researchers—is in turn treated as support for the proposition that efforts to enact a carbon tax or similar legislation aimed at reducing carbon emissions in the U.S. are meaningfully likely to succeed.

Of course, we all know that “majority public support” does not necessarily translate in any straightforward sense into adoption of policies. If it did, the U.S. would have enacted “gun control” measures in the 1970s or 1980s much stricter than the ones President Obama is now proposing. We’d have a muscular regime of campaign-finance regulations. We wouldn’t have massive farm subsidies, and tax loopholes that enable major corporations to pay (literally) no U.S. income tax. Etc.

The “political economy climate” is complex—if not as complex as the natural one, then pretty close! Forecasts of what is likely or possible depend on the interaction of many variables, of which “public support” is only one.

So, can you please help me increase my understanding? What is the political-economy model that informs the judgment of those who do believe increased public support for “action on climate change” meaningfully increases the likelihood of a carbon tax? What are the mechanisms and practical steps that will translate this support into enactment of policy?

b. How much of a contribution would a carbon tax—or any other U.S. policy aimed at reducing the impact of atmospheric concentrations of carbon—make to mitigating or constraining global temperature increases or adverse impacts therefrom?

This, obviously, is a “climate science” question, primarily, although it might also be a political economy question.

The motivation behind the question consists of a couple of premises. One is that the U.S. is not the only contributor to atmospheric CO2; indeed, China has apparently overtaken us as the leader, and developing countries, most importantly India, will generate more and more greenhouse gases (not just CO2, but others, like Freon) as they seek to improve conditions of living for their members.

The second is scientific evidence relating to the climate impact of best-case scenarios on future atmospheric CO2 levels. Such evidence, as I understand it (from studies published in journals like Nature and the Proceedings of the National Academy of Sciences) suggests that earlier scientific projections of the contribution that CO2 reductions and ceilings can make to forestalling major, adverse impacts were too optimistic. Even if the U.S. stopped producing any CO2—even if all nations in the world did—there’d still be catastrophic effects as a result of climate change.

As an editorial in Nature put it,

The fossil fuels burned up so far have already committed the world to a serious amount of climate change, even if carbon emissions were somehow to cease overnight. And given the current economic turmoil, the wherewithal to adapt to these changes is in short supply, especially among the world's poor nations. Adaptation measures will be needed in rich and poor countries alike — but those that have grown wealthy through the past emission of carbon have a moral duty to help those now threatened by that legacy.

The latest scientific research suggests that even a complete halt to carbon pollution would not bring the world's temperatures down substantially for several centuries. If further research reveals that a prolonged period of elevated temperatures would endanger the polar ice sheets, or otherwise destabilize the Earth system, nations may have to contemplate actively removing CO2 from the atmosphere. Indeed, the United Nations Intergovernmental Panel on Climate Change is already developing scenarios for the idea that long-term safety may require sucking up carbon, and various innovators and entrepreneurs are developing technologies that might be able to accomplish that feat. At the moment, those technologies seem ruinously expensive and technically difficult. But if the very steep learning curve can be climbed, then the benefits will be great.

I’m curious, then, what is the practical understanding of how a carbon tax or any other policy to reduce CO2 emissions in the U.S. will contribute to “doing something about climate change.”

Am I incorrect to think that such steps by themselves will not contribute in any material way?

If so, is the idea that U.S. efforts to constrain emissions will spur other nations to limit their output? What is the international political economy model for that expectation?

Even if other nations do enact measures that make comparable contributions to limiting atmospheric CO2 emissions, how much of a difference will that make given, as the Nature editorial puts it, “[t]he latest scientific research suggests that even a complete halt to carbon pollution would not bring the world's temperatures down substantially for several centuries?”

Thanks to anyone who can help make me smarter on these issues!

Monday
Jan212013

A case study: the HPV vaccine disaster (Science of Science Communication Course, Session 1)

This semester I'm teaching a course entitled the Science of Science Communication. I've posted general information on the course and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the first such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general. 

 

1. The HPV vaccine disaster

HPV stands for human papillomavirus. It is a sexually transmitted disease.

The infection rate is extremely high: 45% for women in their twenties, and almost certainly just as high for men, in whom the disease cannot reliably be identified by test.

The vast majority of people who get HPV experience no symptoms.

But some get genital warts.

And some get cervical cancer.

Some of them--over 3,500 women per year in the U.S.--die. 

In 2006, the FDA approved an HPV vaccine, Gardasil, manufactured by the New Jersey pharmaceutical firm Merck. Gardasil is believed to confer immunity to 70% of the HPV strains that cause cervical cancer. The vaccine was approved only for women, because only in women had HPV been linked to a “serious disease” (cervical cancer), a condition of eligibility for the fast-track approval procedures that Merck applied for. Shortly after FDA approval, the Centers for Disease Control recommended universal vaccination for adolescent girls and young women.

The initial public response featured intense division. The conflict centered on proposals to add the vaccine—for girls only—to the schedule of mandatory immunizations required for middle school enrollment. Conservative religious groups and other mandate opponents challenged evidence of the effectiveness of Gardasil and raised concerns about unanticipated (or undisclosed) side-effects. They also argued that vaccination would increase teen pregnancy and other STDs by investing teenage girls with a false sense of security that would lull them into engaging in unprotected, promiscuous sex. Led by women’s advocacy groups, mandate proponents dismissed these arguments as pretexts, motivated by animosity toward violation of traditional gender norms.

In 2007, Texas briefly became the first state with a mandatory vaccination requirement when Governor Perry—a conservative Republican aligned with the religious right—enacted one by executive order. When news surfaced that Perry had accepted campaign contributions from Merck (which also had hired one of Perry’s top aids to lobby him), the state legislature angrily overturned the order.

Soon thereafter, additional stories appeared disclosing the major, largely behind-the-scenes operation of the pharmaceutical company in the national campaign to enact mandatory vaccination programs.  Many opinion leaders who previously had advocated the vaccine now became critics of the company, which announced that it was “suspending” its “lobbying” activity. Dozens of states rejected mandatory vaccination, which was implemented in only one, Virginia, where Merck had agreed to build a vaccine-manufacturing facility, plus the District of Columbia.

Current public opinion is characterized less by division than by deep ambivalence. Some states have enacted programs subsidizing voluntary vaccination, which in other states is covered by insurance and furnished free of cost to uninsured families by various governmental and private groups. Nevertheless, “uptake” (public health speak for vaccination rate) among adolescent girls and young women is substantially lower here (32%) than it is in nations with inferior public health systems, including ones that likewise have failed to make vaccination compulsory (e.g., Mexico, 67%, and Portugal, 81%). The vaccination rate for boys, for whom the FDA approved Gardasil in 2009, is a dismal 7%.

2. What’s the issue? (What “disaster”?)

The American public tends to have tremendous confidence in the medical profession, and is not hostile to vaccinations, mandatory or otherwise (I’ll say more about the “anti-vaccine movement” another time but for now let’s just say it is quite small). When the CDC recommended vaccination for H1N1 in December 2009, for example, polls showed that a majority of the U.S. population intended to get the vaccine, which ran out before the highest-risk members of the population—children and the elderly—were fully inoculated. In a typical flu season, uptake rates for children usually exceed 50%.

The flu, of course, is not an STD. But Hepatitis B is. The vast majority of states implemented mandatory HBV vaccination programs—without fuss, via administrative directives issued by public health professionals—after the CDC recommended universal immunization of infants in 1995. Like the HPV vaccine, the HBV vaccine involves a course of two to three injections.  National coverage for children is over 90%.

There are (it seems to me!) arguments that a sensible sexually active young adult could understandably, defensibly credit for forgoing the HPV vaccination, and that reasonable parents and reasonable citizens could credit for not having the vaccine administered to their children and mandated for others’. But the arguments are no stronger than—and not at all different from—the ones that could be made against HBV vaccination. They don’t explain, then, why in the case of the HPV vaccine the public didn’t react with its business-as-usual acceptance when public health officials recommended that children and young adults be vaccinated.

What does? That question needs an answer regardless of how one feels about the HPV vaccine or the public reaction to it—indeed, in order even to know how one should feel about those matters.

3. A polluted science communication environment

The answer—or at least one that is both plausible and supported by empirical evidence—is the contamination of the “science communication environment.”  People are generally remarkably proficient at figuring out who knows what; they are experts in identifying who the experts are and reliably discerning what those with expertise counsel them to do. But that capacity—that faculty of reasoning and perception—becomes disabled (confused, unreliable) when an empirical fact that admits of scientific investigation provokes controversy among groups united by shared values and perspectives.

Most of us have witnessed this situation via casual observation; scholars who carefully looked at parents trying to figure out what to think about the HPV vaccine saw that they were in that situation. They saw, for example, the mixture of shame and confusion experienced by an individual mother who acknowledged (admitted; confessed?) in the midst of a luncheon conversation with scandalized friends (also mothers) that she had allowed her middle-school daughter to be vaccinated (“what--why? . . .”; “Well, because that’s what the doctor advised . . . .” “Then, you had better find a new doctor, dear . . . . ”).

Scholars using more stylized but more controlled methods to investigate how people form perceptions of the HPV vaccine report the same thing.  In one, researchers tested how exposure to two versions of a fictional news article affected public support for mandatory HPV vaccination.  Both versions described (real) support for mandatory vaccination by public health experts. But one, in addition, adverted without elaboration to “medical and political conflict” surrounding a mandatory-vaccine proposal. The group exposed to the “controversy” version of the report was less likely to support the proposal—indeed, on the whole was inclined to oppose it—than those in the “no controversy” group. This effect, moreover, was as strong among subjects inclined to support mandatory vaccination policies generally as among those who weren’t.
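
For readers who want to see the skeleton of such a two-condition comparison, here is a minimal two-proportion test in Python. The counts are invented for illustration only; they are not the study's data:

```python
from math import sqrt
from statistics import NormalDist

support_ctrl, n_ctrl = 130, 200   # "no controversy" condition (invented counts)
support_trt, n_trt = 95, 200      # "controversy" condition (invented counts)

p1, p2 = support_ctrl / n_ctrl, support_trt / n_trt
p_pool = (support_ctrl + support_trt) / (n_ctrl + n_trt)   # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / n_ctrl + 1 / n_trt))
z = (p1 - p2) / se                                          # two-proportion z-test
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"support: {p1:.0%} vs {p2:.0%}; z = {z:.2f}, p = {p_value:.4f}")
```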

The study result admits (I admit!) of more than one plausible explanation. But one is that being advised the matter was “politically controversial” operated as a cue that generated hesitation to credit evidence of expert opinion among people otherwise disposed to use it as their guide on public health issues.

Another study done by CCP bolsters this interpretation. That one examined how members of the public with diverse cultural outlooks assessed information about the risks and benefits of HPV vaccination. Subjects of opposing worldviews were inclined to form opposing beliefs when evaluating information on the risks and benefits of the vaccine. Yet the single most important factor for all subjects, the study found, was the position taken by “public health experts.” Sensibly & not surprisingly, people of diverse values share the disposition to figure out what credible, knowledgeable experts are saying on things that they themselves lack the expertise to understand but that are important for the wellbeing of themselves and others.

Whether the subjects viewed experts as credible and trustworthy, however, was highly sensitive to their tacit perception of the experts’ cultural values. This didn’t actually have much impact on subjects’ risk perceptions--unless they were exposed to alignments of arguments and (culturally identifiable) experts that gave them reason to think the issue was one that pit members of their group against another in a pattern that reinforced the subjects’ own cultural predispositions toward the HPV vaccine. That’s when the subjects became massively polarized.

That’s the situation, moreover, that people in the world saw, too. From the moment culturally diverse citizens first tuned in, the signal they were getting on the science-communication frequency of their choice was that “they say this; we, on the other hand, really know that.” 

Under these conditions, the manner in which people evaluate risk is psychologically equivalent to the one in which fans of opposing football teams form their impressions of whether the receiver who caught the last-second, hail-Mary pass was out of bounds or in.  Anyone who thinks this is the right way for people to engage information of consequence to their collective well-being—or who thinks that people actually want to form their beliefs this way—is a cretin, no matter what he or she believes about the HPV vaccine.

4. An avoidable “accident”

There was nothing necessary about the HPV vaccine disaster.  The HPV vaccine took a path different from the ones travelled by the H1N1 vaccine in 2009, and by the HBV vaccine in 1995 to the present, as a result of foreseeably bad decisions, stemming from a combination of strategic behavior, gullibility, and collective incapacity.

Information about the risks and benefits of HPV vaccine came bundled with facts bearing culturally charged resonances. It was a vaccine for 11-12 year old girls to prevent contraction of a sexually transmitted disease.  There was a proposal to make the vaccine mandatory as a condition of school enrollment.  The opposing stances of iconic cultural antagonists were formed in response to (no doubt to exploit the conflictual energy of) the meanings latent in these facts—and their stances became cues for ordinary, largely apolitical individuals of diverse cultural identities.

These conditions were all an artifact of decisions Merck self-consciously made about how to pursue regulatory approval and subsequent marketing of Gardasil. It sought approval of the vaccine for girls and young women only in order to invoke “fast track” consideration by the FDA. It thereafter funded—orchestrated, in a manner that shielded its own involvement—the campaign to promote adoption of mandatory vaccination programs across the states.  To try to “counterspin” the predictable political opposition to the vaccine, it hired an inept sock puppet—“Oops!”—whose feebly scripted performance itself enriched the cultural resources available to those seeking to block the vaccine.

Had Merck not sought fast-track approval and pushed aggressively for quick adoption of mandatory vaccination programs, the FDA would have approved the vaccine for males and females just a few years later, insurance companies plus nongovernmental providers would have furnished mechanisms for universal vaccination sufficient to fill in any gaps in state mandates, which would have been enacted or not by state public health administrators largely removed from politics. Religious groups—which actually did not oppose FDA approval of the HPV vaccine but only the proposal to mandate it—wouldn’t have had much motivation or basis for opposing such a regime.

As a result, parents would have learned about the risk and benefits of the HPV vaccine from medical experts of their own choosing—ones chosen by them, presumably, because they trusted them—without the disorienting, distracting influence of cultural conflict. They would have learned about it, in other words, in the same conditions as the ones in which they now encounter the same sort of information on the HBV and other vaccines. That would have been good for them.

But it wouldn’t have been good for Merck. For by then, GlaxoSmithKline’s alternative vaccine would have been ready for agency approval, too, and could have competed free of the disadvantage of what Merck hoped would be a nationwide set of contracts to supply Gardasil to state school systems.

Is this 20/20 hindsight? Not really; it is what many members of the nation’s public health community saw at the time. Many who supported approval of Gardasil still opposed mandatory vaccination, both on the grounds that it was not necessary for public health and likely to backfire. Even many supporters of such programs—writing in publications such as the New England Journal of Medicine—conceded that “vaccination mandates are aimed more at protecting the vaccinee than at achieving herd immunity”—the same economic-subsidy rationale that was deemed decisive for mandating HBV vaccination.

These arguments weren’t rejected so much as never even considered meaningfully. Those involved in the FDA and CDC approval process weren’t charged with and didn’t have the expertise to evaluate how the science communication environment would be affected by the conditions under which the vaccine was introduced.

So in that sense, the disaster wasn’t their “fault.” It was, instead, just a foreseeable consequence of not having a mechanism in our public health system for making use of the intelligence and judgment at our disposal for dealing with science communication problems that are actually foreseen.

Whose fault will it be if this happens again?

5. Wasted knowledge

The likely “public acceptance” of an HPV vaccine was something that public health researchers had been studying for years before Gardasil was approved. But the risk that public acceptance would be undermined by a poisonous science communication environment was not something that those researchers warned anyone about. 

Instead, they reported (consistently, in scores of studies) that acceptance would turn on parents’ perceptions of the cost of the vaccine, its health benefits, and its risks, all of which would be shaped decisively by parents’ deference to medical expert opinion. 

This advice was worse than banal; it was disarmingly misleading. Public health researchers anticipated that a vaccine would be approved only if effective and not unduly risky, and that it would be covered by insurance and economically subsidized by the government. Those were reasonable assumptions. What wasn’t reasonable was the fallacious conclusion (present in study after study) that therefore all public health officials would have to do to promote “public acceptance” was tell people exactly these things. 

Things don’t work that way. And I’m not announcing any sort of late-breaking, hot-off-the-press-of-Nature-or-Science-or-PNAS news when I say that.

Social psychology and related disciplines are filled with knowledge about the conditions that determine how ordinary, intelligent people make sense of information about risk and identify whom they can trust, and when, to give them expert advice.  The public health literature is filled with evidence of the importance of social influences on public perceptions of risks—e.g., those associated with unsafe sex and smoking. 

That knowledge could have been used to generate insight that public health officials could have used to forecast the impact of introducing Gardasil in the way it was introduced.

It wasn’t. That scientific knowledge on science communication was wasted. As a result, much of the value associated with the medical science knowledge that generated Gardasil has been wasted too. 

Session reading list.

Sunday
Jan202013

What inferences can be drawn from *empirical evidence* about the science-communication impact of using the term "climate change denier"?

Andy "dotearth" Revkin, the Hank Aaron of environmental-science journalism, posted this question after a colloquy with other thoughtful science communicators. Andy apparently was moved to ask it after observing a talk on climate change by "science guy" Bill Nye.

Here is my answer. I invite others to supplement!

As is so for climate change, sometimes positions on a risk or other policy-consequential fact become publicly recognizable symbols of membership in opposing cultural groups. When that happens, members of those groups are likely to judge the expertise of any science communicator who is addressing that risk based on whether they see him or her as aligned with or hostile to their own group.  E.g., see  

1. Corner, A., Whitmarsh, L., & Xenias, D. (2012). Uncertainty, scepticism and attitudes towards climate change: biased assimilation and attitude polarisation. Climatic Change, 1-16. doi: 10.1007/s10584-012-0424-6

2. Kahan, D., Braman, D., Cohen, G., Gastil, J., & Slovic, P. (2010). Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law and Human Behavior, 34(6), 501-516.

3. Kahan, D. M., Jenkins-Smith, H., & Braman, D. (2011). Cultural Cognition of Scientific Consensus. J. Risk Res., 14, 147-174.

This helps explain why even people who are pro-science & who believe science should inform public policy generally can polarize on a policy-consequential fact that admits of scientific evidence (an effect that persists even among highly science literate members of opposing groups).

Accordingly, whether or not he "alienates" anyone, I think when someone like Bill Nye speaks about "climate change deniers" he creates the foreseeable risk that many ordinary people, including many reflective and open-minded ones, will not view him as credible. "Climate denial," for them, is likely to be a cue that causes them to perceive Nye (perhaps rightly, but perhaps wrongly) as aligned with a cultural group that harbors animosity toward their own. They will thus not view him as a genuine (or at least not as a trustworthy) "expert" but instead see him as a partisan.  Consistent with Brendan Nyhan's recent study, exposure to Nye's advocacy might even intensify the strength with which ordinary people are committed to the position he is attacking.

These are conjectures, extrapolations from the results of studies that are in effect models of how people process information in such settings.  One could test my view by taking a recording of Nye's remarks and showing it to a general population sample. If those who observed him became more culturally polarized relative to a control group who didn't see Nye's remarks, that would be evidence supportive of the hypothesis I just offered, whereas if they didn't polarize or even started to converge relative to the control group, that would be evidence the other way.  I'm happy to advise or collaborate w/ anyone who would like to do the study (including Bill Nye, provided he gives me one of his cool ties).

Such a test would still only be a model, btw, from which conclusions about how to talk to whom about what (assuming one actually wants to have a meaningful exchange of ideas with someone) would still depend on inferences reflecting information, evidence, beliefs, etc. independent of the study itself. That's the way things are, always and on everything that one can study with empirical methods (this is obvious but it bears repeating -- over & over & over -- because many people have the unscientific view that scientific studies "prove/disprove" propositions & "demonstrate" the wisdom of courses of action in some way that obviates the need to rely on judgment and reason, not to mention the need ever to consider any more evidence ever again).

Tuesday
Jan152013

Yale University "Science of Science Communication" course

Am teaching this course this semester:

PSYC 601b. The Science of Science Communication. The simple dissemination of valid scientific knowledge does not guarantee it will be recognized by nonexperts to whom it is of consequence. The science of science communication is an emerging, multidisciplinary field that investigates the processes that enable ordinary citizens to form beliefs consistent with the best available scientific evidence, the conditions that impede the formation of such beliefs, and the strategies that can be employed to avoid or ameliorate such conditions. This seminar will survey, and make a modest attempt to systematize, the growing body of work in this area. Special attention will be paid to identifying the distinctive communication dynamics of the diverse contexts in which nonexperts engage scientific information, including electoral politics, governmental policymaking, and personal health decision making. 

Here's a "manifesto" of sorts, which comes from course syllabus:

1. Overview. The most effective way to communicate the nature of this course is to identify its motivation.  We live in a place and at a time in which we have ready access to information—scientific information—of unprecedented value for our individual and collective welfare. But the proportion of this information that is effectively used—by individuals and by society—is shockingly small. The evidence for this conclusion is reflected in the manifestly awful decisions people make, and outcomes they suffer as a result, in their personal health and financial planning. It is reflected too not only in the failure of governmental institutions to utilize the best available scientific evidence that bears on the safety, security, and prosperity of its members, but in the inability of citizens and their representatives even to agree on what that evidence is or what it signifies for the policy tradeoffs that acting on it necessarily entails.

This course is about remedying this state of affairs. Its premise is that the effective transmission of consequential scientific knowledge to deliberating individuals and groups is itself a matter that admits of, and indeed demands, scientific study.  The use of empirical methods is necessary to generate an understanding of the social and psychological dynamics that govern how people (members of the public, but experts too) come to know what is known to science. Such methods are also necessary to comprehend the social and political dynamics that determine whether the best evidence we have on how to communicate science becomes integrated into how we do science and how we make decisions, individual and collective, that are or should be informed by science.

Likely you get this already: but this course is not simply about how scientists can avoid speaking in jargony language when addressing the public or how journalists can communicate technical matters in comprehensible ways without mangling the facts.  Those are only two of many "science communication problems," and as important as they are, they are likely not the ones in most urgent need of study (I myself think science journalists have their craft well in hand).  Indeed, in addition to dispelling (assaulting) the fallacy that science communication is not a matter that requires its own science, this course will self-consciously attack the notion that the sort of scientific insight necessary to guide science communication is unitary, or uniform across contexts—as if the same techniques that might help a modestly numerate individual understand the probabilistic elements of a decision to undergo a risky medical procedure were exactly the same ones needed to dispel polarization over climate science! We will try to individuate the separate domains in which a science of science communication is needed, and take stock of what is known, and what isn’t but needs to be, in each.

The primary aim of the course comprises these matters; a secondary aim is to acquire a facility with the empirical methods on which the science of science communication depends.  You will not have to do empirical analyses of any particular sort in this class. But you will have to make sense of many kinds.  No matter what your primary area of study is—even if it is one that doesn’t involve empirical methods—you can do this.  If you don’t yet understand that, then perhaps that is the most important thing you will learn in the course. Accordingly, while we will not approach study of empirical methods in a methodical way, we will always engage critically the sorts of methods that are being used in the studies we examine, and from time to time I will supplement readings with more general ones relating to methods.  Mainly, though, I will try to enable you to see (by seeing yourself and others doing it) that apprehending the significance of empirical work depends on recognizing when and how inferences can be drawn from observation: if you know this, you can learn whatever more is necessary to appreciate how particular empirical methods contribute to insight; if you don’t know this, nothing you understand about methods will furnish you with reliable guidance (just watch how much foolishness empirical methods separated from reflective, grounded inference can involve).

Will post course info & weekly reading lists (not readings themselves, sadly, since they consist mainly of journal articles that it would violate Yale University licensing agreement for me to distribute hither & yon; I certainly don't want the feds coming down on me for the horrible crime of making knowledge freely available!)

First session was yesterday & topic was HPV vaccine. It was a great class.  Plan to post some reflections on reading & discussion soon. But have to go running now!

 

Friday
Jan112013

Amazingly cool & important article on virulence of ideologically motivated reasoning

Political psychologist Brendan Nyhan and his collaborators Jason Reifler & Peter Ubel just published a really cool paper in Medical Care entitled “The Hazards of Correcting Myths About Health Care Reform.” It shows just how astonishingly resistant the disease of ideologically motivated reasoning is to treatment with accurate information. And like all really good studies, it raises some really interesting questions.

NRU conducted an experiment on the effect of corrections of factually erroneous information originating from a partisan source. Two groups of subjects got a news article that reported on false assertions by Sarah Palin relating to the role of “death panels” in the Obamacare national health plan.  One group received in addition a news story that reported that “nonpartisan health care experts have concluded that Palin was wrong.”  NRU then compared the perceptions of the two groups.

Well, one thing they found is that the more subjects liked Palin, the more likely they were to believe Palin’s bogus “death panel” claims.  Sure, not a big surprise.

They also found that the impact of being shown the “correction” was conditional on how much subjects liked Palin: the more they liked her, the less they credited the correction. Cool, but again not startling.

What was mind-blowing, however, was the interaction of these effects with political knowledge.  As subjects became more pro-Palin in their feelings, high political knowledge subjects did not merely discount the “correction” by a larger amount than low political knowledge ones. Being exposed to the “nonpartisan experts say Palin wrong” message actually made high-knowledge subjects with pro-Palin sentiments credit her initial false statements even more strongly than their counterparts in the “uncorrected” or control condition!
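
Schematically, the pattern is a three-way interaction. Here is a stylized simulation (my invention, not NRU's data or model specification) in which the correction lowers belief in every cell except the high-knowledge, pro-Palin one:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
corrected = rng.integers(0, 2, n)   # 1 = saw the fact-check
palin = rng.integers(0, 2, n)       # 1 = warm feelings toward Palin
know = rng.integers(0, 2, n)        # 1 = high political knowledge

belief = (0.4 - 0.2 * corrected              # correction helps on average ...
          + 0.2 * palin
          + 0.5 * corrected * palin * know   # ... but backfires in this one cell
          + rng.normal(scale=0.1, size=n))

for p in (0, 1):
    for k in (0, 1):
        cell = (palin == p) & (know == k)
        m0 = belief[cell & (corrected == 0)].mean()
        m1 = belief[cell & (corrected == 1)].mean()
        print(f"palin={p} know={k}: uncorrected {m0:.2f} -> corrected {m1:.2f}")
```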

The most straightforward interpretation is that for people who have the sort of disposition that “high political knowledge” measures,  the “fact check”-style correction itself operated as a cue that the truth of Palin's statements was a matter of partisan significance, thereby generating unconscious motivation in them to view her statements as true.

That’s singularly awful.

There was already plenty of reason to believe that just bombarding people with more and more “sound information” doesn’t neutralize polarization on culturally charged issues like climate change, gun control, nuclear power, etc. 

There was also plenty of reason to think that individuals who are high in political knowledge are especially likely to display motivated reasoning and thus to be especially resistant to a simple “sound information” bombardment strategy.

But what NRU show is that things have become so bad in our polarized society that trying to correct partisan-motivated misperceptions of facts can actually make things worse!  Responding to partisan misinformation with truth is akin to trying to douse a grease fire with water!

But really, I’d say that the experiment shows only potentially how bad things can get.

First, the NRU experimental design, like all experimental designs, is a model of real-world dynamics.  I’d say the real-world setting it is modeling is one in which an issue is exquisitely fraught; Palin & Obamacare are each flammable enough on their own, so when you mix them together you’ve created an atmosphere just a match strike away from an immense combustion of ideologically motivated reasoning.

Still, there is plenty of reason to believe that there are conditions, issues, etc.  like that in the world. So the NRU model gives us reason to be very wary of rushing around trying to expose “lies” as a strategy for correcting misinformation.  At least sometimes, the study cautions, you could be playing right into the misinformer’s hands.

Actually, I think that this is the scenario on the mind of those who’ve reacted negatively to the proposed use of climate change “truth squads”—SWAT teams of expert scientists who would be deployed to slap down every misrepresentation made by individuals or groups who misrepresent climate science.  The NRU study gives more reason to think those who didn’t like this proposal were right to think this device would only amplify the signal on which polarization feeds.

Second, interpreting NRU, however, depends in part on what is being measured by “political knowledge.”

Measured with a civics quiz, essentially, “political knowledge” is well-known to amplify partisanship.

But why exactly?

The usual explanation is that people who are “high” in political knowledge literally just know more and hence assign political significance to information in a more accurate and reliable way. This by itself doesn’t sound so bad. People’s political views should reflect their values, and if getting the right fit requires information, then the "high" political knowledge individuals are engaged in better reasoning. Low-knowledge people bumble along and thus form incoherent views.

But that doesn’t seem satisfying when one examines how political knowledge can amplify motivated reasoning.  When people engage in ideologically motivated reasoning, they give information the effect that gratifies their values independently of whether doing so generates accurate beliefs.  Why would knowing more about political issues make people reason in this biased way?

Another explanation would be that “political knowledge” is actually measuring the disposition to define oneself in partisan terms. In that case, it would make sense to think of high knowledge as diagnostic or predictive of vulnerability to ideologically motivated reasoning. People with strong partisan identities are the ones who experience strong unconscious motivation to use what they know in a way that reinforces conclusions that are ideologically congenial.

Moreover, in that case, being low in “political knowledge” arguably makes one a better civic reasoner. Because one doesn’t define oneself so centrally with respect to one’s ideology or party membership, one gives information an effect that more reliably tracks its connection to truth.  Indeed, in NRU the “low knowledge” subjects seemed to be responding to “corrections” of misinformation in a normatively more desirable way—assuming what we desire is the reliable recognition and open-minded consideration of valid evidence. 

I would say that the “partisan identity” interpretation of political knowledge is almost certainly correct, but that the “knows more, reasons better” interpretation is likely correct too.  The theoretical framework that informs cultural cognition asserts that it is rational for people to regard politically charged information in a manner that reliably connects their beliefs to those that predominate in their group because the cost of being “out of synch” on a contentious matter is likely to be much higher than the cost of being “wrong”—something that on most political issues is costless to individuals, given how little impact their personal beliefs have on policymaking.  If so, then, we should expect people who “know more” and “reason better” to be more reliable in “figuring out” what the political significance of information is—and thus more likely to display motivated reasoning.
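
A back-of-envelope calculation, with invented numbers, shows why the asymmetry is so stark:

```python
# All numbers invented for illustration.
p_social_sanction = 0.10    # chance that dissent strains ties with one's group
cost_social = 1000.0        # value of the standing, trust, etc. at risk
p_pivotal = 1e-8            # chance one's personal belief changes policy
cost_bad_policy = 1e6       # harm to the individual if the policy is wrong

cost_of_dissent = p_social_sanction * cost_social   # expected cost: 100.0
cost_of_error = p_pivotal * cost_bad_policy         # expected cost: 0.01
print(f"expected cost of dissent {cost_of_dissent:.2f} vs. of error {cost_of_error:.2f}")
```

On numbers anything like these, fitting one's beliefs to one's group is individually rational even when it is collectively disastrous.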

In support of this, I’d cite two CCP studies. The first showed that individuals who have higher levels of science comprehension are more likely to polarize on climate change. The second shows that individuals who are higher in “cognitive reflection,” as measured by the CRT test, show an even greater tendency to engage in culturally or ideologically motivated reasoning when evaluating information.

These studies belie an interpretation of NRU that suggests that “low knowledge” subjects are reasoning in a higher quality way because they are not displaying motivated cognition.  In truth, higher quality reasoning makes motivated reasoning worse.

Because it is rational for people to fit their perceptions of risk and other policy-consequential facts to their identities (indeed, because this is integral to their capacity to participate in collective knowledge), the way to avert political conflict over policy-relevant science isn't to flood the political landscape with "information." It is to protect the science communication environment from the antagonistic social meanings that are the source of the conflict between the individual interest that individuals have in forming and expressing commitment to particular cultural groups and the collective one that the members of all such groups have in converging on the best available evidence of how to secure their common ends.

What gives me pause, though, is an amazingly good book that I happen to be reading right now: The Ambivalent Partisan by Lavine, Johnston & Steenbergen. LJS report empirical results identifying a class of people who don’t define themselves in strongly partisan terms, who engage in high quality reasoning (heuristic and systematic) when examining policy-relevant evidence, and who are largely immune to motivated reasoning.  

That would make these ambivalent partisans models of civic virtue in the Liberal Republic of Science. I suppose it would mean too that we ought to go on a crash program to study these people and see if we could concoct a vaccine, or perhaps a genetic modification procedure, to inculcate these dispositions in others. And more seriously still (to me at least!), such findings might suggest that I need to completely rethink my understanding of  cultural cognition as integral to rational engagement with information at an individual level. . . . I will give a fuller report on LJS in due course.

I can report for now, though, that NRU & LJS have both enhanced my knowledge and made me more confused about things I thought I was figuring out. 

Important contributions to scholarly conversation tend to have exactly that effect!

 References

Delli Carpini, M.X. & Keeter, S. What Americans Know About Politics and Why It Matters. (Yale University Press, New Haven; 1996).

Hovland, C.I. & Weiss, W. The Influence of Source Credibility on Communication Effectiveness. Public Opin Quart 15, 635-650 (1951-52).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D. Ideology, Cognitive Reflection, and Motivated Cognition, CCP Working Paper No. 107 (Nov. 29, 2012).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Lavine, H., Johnston, C.D. & Steenbergen, M.R. The Ambivalent Partisan: How Critical Loyalty Promotes Democracy. (Oxford University Press, New York, NY; 2012).

Nyhan, B., Reifler, J. & Ubel, P.A. The Hazards of Correcting Myths About Health Care Reform. Medical Care (published ahead of print 2012), doi: 10.1097/MLR.0b013e318279486b.

Zaller, J.R. The Nature and Origins of Mass Opinion. (Cambridge Univ. Press, Cambridge, England; 1992).

 

Thursday
Jan102013

An interesting story: on whether "strengthening self-defense law deters crime"


Scholars in the social sciences and related disciplines (including law) often circulate “working papers”—basically, rough drafts of their articles. The main reason is to give other scholars a chance to read and offer comments, which authors can then use to improve their work.

Scholars value the chance to make their papers as strong as possible before submitting them for peer review. And they for sure don’t want to end up publishing something that later is shown to be flawed.

In response to a recent blog, a commenter called my attention to a draft paper that reports the results of a study of “stand your ground” laws. These laws provide that a person who honestly and reasonably believes that he or she faces an imminent threat of death or great bodily harm doesn’t have to retreat before resorting to deadly force in self-defense.  Numerous states have enacted such laws in the last decade in response to a campaign orchestrated by the National Rifle Association to promote their adoption.

The study investigates a really interesting question: what effect did enacting a “stand your ground” law have in states that had previously imposed a “duty to retreat”—ones, in other words, that before had restricted the right to use deadly force to circumstances in which a person could not have been expected to escape an attack by fleeing? As the authors (economists, by training) put it:

These laws alter incentives in two important ways. First, the laws reduce the expected cost of using lethal force. . . . In addition, the laws increase the expected cost of committing violent crime, as victims are more likely to respond by using lethal force.  The purpose of our paper is to examine empirically whether people respond to these incentives, and thus whether the laws lead to an increase in homicide, or to deterrence of crime more generally.

Using multivariate regression analysis, the study found that homicides went up in these states. The “stand your ground” standard, in other words, makes people less safe, not more.

This finding has received considerable media attention, in large part because a debate has been raging about the impact of “stand your ground” laws on homicide rates since the murder of Trayvon Martin in Florida last spring.

There’s only one problem. The majority of the states that enacted “stand your ground” laws already permitted citizens to use deadly force to repel a lethal attack regardless of the possibility of safe retreat.  The law in these states didn’t change when they enacted the statutes.

The paper lists 21 states in which, it says, enactment of “stand your ground” laws “remove[d] [the] duty to retreat ... outside the home.”

Not true—or less than 50% true, in any case.


I’ve prepared a list (click on the thumbnail to inspect it) that identifies pre-“stand your ground” law judicial decisions (self-defense is one of those legal doctrines that traditionally has gotten worked out by judges) in 11 of these states. They all indicate clearly that a person needn’t retreat before resorting to deadly force to repel a potentially lethal assault in a public place. (Do realize my research wasn't exhaustive, as it would be if I were writing an academic paper as opposed to a blog post!)

But hey, put scholarly errors aside for a second. There’s an interesting story here, and I can’t resist sharing it with you!

The traditional “common law” doctrine of self-defense that U.S. states inherited from England was that a person had a duty to “retreat to the wall” before using deadly force against another. But in the late 19th Century and early 20th, many U.S. states in the South and West rejected this position and adopted what became known as the “true man” doctrine. 

The idea was that a man whose character is true—that is, straight, not warped; as in “true beam”—appropriately values his own liberty and honor more than the life of a person who wrongfully attacks him in a public place.  Punishing an honorable man for behaving honorably, one of the early authorities explained, is contrary to “the tendency of the American mind” (Beard v. United States, 158 U.S. 550, 561 (1895) (Harlan, J.) (quoting Erwin v. State, 29 Ohio St. 186, 193, 199 (1876))).

“It is true, human life is sacred, but so is human liberty,” another court explained (State v. Bartlett, 71 S.W. 148, 152 (Mo. 1902)):

One is as dear in the eye of the law as the other, and neither is to give way and surrender its legal status in order that the other may exclusively exist, supposing for a moment that such an anomaly to be possible. In other words, the wrongful and violent act of one man shall not abolish or even temporarily suspend the lawful and constitutional right of his neighbor. And this idea of the nonnecessity of retreating from any locality where one has the right to be is growing in favor, as all doctrines based upon sound reason inevitably will . . . . [No] man, because he is the physical inferior of another, from whatever cause such inferiority may arise, is, because of such inferiority, bound to submit to a public horsewhipping. We hold it a necessary self-defense to resist, resent, and prevent such humiliating indignity, — such a violation of the sacredness of one’s person, — and that, if nature has not provided the means for such resistance, art may; in short, a weapon may be used to effect the unavoidable necessity.

Yikes! Many jurists and commentators, particularly in the Northeast, found this reasoning repulsive.  “The ideal of the[] courts” that have propounded the “true man” doctrine, explained Harvard Law Professor Joseph Beale in 1903 (Retreat from a Murderous Assault, 16 Harv. L. Rev. 567 (1903)),

is found in the ethics of the duelist, the German officer, and the buccaneer. . . .  The feeling at the bottom of [the rule] is one beyond all law; it is the feeling which is responsible for the duel, for war, for lynching; the feeling which leads a jury to acquit the slayer of his wife’s paramour; the feeling which would compel a true man to kill the ravisher of his daughter.  We have outlived dueling, and we deprecate war and lynching; but it is only because the advance of civilization and culture has led us to control our feelings by our will. . . A really honorable man, a man of truly refined and elevated feeling, would perhaps always regret the apparent cowardice of a retreat, but he would regret ten times more, after the excitement of the contest was past, the thought that he had the blood of a fellow-being on his hands.

This debate was realllllllly bitter and acrimonious.  I suppose the two sides disagreed about the impact of the “true man” doctrine on homicide rates. But obviously this conflict was a cultural one between groups—let’s call them hierarchical individualists and egalitarian communitarians—both of which understood courts’ adoption or rejection of the “true man” doctrine as adjudicating the value of their opposing visions of virtue and the good society.

Well, along came the amazing super-liberal superhero Justice Holmes to save the day! In a 1921 decision called Brown v. United States, 256 U.S. 335, the U.S. Supreme Court had to figure out whether the federal self-defense standard—which like defenses generally was not codified in any statute—imposed a “duty to retreat.” Holmes concluded it didn’t. But his explanation why didn’t sound at all like what the Western and Southern “true man” courts—or anyone else—was saying in the “true man” controversy.

The law has grown, and even if historical mistakes have contributed to its growth it has tended in the direction of rules consistent with human nature. . . .  Detached reflection cannot be demanded in the presence of an uplifted knife.  Therefore in this Court, at least, it is not a condition of immunity that one in that situation should pause to consider whether a reasonable man might not think it possible to fly with safety or to disable his assailant rather than to kill him.

We can’t punish the poor bastard, Holmes was saying, not because he bravely defended his honor but because the circumstances reduced him to an unreasoning mass of blind impulse.  The “true man” doctrine had become the “scared shitless man”  doctrine!

WTF? Who had won? Who had lost?  It was the result the hierarchical individualists wanted but without the meaning that the egalitarian communitarians loathed.

Holmes had rendered this issue culturally meaningless--and therefore made disputing this one aspect of the law pointless for the dueling cultural factions.

And you know what the best thing is? Holmes did this on purpose!

The truth was, Holmes personally identified with the honor norms that animated the “true man” doctrine.  It resonated with his own pride over having been part of a Civil War regiment that “never ran.”  In his famous 1884 Memorial Day Address, Holmes spoke not of the thoughtless impulses of those who survived hand-to-hand combat, but rather of the “swift and cunning thinking on which once hung life or freedom.”

Writing of the issue in Brown to his confidant Harold Laski, Holmes explained:

[L]aw must consider human nature and make some allowances for the fighting instinct at critical moments.  In Texas where this thing happened, . . . it is well settled, as you can imagine, that a man is not born to run away . . . .

Yet for Holmes the liberal jurist, the law decidedly was not a place for civil war, even when waged with the weaponry of partisan, moralistic, and largely symbolic language. Acknowledging how much less passionately he defended the no-retreat rule in Brown, Holmes tells Laski, “I don’t say all I think in the opinion.”

Holmes's gambit worked.  The law stayed as it was. But because the “no retreat” principle no longer had any clear cultural resonance, people stopped fighting about it (and focused their attention elsewhere: e.g., on guns, and nuclear power, and climate change).

Until . . . the NRA, a tapeworm of cognitive illiberalism, got a brilliantly evil idea: Mount a campaign in Southern and Western states to get “stand your ground” laws passed!

Sure, these new statutes wouldn’t actually change the law. But that wasn’t the point of them.

The point was to reignite the cultural conflagration that Holmes had snuffed out. By enacting these laws, the NRA predictably provoked today’s egalitarian communitarians, who denounced the laws as certain to unleash a torrent of death and carnage.

That sort of response is really good for the NRA. It gets today’s hierarchical individualists very mad, which makes them give lots of money to the NRA to strike back against the insults being hurled at them!

The sort of media coverage of the study that is the subject of this post is very welcome PR fodder for the NRA too. 

Sigh; where is our Holmes?

But . . . back to the paper!

I’d say the study’s mistaken premise—that the “law changed” in the “stand your ground” states—rises to the level of a serious flaw.  The authors didn’t measure what they thought they were measuring. The thing that their complexly structured statistical model says “caused” something—a change in the law in 21 states—didn’t happen in at least half of those states.

I’m not really sure, in all honesty, that this problem can be fixed. The commenter who brought the article to my attention wondered if maybe the authors could argue that even though the law didn’t change in so many of the “stand your ground” law states, the enactment of these symbolic laws put citizens who previously didn’t know the law on notice that they didn’t have to retreat and that’s what “explains” the homicide rate going up. 

Interesting, but I myself would feel queasy even attempting this sort of rescue mission here.  If one discovers that what one measured isn’t what one thought, it’s pretty dubious to invent a hypothesis that fits the result one nevertheless managed to find. That’s not materially different, in my view, from just poking around in data and concocting a story after the fact for whatever happened to be significant. But maybe that's just me.

Here’s another interesting thing, though.  While they might have forgotten (or simply never recognized) the heroic liberal statesmanship of Justice Holmes, lawyers, judges, law students and anyone else who had happened to pick up any basic text on criminal law knew that the “true man” doctrine was widespread—indeed, declared by many commentators and courts to be the “majority rule” in the U.S. Naturally, it occurred to scholars long before now to examine whether this position is linked to homicide rates in the (mainly) Southern & Western states that follow it.

Way back in 1996, the first-rate scholars Nisbett & Cohen wrote a great book, Culture of Honor: The Psychology of Violence in the South, presenting empirical evidence that the “no retreat” standard, along with other manifestations of cultural honor norms, was linked to high homicide rates in the South.

The authors’ very rough draft doesn’t mention Nisbett & Cohen either. If they tried to deal with this now, what would they say? That the “true man” doctrine made homicide rates higher than in “duty to retreat” states, and yet the “stand your ground” laws made them go up higher still? Was there some dip in the middle? Perhaps between 1994 and 2000, people momentarily “forgot” what the law in their states was, and were only reminded again by the new “stand your ground” laws? ...

But I myself think it is really not sensible to even try to make sense of results generated by a statistical model that rests on a mistaken factual premise.

Of course, these are matters for the authors to consider. I'm sure they are relieved they circulated their working paper so that they will now have an opportunity to think about these difficulties.

References

Brown, R.M. No Duty to Retreat: Violence and Values in American History and Society (1991).

Kahan, D.M. The Secret Ambition of Deterrence. Harv. L. Rev. 113, 413 (1999).

Kahan, D.M. & Nussbaum, M.C. Two Conceptions of Emotion in Criminal Law. Colum. L. Rev. 96, 269 (1996).

Nisbett, R.E. & Cohen, D. Culture of Honor: The Psychology of Violence in the South (1996).

White, G.E. Justice Oliver Wendell Holmes: Law and the Inner Self. (Oxford University Press, New York; 1993).

 

Monday
Jan072013

Cultural vs. ideological extremists: the case of gun control

Look what those nut job socialists & libertarians are saying now: that if we really want to reduce gun homicides—including the regular shooting of children on street corners in cities like Chicago—we should select one of the myriad sensible alternatives to our current "war on drugs," which predictably spawns violent competition to control a lucrative black market without doing much of anything to reduce either the supply or the demand for banned substances.

They just don’t get it!

So what if an expert consensus report from the National Academy of Sciences “found no credible evidence that the passage of right-to-carry laws decreases or increases violent crime.” Big deal that a Centers for Disease Control task force “found insufficient evidence to determine the effectiveness of any of the firearms laws reviewed”—including waiting periods, ammunition bans, child access prevention laws, and “gun free school zones”—“for preventing violence.”

Who cares that the best available evidence clearly suggests, in contrast, that there are myriad steps we could take (“wholesale legalization” vs. “wholesale criminalization” is a specious dichotomy) that would very appreciably reduce the number of homicides associated with the criminogenic property of our own drug-law enforcement policies?

The point isn’t to save lives! It’s to capture the expressive capital of the law.

Their role (real and fabled) in American history—in overthrowing tyranny and in perpetuating conditions of slavery and apartheid; in taming the frontier and in assassinating Presidents—has imbued guns with a rich surfeit of social meanings. Wholly apart, then, from the effect gun laws have (or don’t) on homicide, they convey messages that symbolically affirm and denigrate opposing cultural styles.

We are a liberal democratic society, comprising a plurality of diverse moral communities. The individual liberty provisions of our Constitution forbid the State to “enforce … on the whole society” standards of “private conduct” reflecting any one community’s “conceptions of right and acceptable behavior.”

So for crying out loud, how will we possibly be able to use State power to resolve whose way of life is virtuous and honorable and whose vicious and depraved if we don’t fixate on laws that have ambiguous public-welfare consequences but express unambiguously partisan cultural meanings?

What’s that? You say that the “war on drugs” should also be viewed as an exercise of expressive power aimed at enforcing a cultural orthodoxy?

Of course. But the partisan meanings that are expressed by those laws are ones that only “ideological extremists”—libertarians, socialists, et al.—would object to.

References

Centers for Disease Control. First Reports Evaluating the Effectiveness of Strategies for Preventing Violence: Firearms Laws, Findings from the Task Force on Community Preventive Services (2003).

Jacobs, J.B. Can Gun Control Work? (Oxford University Press, Oxford; New York; 2002).

Kahan, D.M. Cognitive Bias and the Constitution of the Liberal Republic of Science, working paper, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2174032.

Kahan, D.M. The Cognitively Illiberal State. Stan. L. Rev. 60, 115-154 (2007).

Kahan, D.M. & Braman, D. More Statistics, Less Persuasion: A Cultural Theory of Gun-Risk Perceptions. U. Pa. L. Rev. 151, 1291-1327 (2003).

Kleiman, M. Marijuana: Costs of Abuse, Costs of Control. (Greenwood Press, New York; 1989).

Kleiman, M., Caulkins, J.P. & Hawken, A. Drugs and Drug Policy: What Everyone Needs to Know. (Oxford University Press, Oxford; New York; 2011).

MacCoun, R.J. & Reuter, P. Drug War Heresies: Learning from Other Vices, Times, and Places. (Cambridge University Press, Cambridge, U.K.; New York; 2001).

Musto, D.F. The American Disease: Origins of Narcotic Control (Expanded ed.). (Oxford University Press, New York; 1987).

National Research Council, Committee to Improve Research Information and Data on Firearms (Wellford, C.F., Pepper, J. & Petrie, C., eds.). Firearms and Violence: A Critical Review. (National Academies Press, Washington, DC; 2004).

 

 

Saturday
Jan052013

Are *positions* on the deterrent effect of the death penalty & gun control possible & justifiable? Of course!

So I started to answer one of the interesting comments in response to the last post & found myself convinced that the issues involved warranted their own post. So this one "supplements" & "adjusts" the last.

And by the way, I anticipate "supplementing" & "adjusting" everything I have ever said and ever will say.  If you don't see why that's the right attitude to have, then probably you aren't engaged in the same activity I am (which isn't to say that I plan to supplement & adjust every blog post w/ another; that's not the "activity" I mean to be involved in, but rather a symptom of something that perhaps I should worry about, and you too since you seem to be reading this).

Here's the question (from JB):

I'm puzzled about how the NRC dealt with Figure 2 in this paper, the "Canada graph" of Donohue and Wolfers. This is not multiple regression. (I agree that multiple regression is vastly over-used and that statistical control of the sort it attempts to do is much more difficult, if not impossible in many situations). But this graph settled the issue for me. It is not a regression analysis. . . .

Here's my answer:

@JB: The answer (to the question, what did the NRC say about Fig. 2 in D&W) is . . . nothing. Virtually nothing!

As you note, this is not the sort of multivariate regression analysis that the NRC's expert panel on the death penalty had in mind when it “recommend[ed] that these studies not be used to inform deliberations requiring judgments about the effect of the death penalty on homicide.”

Your calling attention to this cool Figure furnishes me with an opportunity to supplement my post in a manner that (a) corrects a misimpression that it could easily have invited; and (b) makes a point that is just plain important, one I know you know but I want to be sure others who read my post do too.

The NRC reports are saying that a certain kind of analysis—the one that is afforded the highest level of respect by economists; that’s an issue they really should talk about—is not valid in this context. In this context—deterrence of homicide by criminal law (whether gun control or capital punishment)—these studies don’t give us any more or less reason to believe one thing or the other.

But that doesn’t mean that it is pointless to think about deterrence, or unjustifiable for us to have positions on it, when we are deliberating about criminal laws, including gun control & capital punishment! 

Two points:

First, just because one empirical method turns out to have a likelihood ratio of 1 doesn’t mean all forms of evidence have LR = 1!

You say, “hey, look at this simple comparison: our homicide rate & Canada’s are highly correlated notwithstanding how radically the two nations differ in the use of the death penalty over time. That's pretty compelling!”

I think you would agree with me that this evidence doesn’t literally “settle the issue.”  We know what people who would stand by their regression analyses (and others who merely wish those sorts of analyses could actually help) would say. Things like ...

  • maybe the use of the death penalty is what kept the homicide rate in the US in “synch” with the Canadian one (i.e., w/o it, the U.S. rate would have accelerated relative to Canada, due to exogenous influences that differ in the 2 nations);
  • maybe when the death penalty isn’t or can’t be (b/c of constitutional prohibition) used, legislators "make up the difference" by increasing the certainty of other, less severe punishments, and it is still the case that we can deter for "less" by adding capital punishment to the mix (after getting rid of all the cost-inflating, obstructionist litigation, of course);
  • maybe the death penalty works as James Fitzjames Stephen imagined—as a preference-shaping device—and Canadians, b/c they watch so much U.S. TV, are morally moulded by our culture (in effect, they are free riding on all our work to shape preferences through executing our citizens—outrageous);
  • variation in US homicide rates in response to the death penalty is too fine-grained to be picked up by these data, which don’t rule out that the U.S. homicide rate would have decelerated in relation to Canada’s if the US had used capital punishment more frequently after Gregg;
  • the Donohue and Wolfers chart excludes hockey-related deaths resulting from player brawls and errant slapshots that careen lethally into the stands, and thus grossly understates the homicide rate in Canada (compare how few players and fans have been killed by baseball since Gregg!);
  • etc. etc. etc.   

These are perfectly legitimate points, I’d say. But what is the upshot?

They certainly don’t mean that evidence of the sort reflected in Fig. 2 is entitled to no weight—that its "Likelihood Ratio = 1."  Someone who thinks that that’s how empirical proof works—that evidence either “proves” something “conclusively” or “proves nothing, because it hasn’t ruled out all alternative explanations”—is “empirical-science illiterate” (we need a measure for this!).

These points just give us reason to understand why, for the data in Fig. 2, LR ≠ ε (if the hypothesis is “the death penalty deters”; or, if the hypothesis is “the death penalty doesn’t,” why LR ≠ ∞).

I agree with you that Fig. 2 has a pretty healthy LR—say, 0.2, if the hypothesis is “the death penalty deters”—which is to say that I believe the correlation between U.S. and Canadian homicide rates is “5 times more consistent with” the alternative hypothesis (“doesn’t deter”).

And, of course, this way of talking is all just a stylized way of representing how to think about this—I’m using the statistical concept of “likelihood ratio” & Bayesianism as a heuristic. I have no idea what the LR really is, and I haven’t just multiplied my “priors” by it.

But I do have an idea (a conviction, in fact) about the sensible way to make sense of empirical evidence. It's that it should be evaluated not as "proving" things but as supplying more or less reason to believe one thing or another. So when one is presented with empirical evidence, one shouldn't say either "yes, game over!" or "pfffff ... what about this, that & the other thing ..." but rather should supplement & adjust what one believes, and how confidently, after reflecting on the evidence for long enough to truly understand why it supports a particular inference and how strongly.
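For concreteness, here is a minimal sketch of the Bayesian bookkeeping I'm gesturing at, in odds form (the priors are invented for illustration; the LR of 0.2 is just the stylized value from the paragraph above):

```python
# Odds-form Bayesian updating: posterior odds = prior odds * LR.
# H = "the death penalty deters"; LR = 0.2 means the Fig. 2 evidence is
# "5 times more consistent with" the rival hypothesis ("doesn't deter").

def update(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

print(update(0.5, 0.2))  # 0.167: an agnostic should now lean "doesn't deter"
print(update(0.5, 1.0))  # 0.500: evidence with LR = 1 moves no one, ever
print(update(0.9, 0.2))  # 0.643: even a believer should become less confident
```

The point of the exercise isn't the arithmetic; it's that evidence short of "conclusive" still obliges everyone, whatever their priors, to move.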

Second, even when we recognize that an empirical proposition relevant to a policy matter admits of competing, plausible conjectures (they don't have to be “equally plausible”; only an idiot says that the “most plausible thing must be true!”), and that it would be really, really nice to have more evidence w/ LR ≠ 1, we still have to do something.  And we can and should use our best judgment about what the truth is, informed by all the “valid” evidence (LR ≠ 1) we can lay our hands on.

I think people can have justifiable beliefs about the impact (or lack thereof) of gun control laws & the death penalty on homicide rates!

They just shouldn't abuse reason. 

They do that when they insist that bad statistical proofs—simplistic ones that just toss out arbitrary bits of raw data, or arbitrarily complex yet grossly undertheorized ones like “y = b1*x1 + b2*x2 + b3*x3 + ... + b75*x34^3 + ...”—“conclusively refute” or “demonstrably establish” blah blah blah.

And they do that and something even worse when they mischaracterize the best scientific evidence we do have.

Thursday
Jan032013

A Tale of (the Tales Told About) Two Expert Consensus Reports: Death Penalty & Gun Control

What is the expert consensus on whether the death penalty deters murders—or instead increases them through a cultural “brutalization effect”?

What is the expert consensus on whether permitting citizens to carry concealed handguns in the public increases homicide—or instead decreases it by discouraging violent predation?

According to the National Research Council, the research arm of the National Academy of Sciences, the expert consensus answer to these two questions is the same:

It’s just not possible to say, one way or the other.

Last April (way back in 2012), an expert NRC panel charged with determining whether the “available evidence provide[s] a reasonable basis for drawing conclusions” about the impact of the death penalty

concluded that research to date on the effect of capital punishment on homicide is not informative about whether capital punishment decreases, increases, or has no effect on homicide rates. Therefore, the committee recommends that these studies not be used to inform deliberations requiring judgments about the effect of the death penalty on homicide. Consequently, claims that research demonstrates that capital punishment decreases or increases the homicide rate by a specified amount or has no effect on the homicide rate should not influence policy judgments.

Way way back in 2004 (surely new studies have come out since, right?), the expert panel assigned to assess the “strengths and limitations of the existing research and data on gun violence,”

found no credible evidence that the passage of right-to-carry laws decreases or increases violent crime, and there is almost no empirical evidence that the more than 80 prevention programs focused on gun-related violence have had any effect on children’s behavior, knowledge, attitudes, or beliefs about firearms. The committee found that the data available on these questions are too weak to support unambiguous conclusions or strong policy statements.

The expert panels’ determinations, moreover, were based not primarily on the volume of data available on these questions but rather on what both panels saw as limitations inherent in the methods that criminologists have relied on in analyzing this evidence. 

In both areas, this literature consists of multivariate regression models. As applied in this context, multivariate regression seeks to extract the causal impact of criminal laws by correlating differences in law with differences in crime rates “controlling for” the myriad other influences that could conceivably be contributing to variation in homicide across different places or within a single place over time. 

Inevitably, such analyses involve judgment calls. They are models that, like many statistical models, must make use of imprecise indicators of unobserved and unobservable influences, the relationship of which to one another must be specified based on a theory that is itself independent of any evidence in the model.

The problem, for both the death penalty and concealed-carry law regression studies, is that results come out differently depending on how one constructs the models.
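To make the point vivid, here is a toy simulation of specification sensitivity (fabricated data, with no relation to the actual crime studies): the very same data yield a large estimated "effect" or essentially none, depending on whether one covariate is included.

```python
# Toy illustration of model-specification sensitivity. Simulated data only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
trend = rng.normal(size=n)          # stand-in for an unmeasured crime trend
law = (trend + rng.normal(size=n) > 0).astype(float)  # adoption tracks trend
homicide = 2.0 * trend + rng.normal(size=n)           # true law effect: zero

def ols(y, *covariates):
    # Ordinary least squares with an intercept; returns the coefficients.
    X = np.column_stack((np.ones(len(y)),) + covariates)
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols(homicide, law)[1])         # ~ +2.3: "the law increases homicide!"
print(ols(homicide, law, trend)[1])  # ~ 0.0: the "effect" vanishes
```

Real studies face the same choice dozens of times over, with no data-driven way to referee among the resulting specifications.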

“The specification of the death penalty variables in the panel models varies widely across the research and has been the focus of much debate,” the NRC capital punishment panel observed. “The research has demonstrated that different death penalty sanction variables, and different specifications of these variables, lead to very different deterrence estimates—negative and positive, large and small, both statistically significant and not statistically significant."

That’s exactly the same problem that the panel charged with investigating concealed-carry laws focused on:

The committee concludes that it is not possible to reach any scientifically supported conclusion because of (a) the sensitivity of the empirical results to seemingly minor changes in model specification, (b) a lack of robustness of the results to the inclusion of more recent years of data (during which there were many more law changes than in the earlier period), and (c) the statistical imprecision of the results.

This problem, both panels concluded, is intrinsic to the mode of analysis being employed. It can’t be cured with more data; it can only be made worse as one multiplies the number of choices that can be made about what to put in and what to leave out of the necessarily complex models that must be constructed to account for the interplay of all the potential influences involved.

“There is no empirical basis for choosing among these [model] specifications,” the NRC death penalty panel wrote.

[T]here has been heated debate among researchers about them.... This debate, however, is not based on clear and principled arguments as to why the probability timing that is used corresponds to the objective probability of execution, or, even more importantly, to criminal perceptions of that probability. Instead, researchers have constructed ad hoc measures of criminal perceptions. . . .

Even if the research and data collection initiatives discussed in this chapter are ultimately successful, research in both literatures share a common characteristic of invoking strong, often unverifiable, assumptions in order to provide point estimates of the effect of capital punishment on homicides.

The NRC gun panel said the same thing:

It is also the committee’s view that additional analysis along the lines of the current literature is unlikely to yield results that will persuasively demonstrate a causal link between right-to-carry laws and crime rates (unless substantial numbers of states were to adopt or repeal right-to-carry laws), because of the sensitivity of the results to model specification. Furthermore, the usefulness of future crime data for studying the effects of right-to-carry laws will decrease as the time elapsed since enactment of the laws increases. If further headway is to be made on this question, new analytical approaches and data sets will need to be used.

So, to be sure, the NRC reached its “no credible evidence” conclusion on right-to-carry laws way back in 2004. But its conclusion was based on “the complex methodological problems inherent in” regression analysis—the same methodological problems that were the basis of the NRC’s 2012 conclusion that death penalty studies are "not informative" and "should not influence policy judgments."

Nothing's changed on that score. The experts at the National Academy of Sciences either are right or they are wrong to treat multivariate regression analysis as an invalid basis for inference about the effects of criminal law.

The reasoning here is all pretty basic, pretty simple, something that any educated, motivated person could figure out by sitting down with the reports for a few hours (& who wouldn't want to do that?!).

Yet all of this has clearly evaded the understanding of many extremely intelligent, extremely influential participants in our national political conversation.

I’ll pick on the New York Times, not because it is worse than anyone else but because it’s the newspaper I happen to read every day.

Just the day before yesterday, it said this in an editorial about the NRC’s capital punishment report:

A distinguished committee of scholars convened by the National Research Council found that there is no useful evidence to determine if the death penalty deters serious crimes. Many first-rate scholars have tried to prove the theory of deterrence, but that research “is not informative about whether capital punishment increases, decreases, or has no effect on homicide rates,” the committee said.

Okay, that’s right. 

But here is what the Times’ editorial page editor said the week before last about concealed carry laws:

Of the many specious arguments against gun control, perhaps the most ridiculous is that what we really need is the opposite: more guns, in the hands of more people, in more places. If people were packing heat in the movies, at workplaces, in shopping malls and in schools, they could just pop up and shoot the assailant. . . . I see it differently: About the only thing more terrifying than a lone gunman firing into a classroom or a crowded movie theater is a half a dozen more gunmen leaping around firing their pistols at the killer, which is to say really at each other and every bystander. It’s a police officer’s nightmare. . . . While other advanced countries have imposed gun control laws, America has conducted a natural experiment in what happens when a society has as many guns as people. The results are in, and they’re not counterintuitive.

Wait a sec.... What about the NRC report? Didn’t it tell us that the “results are in” and that “it is not possible to reach any scientifically supported conclusion” on whether concealed carry laws increase or decrease crime?

I know the New York Times is aware of the NRC’s expert consensus report on gun violence. It referred to the report in an editorial just a couple days earlier.

In that one, it called on Congress to enact a national law that would require the 35 states that now have permissive “shall issue” laws—ones that mandate officials approve the application of any person who doesn’t have a criminal record or history of mental illness—to “set higher standards for granting permits for concealed weapons.”  “Among the arguments advanced for these irresponsible statutes,” it observed,

is the claim that “shall issue” laws have played a major role in reducing violent crime. But the National Research Council has thoroughly discredited this argument for analytical errors. In fact, the legal scholar John Donohue III and others have found that from 1977 to 2006, “shall issue” laws increased aggravated assaults by “roughly 3 to 5 percent each year.”

Sigh.

Yes, the NRC concluded that there was “no credible evidence” that concealed carry laws reduce crime.

But as I pointed out, what it said was that it “found no credible evidence that the passage of right-to-carry laws decreases or increases violent crime.” So why shouldn't we view the Report as also “thoroughly discrediting” the Times editorial’s conclusion that those laws “seem almost designed to encourage violence”?

And, yes, the NRC can be said (allowing for loose translation of more precise and measured language) to have found “analytical errors” in the studies that purported to show shall issue laws reduce crime. 

But those “analytical errors,” as I’ve pointed out, involve the use of multivariate regression analysis to try to figure out the impact of concealed carry laws. That’s precisely the sort of analysis used in the Donohue study that the Times identifies as finding shall issue laws increased violent crime. 

The “analytical errors” that the Times refers to are inherent in the use of multivariate regression analysis to try to understand the impact of criminal laws on homicide rates.

That’s why the NRC’s 2012 death penalty report said that findings based on this methodology are “not informative” and “should not influence policy judgments.”

The Times, as I said, got that point. But only when it was being made about studies that show the death penalty deters murder, and not when it was being made about studies that find concealed carry laws increase crime....

This post is not about concealed carry laws (my state has one; I wish it didn’t) or the death penalty (I think it is awful).

It is about the obligation of opinion leaders not to degrade the value of scientific evidence as a form of currency in our public deliberations.

In an experimental study, the CCP found that citizens of diverse cultural outlooks all believe that “scientific consensus” is consistent with the position that predominates within their group on climate change, concealed carry laws, and nuclear power.  Members of all groups were correct – 33% of the time.

How do ordinary people (ones like you & me, included) become so thoroughly confused about these things?

The answer, in part, is that they are putting their trust in authoritative sources of information—opinion leaders—who furnish them with a distorted, misleading picture of what the best available scientific evidence really is.

The Times, very appropriately, has published articles that attack the NRA for seeking to block federal funding of the scientific study of firearms and homicide.  Let’s not mince words: obstructing scientific investigation aimed at promoting society’s collective well-being is a crime in the Liberal Republic of Science.

But so is presenting an opportunistically distorted picture of what the state of that evidence really is.

The harm that such behavior causes, moreover, isn’t limited to the confusion that such a practice creates in people who (like me!) rely on opinion leaders to tell us what scientists really believe.

It includes as well the cynicism it breeds about whether claims about scientific consensus mean anything at all.  One day someone is bashing his or her opponents over the head for disputing or distorting “scientific consensus”—and the next day that same someone can be shown (incontrovertibly and easily) to be ignoring or distorting it too.

By the way, John Donohue is a great scholar, one of the greatest empirical analysts of public policy ever.

Both of the NRC expert consensus reports that I’ve cited conclude that studies he and other econometricians have done are “not informative” for policy because of what those reports view as insuperable methodological problems with multivariate analysis as a tool for understanding the impact of law on crime.

Donohue disagrees, and continues to write papers reanalyzing the data that the NRC (in its firearms study) said are inherently inconclusive because of "complex methodological problems" inherent in the statistical techniques that Donohue used, and continues to use, to analyze them.

But that’s okay.

You know what one calls a scientist who disputes “scientific consensus”?

A scientist.

But that’s for another day. 

Wednesday
Jan022013

Chewing the fat, so to speak...

I've already exhausted my allotted time for blogging in answering interesting comments related to the post on Silver's climate change wisdom. I invite others to weigh in (but not on whether Mann is a great climate scientist; see my post update on that).

In particular, I'd like help (Larry has provided a ton, but I'm greedy) on what is right/wrong/incisive/incomplete/provocative/troubling/paradoxical/inherently contradictory etc. about my statement, "Gaps between prediction and reality are not evidence of a deficiency in method. They are just evidence--information that is reprocessed as part of the method of generating increasingly precise and accurate probabilistic estimates." Also the questions of (a) how forecasting model imprecision or imperfection should affect policymaking proposals & even more interesting (given the orientation of this blog) (b) how to communicate or talk about this practical dilemma. (Contributions should be added to that comment thread.)

Two more things to think about, compliments of Maggie Wittlin:

1. Who is afraid of obesity & why?  Maggie notes "new meta-analysis finds that overweight people (and, with less confidence, people with grade 1 obesity) have a lower risk of mortality than people with BMIs in the 'normal' range" and wonders, as do I, how cultural outlooks or other sources of motivated reasoning affect reactions to evidence like this -- or of the health consequences of obesity generally.

2. Forget terrorism; we're all going to die from an asteroid. Maggie also puts my anxiety about magnitude 7-8-9 terrorism into context by pointing out that the size/energy-releasing potential of asteroid impacts on earth also follows a power-law distribution.  Given the impact (so to speak) of a civilization-destroying asteroid collision, isn't preparing to protect earth from such a fate (however improbable) yet another thing that we need to do but are being distracted from doing by DHS's rules on removing shoes at airport security-screening stations?! I could do some research, but Aaron Clauset's spontaneous & generous supply of references on the likelihood of "large" terrorism attacks makes me hope that some other generous person who knows the literature here will point us to useful sources.
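In case it helps, here's a minimal sketch of why power-law tails matter (the exponent is invented purely for illustration; see Clauset's work for empirically estimated values):

```python
# Power-law tail: P(X >= x) = (x / xmin) ** -alpha. Probability decays
# polynomially rather than exponentially, so enormous events are rare but
# never negligible. The alpha here is hypothetical, not an empirical estimate.
xmin, alpha = 1.0, 1.5

def tail_probability(x):
    return (x / xmin) ** -alpha

for magnitude in (10, 1_000, 100_000):
    print(magnitude, tail_probability(magnitude))
# 10 -> 3.2e-02, 1,000 -> 3.2e-05, 100,000 -> 3.2e-08: each 100-fold jump
# in event size costs only a factor of 1,000 in probability.
```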

Monday
Dec312012

Wisdom from Silver’s Signal & Noise, part 2: Climate change & the political perils of forecasting maturation

This is post 2 in my three part series on Silver’s Signal & Noise, which tied for first (with  Sharon Bertsch McGrayne’s The Theory That Would Not Die) in my “personal book of the year” contest (I’ve already mailed them both the quantity of gold bullion that I always award to the winner—I didn’t even divide it in half; or maybe I did, or possibly I even doubled or tripled it).

It turns out that Silver is not only amazingly good at statistical modeling & pretty decent at storytelling. He also happens to be pretty wise (obviously this is a limited sample & I'll update based on new information, etc.).

The nugget of wisdom I mined out of the book in the first post had to do with Silver’s idea that we should treat terrorist attacks a bit more like earthquakes.

This time I want to make a report on what Silver had to say about climate-change forecasting. One way to understand his assessment is that the practitioners of it are being punished for their methodological virtue. 

Silver essentially structures the book around prototypes. There’s baseball, which is to forecasting what Saudi Arabia is to oil drilling. There are elections, another data-rich field but one that gets screwed up by a combination of bad traits in those who prognosticate (they are full of themselves) and those who are consuming their prognostications (too many of them want to be told only what they want to hear).

And earthquakes—can’t be forecast, but can still yield lots of info.

And economics--a bastion of bad statistics hygiene.

Then there’s meteorology, which is the archetype of forecasting excellence because it is super hard and yet has made measurable progress (that’s much higher praise than “immeasurable,” in this context) due to the purity and discipline of its practitioners. (I’m eager to see who gets to play Richard Loft, the director of Technology Development at NCAR, in the upcoming movie adaptation of Signal; I’m guessing Pierce Brosnan, unless he is cast as Silver himself.)

In Silver’s account, climate forecasting is traveling the path of meteorology. The problem is that emulating the meteorologists obliges climate forecasters to become unwitting manufacturers of the ammunition being directed against them in the political flak storm surrounding climate change.

One of the things that the meteorology forecasters did that makes them the superheroes of Signal was calibration. They not only made prodigious quantities of predictions but also revisited and retooled their models in light of how close they came to their targets, thereby progressively improving their aim.

When climate forecasters do this—as they must—they leave themselves wide open to guerilla attack by those seeking to repel the advance of science. The reason is that error is an inevitable and indeed vitally productive element of the Bayesian-evolutionary process that characterizes the maturation of valid forecasting.

Gaps between prediction and reality are not evidence of a deficiency in method. They are just evidence, information that is reprocessed as part of the method of generating increasingly precise and accurate probabilistic estimates.
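For readers who want to see what "calibration" looks like in practice, here is a minimal sketch (synthetic forecasts standing in for real weather data): bin the stated probabilities and compare each bin's average forecast to the frequency with which the event actually occurred.

```python
# Minimal calibration check for probabilistic forecasts. Synthetic data only:
# a perfectly calibrated forecaster, so observed frequencies track the bins.
import numpy as np

rng = np.random.default_rng(1)
forecasts = rng.uniform(size=10_000)             # stated P(rain) for each day
outcomes = rng.uniform(size=10_000) < forecasts  # simulated rain / no rain

for lo in np.arange(0.0, 1.0, 0.2):
    mask = (forecasts >= lo) & (forecasts < lo + 0.2)
    print(f"forecast {lo:.1f}-{lo + 0.2:.1f}: "
          f"observed frequency {outcomes[mask].mean():.2f}")
# Systematic gaps between the two columns are not an indictment of the
# method; they are the feedback the next round of the model consumes.
```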

This is a subtle point to get across even if one is trying to help someone to actually understand how science works. But for those who are trying to confuse, the foreseeable generation of incorrect predictions furnishes a steady supply of resources with which to harass and embarrass and discredit earnest scientists.

Silver recounts this dilemma in explicating the plight of James Hansen, whose forecasts from 30 and 25 years ago were in many respects impressively good but, just as importantly, instructively wrong. Ditto for the IPCC’s 1990 predictions.

Another thing that the superhero meteorologists did right was, in effect, theorize. They enriched their data with scientific knowledge that enabled them to do things like create amazing simulations of the dynamics they were trying to make predictive sense of. As a result, they got a lot further than they would have if they had used brute statistical force alone.

Climate forecasters are doing this too, and as a result necessarily enlarging the target that they offer for political sniping.  The reason is that theory-informed modeling of dynamic systems is hard work, the payoffs of which are unlikely to accumulate steadily in a linear fashion but rather to accrue in incremental breakthroughs punctuated by periods of nothing.

Indeed, those who travel this path might well seem to make slower progress, at least temporarily, than those who settle for simpler, undertheorized number-crunching strategies, which make fewer assumptions and thus expose themselves to fewer of the sources of error that tend to compound within dynamic models. Silver notes, for example, that some of Hansen’s earlier predictions—which were in the nature of simple multivariate regressions—in some respects outperformed some of his subsequent, dynamic-simulation-driven ones.

Again, then, the virtuous forecaster will, precisely as a result of being virtuous, find him- or herself vulnerable to opportunistic hectoring, particularly by anti-science, lawyerly critics who will adroitly collect and construct number-crunching models that generated more conservative predictions and thereby outperformed the more theoretically dynamic ones over particular periods of time (including ones defined by happenstance or design to capitalize on inevitable and inevitably noisy short-term fluctuations in things like global temperatures).

Silver mentions the work of Scott Armstrong, a serious forecaster who nevertheless confines himself to simple number-crunching and consciously eschews the sort of theory-driven enrichment that was the signature of meteorology’s advancement. “I actually try not to learn a lot about climate change,” Armstrong, who is famous for his “no change” forecast with respect to global temperatures, boasts. “I am a forecasting guy” (Signal, p. 403).

“This book advises you to be wary of forecasters who say that science is not very important to their jobs,” Silver writes, just as it advises us to be skeptical toward “scientists who say that forecasting is not important to their[s] . . . . What distinguishes science, and what makes a forecast scientific, is that it is concerned with the objective world. What makes forecasts fail is when our concern only extends as far as the method, maxim, or model” (p. 403).

For Silver, the basic reason to “believe” in—and be plenty concerned about—climate change is the basic scientific fact, disputed by no one of any seriousness, that increasing concentrations of atmospheric CO2 (also not doubted by anyone) conduce to increasing global temperatures, which in turn have a significant impact on the environment. Forecasting is less a test of that than a vital tool to help us understand the consequences of this fact, and to gauge the efficacy (including costs and benefits) of potential responses.

Seems right to me. Indeed, seems wise.

* * * *

Okay, here’s something else that I feel I ought to say.

One reason I was actually pretty excited to get to the climate forecasting chapter was to verify an extremely critical review of the book (issued well before its release date) by Michael Mann, climate scientist of “hockey stick” fame.

Frankly, I find the gap between Mann’s depiction and the reality of what Silver said disturbing. You’d get the impression from reading Mann’s review that Silver is a “Chicago School” “free market fundamentalist” who dogmatically attacks the assumptions and methods of climate forecasters.

Just not so. I mean really, really, really untrue.

Mann figures very briefly at the end of the chapter, where Silver reports Mann’s reaction to what is in fact the chapter's central theme—that climate forecasting is exposed to political perils precisely because those engaged in it are taking an uncompromisingly scientific approach.

Mann is obviously—understandably and justifiably!—frustrated and filled with anger.

He describes climate scientists themselves as being involved in a “street fight with these people”—i.e., the professional “skeptics” who hector and harass, distort and mislead (p. 409).

Of course, that’s a response that sees fighting as something climate scientists ought to be doing.

“It would be irresponsible for us as a community to not be speaking out,” Mann explains.

“Where you have to draw the line is to be very clear about where the uncertainties are,” he allows, but it would be a mistake to “have our statements so laden in uncertainty that no one even listens to what we’re saying.”

Silver doesn’t say this—indeed, had no reason to at the time he wrote the book—but I have to wonder whether Mann’s savage reaction to Silver is part of Mann’s “street fighting” posture, which apparently includes attacking even intellectually and emotionally sympathetic commentators whose excessive reflection on climate forecasting “uncertainty”  threatens to prevent the public from even “listen[ing] to what we’re saying.”

Mann is a great climate scientist. He is not a scientist of science communication.

For those who do study and reflect on science communication, whether simplifying things or dispensing with qualifications (not to mention outright effacement of complexity) will promote open-minded public engagement with climate science is a matter characterized by uncertainties analogous to the ones that climate change forecasters deal with.

But I think one thing that admits of no uncertainty is that neither climate scientists nor scientists of science communication nor any other scientifically minded person should resort to simplification, effacement of complexity, or disregard for intellectual subtlety in describing the thoughtful reflections of a scholarly minded person who is trying to engage openly and candidly with complicated issues for the benefit of curious people.

That’s a moral issue, not an empirical one, and it goes to the nature of what the enterprise of scholarly discussion is all about.