
Tuesday
Nov 20, 2012

The Liberal Republic of Science, part 3: Popper's Revenge....

The politics of risk regulation is marked by a disorienting paradox. 

At no time in history has a society possessed so much knowledge relevant to securing the health, safety, and prosperity of its members. Yet never has the content of what is collectively known--from the reality of climate change to the efficacy of the HPV vaccine, from the impact of gun control on crime to the impact of tax cuts on public revenue--been a source of more intense and persistent political conflict.

We live in a liberal democratic society. We are thus free of the violent sectarian struggles that have decimated human societies from the beginning of history, and that continue to rage in those regions still untamed by the pacifying influence of doux commerce.

Yet we remain spectacularly factionalized—not over whose conception of virtue will be forcibly imposed on us by the state, but over whose view of the facts will be credited in the democratic processes we use to promulgate policies aimed at securing the wellbeing of all.

This is Popper’s Revenge—a tension inherent in, and potentially destructive of, the constitution of the Liberal Republic of Science.

In the first of this series of posts on the Liberal Republic of Science, I identified what sort of thing the Liberal Republic of Science is: a political regime, or collective way of life animated by a foundational set of commitments that shape not only its institutions of government but also its citizens’ habits of mind and norms of engagement across all domains of social and private life.

In the second, I described the Liberal Republic of Science’s animating idea: the mutually supportive relationship of political liberalism and scientific inquiry.  In The Logic of Scientific Discovery, Popper identifies science’s signature way of knowing with the amenability of any claim to permanent empirical challenge.  The vitality of this distinctive mode of inquiry, in turn, presupposes  Popper’s Open Society: only in a state that disclaims the authority to orchestrate collective life in pursuit of rationally ascertained, immutable truths will individuals develop the disputatious and inquisitive habits of mind, and society the competitive norms of intellectual exchange, that fuel the scientific engine of conjecture and refutation.

The cultural polarization we today observe over risks and how to abate them, I now want to argue, is in fact a byproduct of the very same characteristics that make a liberal society conducive to the acquisition of scientific knowledge.

Obviously, the collective knowledge ascertained by science will far exceed what any individual (layperson or scientist) can hope to understand much less verify for him- or herself. As a result, there must be reliable social mechanisms for certifying and transmitting what’s known to science--that is, for certifying and transmitting what’s known to us collectively through science’s signature mode of inquiry.

Popper himself recognized this.  He mocked (gently; he was not ungrateful to the nation that saved him from National Socialism) English sensory empiricism, which asserts that first-hand observation is the only valid foundation for knowledge. What enables the members of a liberal democratic society to participate in the superior knowledge that science conveys is not their “refusal to take anyone’s word for it” (nullius in verba, the motto of the Royal Society) but rather their reliance on the words of those who will reliably certify as “true” only those claims originating in the use of science’s distinctive mode of knowing.

In a liberal society, however, there will always be a plurality of such truth certifiers.  People naturally acquire their personal knowledge of what’s collectively known within a cultural community, whose members trust and understand each other. The citizens of the Liberal Republic of Science are culturally diverse—historically so.  As the number of facts known to science multiplies, the prospect of disagreement among these plural systems of certification becomes a statistical certainty.

Such conflicts, moreover, feed on themselves. The conspicuous association between opposing positions and opposing groups transforms factual beliefs into emblems of identity.  Policy determinations become referenda—not over the weight of the evidence in support of competing empirical claims but over the honesty and competence of competing cultural constituencies.  Otherwise nonpartisan citizens are impelled to pick sides in what they are now constrained to experience as a “struggle for the soul” of their society.

As deliberations over risk transmute into polarizing forms of expressive status conflict, the citizens of the Liberal Republic of Science are denied the two principal goods distinctive of their political regime: policies reliably informed by the immense collective knowledge at their society’s disposal; and state neutrality toward the choices they make, exercising their autonomous reason in common with others, about what counts as a worthy and virtuous way of life.

As I explained in my last post, the nourishment that liberal political culture furnishes scientific inquiry is one half of the Liberal Republic of Science’s animating idea. The other is the reciprocal nourishment that science furnishes the culture of liberal democracy, whose citizens it thrills and inspires and teaches to think.

I acknowledged, too, at the end of the post, that many of you might question my suggestion that the U.S. is a Liberal Republic of Science, precisely because you might doubt my suggestion that the citizens of the U.S. are one in the view that science’s way of knowing is the best one.  I surmised that you might perceive instead that the U.S. is in fact a “house divided” between those who want to perfect the Liberal Republic of Science and those who want to destroy it.

My claim now is that this very perception itself is part and parcel of Popper’s Revenge.

The conflict over climate change is not one between those who accept science’s way of knowing and those who don’t.

The conflict over nuclear power is not one between those who accept science’s way of knowing and those who don’t.

The conflict over the HPV vaccine, over guns, over GM foods—none of these is between those who accept science’s way of knowing and those who don’t.

Those on both sides of all these issues think that it is, and they think so only because of the dynamics I have been discussing.  Making this mistake, they predictably form the further mistaken perception that those who disagree with them on these issues are anti-science.

But this last mistake is arguably the one that harms them the most. For it is the barrier that Popper’s Revenge puts in the way of their seeing that they are all citizens of the Liberal Republic of Science that obscures their apprehension of the interest they share in using the science of science communication to perfect this very defect in their political regime.

That will be the topic of my final post in this series.

References

Kahan, D.M. (2012). Cognitive Bias and the Constitution of the Liberal Republic of Science, CCP working paper.

Monday
Nov 19, 2012

The Liberal Republic of Science, part 2: What is it?!

This is the second in what will be four posts (I think; post-number forecasting is not yet as reliable a science as sabermetrics or meteorology)  on the Liberal Republic of Science.

The first one set the groundwork by discussing the concept of a political regime, which in classical philosophy refers to a type of government characterized by an animating principle that not only determines the structure of its sovereign authority but also pervasively orients the attitudes and interactions of its citizens throughout all domains of social and private life.

The Liberal Republic of Science is a political regime. Its animating principle is the mutually supportive relationship—indeed, the passionately reciprocal envelopment—of political liberalism and scientific inquiry.  That’s the point I now want to develop.

The essential place to start, of course, is with Popper.  It is a testament not to the range of his intellectual interests but rather to the obsessive singularity of them that Popper wrote both The Logic of Scientific Discovery and The Open Society and Its Enemies.

Logic, the greatest contribution ever to the philosophy of science, famously identifies a state of competitive provisionality as integral to science’s signature mode of knowing.  For science, no one has the authority to say definitively what is known; and what is known is never known with finality.  The basic epistemological claim science makes is that our only basis for confidence in a claim about how the world works is its ongoing success in repelling any attempts to empirically refute it.  We must understand “truth” to be nothing more than the currently best-supported hypothesis.

Open Society—a paean to liberal philosophy and liberal institutions—identifies liberal democracy as the only form of political life conducive to this way of knowing.  Systems governed by managerial programs calibrated to one or another rationalist vision invariably erect barriers of interest and error in the path of scientific inquiry. But even more fundamentally, because they authoritatively certify truth, and thereafter bureaucratically mould social life to it, such systems stifle formation of the individual dispositions and social norms that fuel the engine of scientific discovery.

The nourishing environment that liberal democratic culture supplies for science is thus one part of the idea of the Liberal Republic of Science. The reciprocal nourishment that science furnishes the culture of liberal democracy is the other.

The citizens of the Liberal Republic of Science remark their dedication to science’s distinctive way of knowing throughout all spheres of life, sometimes in overt and openly celebratory ways but even more often and more significantly in wholly unnoticed ways, through ingrained patterns of behavior and unconscious habits of mind.

They naturally—more or less unquestioningly, as if it hadn’t even occurred to them that there was any alternative—seek guidance from those whose expertise reflects science’s signature mode of knowing when they are making personal decisions (about their health, e.g.).

They accept—consciously; if you suggested they shouldn’t do this, they’d think you were mad—that public policy relating to their common welfare (e.g., laws aimed at discouraging criminality—or at assuring efficiently operating capital markets) should be informed by the best available scientific evidence.

They seek as best they can to think for themselves in a manner that credits science’s distinctive way of knowing. That is, they believe that the best way to answer a personal question—Which automobile should I buy? Which candidate should I vote for for President? Who should I marry?—is to gather up and weigh relevant pieces of evidence. The notion that this just is the right way for an individual to use his or her mind is also very distinctive historically, and still far from universal across societies today.

And finally, the citizens of the Liberal Republic of Science intrinsically value science’s way of knowing. 

They admire those who are excellent at it.

They are thrilled and awed by what this way of knowing reveals to them about the way the world works.

They expend significant collective resources to promote it, not just because they see doing so as a prudent investment that will make their lives go better (although they are stunningly confident that this is so), but because it seems right to them to enable the form of human excellence that it displays, and to create the sort of remarkable insight that it generates….

Do we, in the U.S., live in the Liberal Republic of Science?

It is in the nature of political regimes to be imperfectly realized.  Or, to put it differently, it is in the nature of being a political regime of a particular sort for its members to recognize the ways in which their society’s institutions and norms do not perfectly reflect that regime’s animating idea, and to feel urgently impelled to remedy such imperfections.  I mentioned in the last post, e.g., Lincoln’s understanding of the imperfection of the American political regime as one animated by the idea of equality, and what this meant for him in confronting political compromises to avert the Civil War.

So while I am troubled by the many ways in which the U.S. only imperfectly embodies the idea of the Liberal Republic of Science, the imperfections do not trouble me in classifying the U.S. as a regime of this sort. (Certainly it is not the only one, either!)

I do anticipate, though, that some of the readers of this post might disagree—not because they are uncommitted to the idea of the Liberal Republic of Science but because they are unconvinced that their fellow citizens actually are.  In fact, they perceive that the U.S. is bitterly divided between a constituency that supports the Liberal Republic of Science and another that is implacably hostile to it--that a civil war of sorts might even be looming over the role of science in American democracy.

This is a misperception I need to take up. And I will in the next post, in which I will address “Popper’s Revenge,” a paradox inherent in, and potentially destructive of, the constitution of the Liberal Republic of Science.

References

Popper, K. R. (1959). The logic of scientific discovery. New York: Basic Books.

Popper, K. R. (1945). The open society and its enemies. London: G. Routledge & Sons.

Nos. One, Three & Four in this series.

Sunday
Nov 18, 2012

The Liberal Republic of Science, part 1: the concept of “political regime”  

I sometimes refer to the Liberal Republic of Science, and a thoughtful person has asked me to explain just what it is I’m talking about.  So I will.

But I want to preface my account—which actually will unfold over the course of several posts—with a brief discussion of the sort of explanation I will give.

One of the useful analytical devices one can find in classical political philosophy is the concept of “political regimes.” “Political regimes” as used there doesn't refer to identifiable ruling groups within particular nations (“the Ceausescu regime,” etc.)—the contemporary connotation of this phrase—but rather to distinctive forms of government.

Moreover, unlike classification schemes used in contemporary political science, the classical notion of  “political regimes” doesn’t simply map such forms of government onto signature institutions (“democracy = majority rule”; “communism = state ownership of property,” etc.). Instead, it explicates such forms with respect to foundational ideas and commitments, which are understood to animate social and political life—determining, certainly, how sovereign power is allocated across institutions, but also deeply pervading all manner of political and even social and private life.

If one uses this classification strategy, then, one doesn’t try to define forms of government with reference to some set of necessary and sufficient characteristics. Rather one interprets them by elaborating how their most conspicuous features manifest their animating principle, and also how their animating principle makes sense of seemingly peripheral and disparate, or maybe in fact very salient and connected but otherwise puzzling, elements of them.

In addition, while one can classify political regimes in seemingly general, ahistorical terms—as, say, Aristotle did in discussing the moderate vs. the immoderate species of “democracy,” “aristocracy” vs. “oligarchy,” and “monarchy” vs. “tyranny”—the concept can be used too to explicate the way of political life distinctive of a particular historical or contemporary society. Tocqueville, I’d say, furnished these sorts of accounts of the American political regime in Democracy in America and the French one prior to the French Revolution in L’ancien Régime, although he admittedly saw both as instances of general types (“democracy,” in the former case, “aristocracy” in the latter).

For another, complementary account of the “American political regime,” I’d highly recommend Harry Jaffa’s Crisis of the House Divided: An Interpretation of the Lincoln-Douglas Debates (1959). Jaffa was joining issue with other historians, who at the time were converging on a view of Lincoln as a zealot for opposing the pragmatic Stephen Douglas, who these historians believed could have steered the U.S. clear of the Civil War.  Jaffa depicts Lincoln as motivated to preserve the Union as a political regime defined by an imperfectly realized principle of equality. Because Lincoln saw any extension of slavery into the Northwest Territories as incompatible with the American political regime's animating principle, he viewed Douglas’s compromise of  “popular sovereignty” as itself destructive of the Union.

So what is the Liberal Republic of Science?  It’s a political regime, the animating principle of which is the mutually supportive relationship of  political liberalism and scientific inquiry, or of the Open Society and the Logic of Scientific Discovery.

Elaboration of that idea will be the focus of part 2 of this series.

The distinctive challenge that the Liberal Republic of Science faces—one that stems from a paradox intrinsic to its animating principle—will be the subject of part 3.

And the necessary role that the science of science communication plays in negotiating that challenge will be the theme of part 4.

So long!

References

Aristotle (1958). The politics of Aristotle (E. Barker, Trans.). New York: Oxford University Press.

Jaffa, H. V. (1959). Crisis of the house divided; an interpretation of the issues in the Lincoln-Douglas debates (1st ed.). Garden City, N.Y.: Doubleday.

Tocqueville, A. de (1969). Democracy in America (G. Lawrence, Trans.; J.P. Mayer, ed.). Garden City, N.Y.: Doubleday.

Tocqueville, A. de (2011). Tocqueville: The Ancien Régime and the French Revolution (J. Elster & A. Goldhammer, Trans.). New York, NY: Cambridge University Press.

Nos. Two, Three & Four in this series.

Friday
Nov 16, 2012

Science communication & judicial-neutrality communication look the same to me

Gave a talk at a cool conference on the Supreme Court and the Public at Chicago-Kent Law School. Co-panelists included Dan Simon & "evil Dr. Nick" Scurich, my colleague Tom Tyler, and Carolyn Shapiro, all of whom gave great presentations. This is a set of notes I prepared the morning of the talk; I spoke extemporaneously, but made essentially these points. Slides here.

What is the relationship between the public communication of science and the public communication of judicial neutrality? When I look at them, I see the same thing--& so should you.

 1. Pattern recognition is an unconscious (or preconscious) process in which phenomena are matched with previously acquired stores of mental prototypes in a way that enables a person reliably to perform one or another sort of mental or physical operation. The classic example is chick sexing: day-old chicks, whose fuzzy genitalia admit of no visual differences that the untrained eye can detect, are unerringly segregated by gender by trained professionals who have learned to see the difference between males & females but who can't actually say how.

In fact, though, pattern recognition is not all that exotic & is super ubiquitous: it's the form of cognition ordinary people use to discern others' emotions, chess grand masters to identify good moves, intelligence analysts to interpret aerial photos, forensic auditors to detect fraud, etc.

I'm going to be asserting that pattern recognition is part of both expert scientific judgment and expert legal judgment, & that it is the gap between expert and public prototypes that generates conflict about both.

2. Margolis's masterpiece set (Patterns, Thinking & Cognition and Dealing with Risk) links divergence between public and expert risk assessments to breakdowns in the translation of insights gleaned by use of the experts’ pattern-recognition faculties into information the public can understand using theirs.

a. For Margolis, all cognition is a form of pattern recognition. Expert judgment consists in the acquisition and reliable accessing of distinctive inventories of patterns—or prototypes—that are suited to the experts’ domain. Necessarily, members of the public lack those prototypes, and if unaided by experts use alternative, lay ones to make sense of phenomena from that domain.

b. The point of science communication is to make it possible for members of the public to be guided by the experts. It does that not by making it possible for members of the public to know what scientists know; that’s not possible, because members of the public lack the prototypes that would enable them to see what the scientists see. Instead, the transmission of expert knowledge to nonexperts is mediated by another, distinct set of pattern-recognition-enabling prototypes that members of the public use to figure out who knows what about what. This mediating system of prototypes is usually very reliable – people are, in effect, experts at figuring out who the experts are and what they are trying to say.

c. Nevertheless, there are some sorts of identifiable, recurring confounds that block or distort the transmission of scientific knowledge to the public.  The problem isn’t that the public can’t “understand” what the experts know – i.e., see what the experts see – because that’s always the case, even when the public converges on the positions supported by expert judgment. Rather, the difficulty is that the mediating prototypes are not up to the task of enabling the public to see “who knows what about what.” The result is a state of discord between the judgments experts make when they are guided by their specially calibrated pattern-recognition faculties and the ones laypeople are constrained to form on the basis of their lay prototypes relating to the matters in question.

d. Cultural cognition fits this basic account. People gain access to what’s known to science through affinity networks that certify “who knows what about what.” Those networks are plural; but they usually converge in their certifications (ones that persistently misled their members on who knows what about what would not last long).  Sometimes, however, facts that admit of scientific investigation—like whether the earth is heating up, or whether the HPV vaccine will cause girls to engage in promiscuous unprotected sex—get invested with contentious social meanings that pit the certifying groups against one another. In that case, diverse people will be in a state of persistent disagreement about those facts—not because they lack scientific knowledge; they don’t have that on the myriad other facts on which there is no such disagreement, either—but because the faculties they use (reliably, most of the time) to identify who knows what about what are generating conflicting answers across diverse groups.

3. Law is parallel in all respects.

a.  Legal reasoning consists in an expert system of pattern recognition.  This is what Llewellyn had in mind when he described “situation sense.” Llewellyn, it’s true, famously discounted the power of analytical or deductive reasoning to generate legal results. But for him the interesting question was how it was that there was such a high degree of predictability in the law, such a high degree of consensus among lawyers and judges, nonetheless. “Situation sense,” a perceptive faculty that is calibrated by education and professionalization and that reliably enables lawyers and judges to conform fact patterns to a common set of “situation types” (i.e., prototypes), was Llewellyn’s answer.

b.  Members of the public lack lawyers’ situation sense. They do not “understand legal reasoning” not because they are deficient in some analytical faculty but because they lack the specialized inventory of professional prototypes that lawyers enjoy, and thus do not see what lawyers see. If they are to converge on what lawyers know, then, they must do so through the use of some valid set of mediating prototypes that enable their pattern-recognition faculty reliably to apprehend “who knows what about what” in law.

c. Just as there are instances in which antagonistic cultural resonances block effective use of the mediating prototypes that laypeople use to discern expert scientific judgment, so there are ones in which antagonistic cultural resonances block effective use of mediating prototypes that laypeople must necessarily use to discern expert legal judgment. When that happens, there will be persistent conflict among diverse groups of people on whether legal controversies are being correctly or neutrally resolved.  See “They Saw a Protest.”

4. The law’s neutrality communication problem admits of the same solution as science’s expertise communication problem.

a. Public controversies over science are not intractable. They do not reflect inherent defects or flaws in science; nor do they reflect the (admitted) limits on the capacity of the public to comprehend what scientists know. Rather, they are a reflection of gaps or breakdowns in the mediating prototypes that members of the public normally make reliable use of to discern who knows what about what.  The science of science communication involves identifying those gaps and fixing them.

b. To the extent that the neutrality communication problem involves the same sort of difficulty as the expertise communication problem, it’s reasonable to surmise that the neutrality communication problem is tractable. The idea that public conflict over the validity of law is an inescapable consequence of the indeterminacy of law and the resulting “ideological” nature of decisionmaking is as extravagant as saying that disagreements over science are based on the inherent “ideological bias” or indeterminacy of scientific methods. Members of the public necessarily apprehend the validity of law through mediating prototypes. Through scientific study, it should be possible to identify what those mediating prototypes are, where the holes or gaps are in those prototypes, and how to remedy those gaps.

c. The advent of the science of science communication began with the recognition that it was wrong to think there was no need for one. Doing valid science and communicating science to the public are different things. Doing valid science actually does involve communication, of course, of the sort that scientists engage in to share knowledge with each other. But that communication works by engaging the stock of prototypes to which the scientists’ faculty of expert pattern recognition is specifically calibrated. Supplying that information to the public doesn’t help them to know what scientists know—or see what scientists see—because they lack the scientists’ inventory of prototypes.  Effective public science communication, then, consists in supplying information that engages the mediating prototypes that enable nonexperts to reliably figure out who knows what about what. Like any other form of expert judgment, moreover, expert science communication involves the adroit use of pattern recognition faculties calibrated to prototypes that suit the task at hand.

d. The first step in the development of a science of legal validity communication must likewise be the recognition that there is a need for it. Legal professionals are in much broader agreement about what constitutes neutral or valid determination of cases than are ordinary members of the public. But just as the validity of science from the (pattern-recognition-informed) point of view of the scientist does not communicate the validity of science to the public, so the neutrality of law from the pattern-recognition-informed point of view of lawyers does not communicate the neutrality of law to laypeople. Judges communicate the bases of their decisions, of course. But the sort of communication that judges use to communicate the validity of their decisions is aimed at demonstrating the validity of their decisions to legal professionals; it does that by successfully engaging the prototypes that inform legal situation sense. That sort of communication won’t reliably enable members of the public to perceive the validity of the law, because the public lacks situation sense and thus cannot see what lawyers see.  Like the existence of public conflict over science, the existence of public conflict over law is a product of the breakdown of the mediating prototypes that members of the public must rely on to know who knows what about what. Dispelling the latter conflict, too, involves acquiring scientific knowledge about how to construct and repair mediating prototypes. And as with the communication of science validity, the communication of law validity will require the development of expert judgment guided by the adroit use of pattern recognition faculties calibrated specifically to that task.

Thursday
Nov 15, 2012

Is cultural cognition the same thing as (or even a form of) confirmation bias? Not really; & here’s why, and why it matters  

Often people say, “oh, you're talking about confirmation bias!” when they hear about one of our cultural cognition studies.  That’s wrong, actually.

Do I care? Not that much & not that often. But because the conflating of these two dynamics can actually interfere with insight, I'll spell out the difference.

Start with a Bayesian model of information processing—not because it is how people do or (necessarily, always) should think but because it supplies concepts, and describes a set of mental operations, with reference to which we can readily identify and compare the distinctive features of cognitive dynamics of one sort or another.

Bayes’s Theorem supplies a logical algorithm for aggregating new information or evidence with one’s existing assessment of the probability of some proposition. It says, in effect, that one should update or revise one’s existing belief in proportion to how much more consistent the new evidence is with the proposition (or hypothesis) in question than it is with some alternative proposition (hypothesis).

Under one formalization, this procedure involves multiplying one’s “prior” estimate, expressed in odds that the proposition is true, by the likelihood ratio associated with the new information to form one’s revised estimate, expressed in odds, that the proposition is true.  The “likelihood ratio”—how many times more consistent the new information is with the proposition in question—represents the weight to be assigned to the new evidence. 
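
To make the arithmetic concrete, here is a minimal sketch in Python (my own illustration, not anything drawn from a particular study or paper) of the odds-form update just described:

```python
# Minimal sketch of the odds-form Bayesian update described above:
# posterior odds = prior odds x likelihood ratio.

def update_odds(prior_odds, likelihood_ratio):
    """Return posterior odds after weighing evidence with the given likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds in favor of a proposition into a probability."""
    return odds / (1.0 + odds)

# Example: a prior belief of 0.50 (odds 1:1) and new evidence that is twice as
# consistent with the proposition as with its alternative (likelihood ratio = 2).
prior_odds = 0.5 / (1 - 0.5)             # 1.0
posterior_odds = update_odds(prior_odds, 2.0)
print(odds_to_prob(posterior_odds))      # ~0.67: belief revised upward
```

The interesting question, for both confirmation bias and cultural cognition, is where that likelihood ratio comes from.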

An individual displays confirmation bias when she selectively credits or discredits evidence based on its consistency with what she already believes. In relation to the Bayesian model, then, the distinctive feature of confirmation bias consists in an entanglement between a person’s prior estimate of a proposition and the likelihood ratio she assigns to new evidence: rather than updating her existing estimate based on the new evidence, she determines the weight of the new evidence based on her prior estimate.  Depending on how strong the degree of this entanglement is, she’ll either never change her mind or won’t change it as quickly as she would have if she had been determining the weight of the evidence on some basis independent of her “priors.”

Cultural cognition posits that people with one or another set of values have predispositions to find particular propositions relating to various risks (or related facts) more congenial than other propositions. They thus selectively credit or discredit evidence in patterns congenial to those predispositions. Or in Bayesian terms, their cultural predispositions determine the likelihood ratio assigned to the new evidence.  People not only will be resistant to changing their minds under these circumstances; they will also be prone to polarization—even when they evaluate the same evidence—because people’s cultural predispositions are heterogeneous.

See how that’s different from confirmation bias? Both involve conforming the weight or likelihood ratio of the evidence to something collateral to the probative force that that evidence actually has in relation to the proposition in question. But that collateral thing is different for the two dynamics: for confirmation bias, it’s what someone already believes; for cultural cognition, it’s his or her cultural predispositions.

But likely you can also now see why the two will indeed often look the “same.” If as a result of cultural cognition, someone has previously fit all of his assessments of evidence to his cultural predispositions, that person will have “priors” supporting the proposition he is predisposed to believe. Accordingly, when such a person encounters new information, that person will predictably assign the evidence a likelihood ratio that is consistent with his priors. 

However, if cultural cognition is at work, the source of the entanglement between the individuals’ priors and the likelihood ratio that this person is assigning the evidence is not that his priors are influencing the weight (likelihood ratio) he assigns to the evidence. Rather it is that the same thing that caused that individual’s priors—his cultural predisposition—is what is causing that person’s biased determination of the weight the evidence is due. So we might want to call this "spurious confirmation bias."
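
A toy simulation can make the distinction vivid. This is purely my own illustration, not a model from any CCP study: in the "confirmation bias" agent the likelihood ratio is a function of the agent's current belief, while in the "cultural cognition" agents it is a function of a fixed predisposition.

```python
# Toy contrast between the two dynamics. In both, the likelihood ratio (LR) an
# agent assigns to each new piece of evidence is entangled with something
# collateral to the evidence's actual probative force; what differs is what
# that collateral thing is.

def run(agent_lr, n_steps=20, prior=0.5):
    """Update a belief n_steps times, letting agent_lr choose the LR each step."""
    odds = prior / (1 - prior)
    for _ in range(n_steps):
        odds *= agent_lr(odds)          # Bayesian update with a biased LR
    return odds / (1 + odds)            # back to a probability

# Confirmation bias: the weight given to evidence tracks what the agent already
# believes. With a perfectly neutral prior there is nothing to confirm.
confirm = lambda odds: 1.5 if odds > 1 else (1 / 1.5 if odds < 1 else 1.0)

# Cultural cognition: the weight tracks a fixed predisposition, whatever the
# agent currently believes. Two agents with identical neutral priors diverge.
pro_lr  = lambda odds: 1.5       # predisposed to credit the claim
anti_lr = lambda odds: 1 / 1.5   # predisposed to dismiss it

print(run(confirm))              # ~0.50: neutral prior, no movement
print(run(confirm, prior=0.6))   # ~1.00: an initial lean gets entrenched
print(run(pro_lr))               # ~1.00: polarizes upward from a neutral start
print(run(anti_lr))              # ~0.00: polarizes downward from the same start
```

The last two lines are also why cultural cognition, unlike confirmation bias, predicts polarization over a novel risk (point 2 below): agents who start out uncommitted still end up far apart when they process the same evidence through opposing predispositions.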

Does this matter?  Like I said, not that much, not that often.

But here are three things you’ll miss if you ignore everything I just said.

1. If you just go around attributing everything that is a consequence of cultural cognition to confirmation bias, you will not actually know—or at least not be conveying any information about—who sees what and why. A curious person observes a persistent conflict over some risk—like, say, climate change; she asks you to explain why one group sees things one way and another sees them differently. If you say, “because they disagree, and as a result construe the evidence in a way that supports what they already believe,” she is obviously going to be unsatisfied: all you’ve done is redescribe the phenomenon she just asked you to explain.  If you can identify the source of the bias in a person’s cultural predisposition, you’ll be able to give this curious questioner an account of why the groups found their preferred beliefs congenial to begin with—and also who the different people in these groups are independently of what they already believe about the risk in question.

2. If you reduce cultural cognition to confirmation bias, you won’t have a basis for predicting or explaining polarization in response to a novel risk.  Before people have encountered and thought about a new technology, they are unlikely to have views about it one way or the other, and any beliefs they do have are likely to be noisy—that is, uncorrelated with anything in particular. If, however, people have cultural predispositions on risks of a certain type, then we can predict such people will, when they encounter new information about this technology, assign opposing likelihood ratios to it and end up polarized!

CCP did exactly that in a study of nanotechnology. In it, we divided subjects who were largely unfamiliar with nanotechnology into two groups, one of which was supplied no information other than a very spare definition and the other of which was supplied balanced information on nanotechnology risks and benefits. Hierarchical individualists and egalitarian communitarians in the “no information” group had essentially identical views of the risks and benefits of nanotechnology. But those who were supplied with balanced information polarized along lines consistent with their predispositions toward environmental and technological risks generally.

“Confirmation bias” wouldn’t have predicted that; it wouldn’t have predicted anything at all.

3. Finally and likely most important, if you stop understanding what the causal mechanisms are at the point at which cultural cognition looks like confirmation bias, you won’t be able to formulate any hypotheses about remedies.

Again, confirmation bias describes what’s happening—people are fitting their assessment of evidence to what they already believe. From that, nothing in particular follows about what to do if one wants to promote open-minded engagement with information that challenges peoples’ existing perceptions of risk. 

Cultural cognition, in contrast, explains why what’s happening is happening: people are motivated to fit assessments of evidence to their predispositions.  Based on that explanation, it is possible to specify what’s needed to counteract the bias: ways of presenting information or otherwise creating conditions that erase the antagonism between individuals’ cultural predispositions and their open-minded evaluation of information at odds with their priors.

CCP has done experimental studies showing how to do that.  One of these involved the use of culturally identifiable experts, whose credibility with lay people who shared their values furnished a cue that promoted open-minded engagement with information, and hence a revision of beliefs about, the risks of the HPV vaccine.

In another, we looked at how to overcome bias on climate change evidence.  We surmised that individuals culturally predisposed to dismiss evidence of climate change would engage that information more open-mindedly when they learned that geoengineering, and not just carbon-emission limits, was among the potential remedies. The cultural resonances of geoengineering as a form of technological innovation might help to offset in hierarchical individualists (the people who really like nanotechnology when they learn about it) the identity-threatening resonances associated with climate change evidence, the acceptance of which is ordinarily understood to require limiting technology, markets and industry. Our finding corroborated that surmise: individuals who learned about geoengineering responded more open-mindedly to evidence on the risks of climate change than those who first learned only about the value of carbon-emission limits.

Nothing in the concept of “confirmation bias” predicts effects like these, either, and that means it’s less helpful than an explanation like cultural cognition if we are trying to figure out what to do to solve the science communication problem.

Does this mean that I or you or anyone else should get agitated when people conflate cultural cognition and confirmation bias? 

Nope. It means only that if there’s reason to think that the conflation will prevent the person who makes it from learning something that we think he or she would value understanding, then we should help that individual to see the difference with an explanation akin to the one I have just offered.

Some references

Rabin, M. & Schrag, J.L. First Impressions Matter: A Model of Confirmatory Bias. The Quarterly Journal of Economics 114, 37-82 (1999).

Kahan, D.M. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Sunday
Nov 11, 2012

NARP: National Adaptation and Resiliency Plan -- it both pays for & "frames" itself

Imagine what NYC & NJ might look like today if we had had a "National Adaptation and Resiliency Plan" as part of the stimulus measures passed by Congress in 2008 & 2009....

Or if that's too hard to do, here's something to help you imagine what things will look like -- over & over again, for cities spanning the gulf coast & stretching up the northeast corridor -- if we don't do it now:

A national program to fund the building of sea walls, installation of storm surge gates, "hardening" of our utility & transportation infrastructure & the like makes real economic sense.

Not only would such a program undeniably generate a huge number of jobs. It would actually reduce the deficit!

The reason is that it costs less to adopt in advance the measures that it will take to protect communities from extreme-weather harm than it will cost in govt aid to help unprotected ones recover after the fact.  Measures that likely could have contained most of the damage from Sandy inflicted on NYC & NJ, e.g., could in fact have been adopted at a fraction of what must now be spent to clean up and repair the damage.

Here's another thing: People of all political & cultural outlooks are already engaged with the policy-relevant science on adaptation and are already politically committed to acting on it.

There's been a lot of discussion recently about how to "frame" Sandy to promote engagement with climate science.

Well, there's no need to resort to "framing" if one focuses on adaptation. How to deal with the extremes of nature is something people in every vulnerable community are already very used to talking about and take seriously. From Florida to Virginia to Colorado to Arizona to California to New York--they were already talking about adaptation before Sandy for exactly that reason. 

Nor does one have to make any particular effort to recruit or create "credible" messengers to get people to pay attention to the science relating to adaptation. They are already listening to their neighbors, their municipal  officials, and even their utility companies, all of whom are telling them that there's a need to do something, and to do it now.

During the campaign (thank goodness it's over!), we kept hearing debate about who "built that."
But everyone knows that it's society, through collective action, that builds the sort of public goods needed to protect homes, schools, hospitals, and business from foreseeable natural threats like floods and wildfires.

Everyone knows, too, that it's society, through collective action, that rebuilds communities that get wiped out by these sorts of disasters.

The question is not who, but when -- a question the answer to which determines "how much."
Let's NARP it in the bud!

 

Sunday
Nov 11, 2012

New paper: Cognitive Bias & the Constitution of the Liberal Republic of Science

So here's a working paper that knits together themes that span CCP investigations of risk perception, on the one hand, & of legal decisionmaking, on the other, & bangs the table in frustration on what I see as the "big" normative question: what sort of posture should courts, lawmakers & citizens generally adopt toward the danger that cultural cognition poses to liberal principles of self-government? I don't really know, you see; but I pretend to, in the hope that the deficiencies in my answers combined with my self-confidence in advancing them will provoke smart political philosophers to try to do a better job.

Abstract: 
This essay uses insights from the study of risk perception to remedy a deficit in liberal constitutional theory—and vice versa. The deficit common to both is inattention to cognitive illiberalism—the threat that unconscious biases pose to enforcement of basic principles of liberal neutrality. Liberal constitutional theory can learn to anticipate and control cognitive illiberalism from the study of biases such as the cultural cognition of risk. In exchange, the study of risk perception can learn from constitutional theory that the detrimental impact of such biases is not limited to distorted weighing of costs and benefits; by infusing such determinations with contentious social meanings, cultural cognition forces citizens of diverse outlooks to experience all manner of risk regulation as struggles to impose a sectarian orthodoxy. Cognitive illiberalism is a foreseeable if paradoxical consequence of the same social conditions that make a liberal society conducive to the growth of scientific knowledge on risk mitigation. The use of scientific knowledge to mitigate the threat that cognitive illiberalism poses to those very conditions is integral to securing the constitution of the Liberal Republic of Science. 

Wednesday
Nov 7, 2012

Hey, the problem *isn't* that people are irrational, proof #6276: Prop 37 fails in Calif

From Andy Revkin on dotearth:

California’s Proposition 37, or #Prop37 as it was known on Twitter, failed last night by a substantial margin — 53 percent to 47 percent. The ballot initiative would have required labeling for some genetically engineered foods. (Click here for an illuminating interactive county-by-county map of the vote. Upscale urban and coastal regions wanted it; inland areas mostly rejected it.)

As I said on Tumblr this morning, I’m glad that the sloppy, unscientific and protectionist initiative failed, but glad an important discussion of transparency in food sourcing has begun....

There’s more on Dot Earth on relevant issues....

Tuesday
Nov 6, 2012

It's journal club time (episode 391)! Lewandowsky et al. on scientific consensus

Thanks to the many friends who sent me emails, made late night phone calls, or showed up at my front door (during the time when the storm had knocked out internet & phone service) to make sure I saw Lewandowsky, Gignac, & Vaughan's The pivotal role of perceived scientific consensus in acceptance of science in Nature Climate Change. It's a really cool paper!

LGV present observational and experimental evidence relating to public perceptions of scientific consensus on climate change and other issues. CCP did a study on scientific consensus a couple yrs ago -- Kahan, D.M., Jenkins-Smith, H. & Braman, D., Cultural Cognition of Scientific Consensus, J. Risk Res. 14, 147-174 (2011) -- which is one of the reasons my friends wanted to be sure I saw this one.

The paper presents two basic findings. I'll say something about each one.

Finding 1: Perceptions of scientific consensus determine public beliefs about climate change--and in essentially the same way that they determine it on other risk issues.

In the observational study, the respondents (200 individuals who were solicited to participate in person in downtown Perth, Australia) indicated their beliefs about (a) the link between human CO2 emissions and climate change (anthropogenic global warming or "AGW"), (b) the link between the HIV virus and AIDS, and (c) the link between smoking and lung cancer.  The respondents also estimated the degree to which scientists believed in such links. LGV then fit a structural equation model to the data and found that a single "latent" factor -- perception of scientific consensus with respect to the link in question -- explained the respondents' beliefs, and "fit" the data better than models that posited independent relationships between respondents' beliefs and their perceptions of scientific consensus on these matters. So basically, people believe what they think experts believe about all these risks.

Surprised? "Of course not. That's obvious!"

Shame on you, if that is how you reacted. It would have been just as "obvious!" I think, if they had found that perceptions of scientific consensus didn't explain variance in beliefs in AGW, or that such perceptions bear a relationship to AGW distinct from the ones on other risks. That's because lots of people believe that skepticism about climate change is associated with unwillingness to trust or believe scientists. If that were true, then the difference between skeptics and believers wouldn't be explained by what they think scientific consensus is; it would be explained by their willingness to defer to that consensus.

Most social science consists in deciding between competing plausible conjectures. In the case of climate change conflict, two plausible conjectures are (1) that people are divided on the authority of science and (2) that people agree on the authority of science but disagree about what science is saying on climate change. LGV furnish evidence more supportive of (2) than (1). (BTW, if you are curious about how divided Australians are on climate change, check this out.)

[Figure: from Kahan, Jenkins-Smith & Braman (2011)]

In that regard, moreover, their finding is exactly in line with the CCP one. Using a large (N = 1500) nationally representative sample of US adults, we measured perceptions of scientific consensus on climate change, nuclear power risks, and gun control.  These are highly contentious issues, on which American citizens are culturally divided. Nevertheless, we found that no cultural group perceives that the view that is predominant among its own members is contrary to scientific consensus. (We also found that all the groups were as likely to be mistaken as correct about scientific consensus across the run of issues, at least if we treated the "expert national consensus reports" of the National Academy of Sciences as the authority on what that consensus is.)

So next time you hear someone saying "climate skeptics are anti-science," "the climate change controversy reflects the diminishing authority of/trust in scientists" etc., say "oh, really? What's your evidence for that? And how does it relate to the LGV and CCP studies?"

Finding no. 2: When advised that there is overwhelming scientific consensus in favor of AGW, people are more likely to believe in AGW -- and this goes for "individualists," just like everyone else.

The experiment subjects (100 individuals also solicited to participate in person in Perth, Australia) indicated their AGW beliefs after being randomly assigned to one of two conditions: a "consensus information" group, which was advised by the experimenters that there is overwhelming scientific consensus (97%) on AGW; and a "no information" group, which was not supplied any information on the state of scientific opinion.  

LGV found, first, that subjects in the consensus-information group were more likely to express belief in AGW. This result adds even more weight to the surmise that popular division over climate change rests not on a division over the authority or credibility of scientists but on a division over perceptions of scientific consensus.

[Figure: from Lewandowsky, Gignac & Vaughan (2012)]

Second, LGV found that consensus-information exposure had a stronger effect on subjects as their scores on a "free-market individualism" worldview measure increased. In other words, relative to their counterparts in the no-information condition, subjects who scored high in "individualism" were particularly likely to form a stronger belief in AGW when exposed to scientific-consensus information.

Although also perfectly plausible, this finding should definitely raise informed eyebrows.

Public opinion on climate change in  Australia, as in the US, is culturally divided.  Consistent with other studies, LGV found that individualism generally predicted skepticism about AGW.

We know (in the sense of "believe provisionally, based on the best available evidence and subject to any valid contrary evidence that might in the future be adduced"; that's all one can ever mean by "know" if one actually gets the logic of scientific discovery) that individualist skepticism toward AGW is not based on skepticism toward the authority of science. Both the observational component of the LGV study and the earlier CCP study support the view that individualists are skeptical because they aren't convinced that there is a scientific consensus on AGW.

Well, why? What explains cultural division over perceptions of scientific consensus?

One conjecture -- let's call it "cultural information skew" or the CIS -- would be that individualists and communitarians (i.e., non-individualists) are exposed to different sources of information, and the information the former receive represents scientific consensus as lower than does the information the latter receive.

But another conjecture -- call it "culturally biased assimilation" or CBA -- would be that individualists and communitarians are culturally predisposed to credit evidence of scientific consensus selectively in patterns that fit their predisposition to form and maintain beliefs consistent with the ones that prevail within their cultural groups. CBA doesn't imply that individualists and communitarians are necessarily getting the same information. But it would predict disagreement on what consensus is even when people with those predispositions are supplied with the same evidence.

CBA is one of the mechanisms comprised by cultural cognition.

[Figure: from Kahan, Jenkins-Smith & Braman (2011)]

The same CCP study on scientific consensus furnished experimental evidence supportive of CBA. When subjects were asked to assess whether a scientist (one with elite credentials) was an "expert" -- one whose views should be afforded weight -- subjects tended to say "yes" or "no" depending on whether the featured scientist was depicted as espousing the position consistent with or opposed to the one that predominated among people who shared the subjects' values.

In other words, subjects recognized the positions of elite scientists as evidence of what "experts" believe selectively, in patterns that fit their cultural predispositions on the risk issues (climate change, nuclear power, and gun control) in question. If this is how people outside the lab treat evidence of what "expert consensus" is, they can be expected to end up culturally divided even when they are exposed to the very same evidence.

At least one more research team has made a comparable finding. Adam Corner, Lorraine Whitmarsh, &  Dimitrios Xenias published an excellent paper a few months ago in Climatic Change that showed that subjects displayed biased assimilation with respect to claims made in newspaper editorials, crediting or discrediting them depending on whether the claims they made were consistent with what the subjects already believed about AGW. That's not culturally biased assimilation necessarily but the upshot is the same:  one can't expect to generate public consensus simply by bombarding people with "more information" on scientific consensus.

The LGV finding, though, appears inconsistent with biased assimilation, cultural or otherwise. The subjects in the consensus-information group were being supplied with evidence -- in the form of information provided by experimenters -- that suggested scientific consensus on AGW is very high (higher, apparently, than even subjects who believe in AGW tend to think).

The CBA prediction would be that more individualistic subjects would simply dismiss such evidence as non-credible -- in the same way that subjects in the CCP study rejected the credibility of scientists who furnished them with information contrary to their cultural predispositions. Having been given no credible information in support of revising their assessment of scientific consensus, LGV's individualist subjects would not (under the CBA view) be expected to revise their assessments of AGW.

But apparently they did! That's a result more in keeping with the "information skew" (CIS) account of why individualists disagree with communitarians.  So it turns out after all that all we need to do is un-skew things. As LGV put it, their study "underscores the vital role of highlighting a scientific consensus when communicating scientific facts," particularly when the underlying issues are "difficult to grasp or are hotly debated or challenge people’s world views."

So do I "accept" LGV as evidence against CBA, and as evidence for being less skeptical about a communication strategy that focuses on simply "highlighting scientific consensus"?  For sure!

But I don't see the evidence as super strong -- and certainly not strong enough to change my mind on these matters given the sum total of the evidence, including but not limited to the previous CCP & Corner et al. studies. In Bayesian terms, I give LGV a likelihood ratio of 0.77 in favor of CBA (or 1.3 in favor of the alternative, CIS hypothesis).
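
For readers who want the arithmetic spelled out, here is what a likelihood ratio like that does to a prior (the 0.80 starting credence below is a number I am making up purely for illustration):

```python
# Effect of assigning the LGV result a likelihood ratio of 0.77 with respect to
# CBA. The starting credence of 0.80 in CBA (over CIS) is an illustrative assumption.
prior_odds = 0.80 / 0.20                  # 4.0
posterior_odds = prior_odds * 0.77        # ~3.08
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))           # ~0.75 -- a modest shift, not a reversal
```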

The reason I am not inclined to assign more decisive weight to the LGV finding is that I'm not convinced that people in the real world will be nearly so willing to accept real-world information on scientific consensus as the LGV subjects apparently were to accept the LGV experimenters' representations.

If individualists in the real world were that receptive to information "highlighting" scientific consensus, I'm very confident they would have gotten the message by now. You really have to be off the grid -- off the planet, even -- not to have heard over & over & over that there is "overwhelming scientific consensus" on AGW. One either accepts that information when it is presented -- on tv, in newspapers, by people one talks to on the street corner -- or one just doesn't. And obviously a good segment of the population just doesn't.

Basically, I'm taking the fact that "some people credit reports of scientific consensus on AGW yet many don't" as the starting point for investigation, and trying to figure out who sees what & why. Again, the CCP experimental result is evidence, in my view, that people are motivated to selectively credit or dismiss evidence of scientific consensus in ways that fit their cultural predispositions (CBA).

Now in fact, I am surprised that individualistic subjects in the LGV study apparently did put so much confidence in the word of the experimenters. But that they did makes me question whether the situation those subjects were in is really comparable to one of people who are engaging real-world information sources.

I'm inclined to say that in this regard I think the CCP experiment was more realistic. We -- the experimenters -- made no representations to our subjects about the state of scientific consensus. Rather, we showed them some evidence -- a scientist taking a position -- and let them decide for themselves what weight to attach to it. They told us that they viewed what we were showing them as valid evidence of "what experts believe" only when that evidence was consistent with the position that predominated in their group.  

I think that's closer to the situation that we can anticipate people will be in outside the lab when real-world people -- from journalists to advocates to individual scientists to their fellow citizens -- try to "highlight" AGW consensus to them. The expectation that people in that setting will be dismissive toward representations that challenge their predispositions is strongly supported by Corner, Whitmarsh, & Xenias (2012) as well.

Actually, LGV come pretty darn close to saying they agree with this point. They write:

At first glance, our results challenge the results of Kahan and colleagues, that perceived consensus operates like any other fact that is equally subject to dismissal as other evidence surrounding AGW. However, on closer inspection, the study by Kahan did not provide socially-normative information about a consensus (that is, ‘97 out of 100’) but instead presented participants with an informational vignette, attributed to a fictional expert, that either described the risk from climate change or downplayed it. Because this manipulation provided anecdotal rather than social-norming information, it is not surprising that participants rated the source as less trustworthy if the message was worldview dissonant. Normative information, by contrast, is widely assumed to be more resilient to ideologically-motivated dismissal ....

Right: if one provides information that people view as "socially normative" -- i.e., as worthy of being believed -- they'll accept it. But how does one make people view that information as "socially normative" when it is contrary to their cultural predispositions? I just find it implausible to believe that people in the world are as open to real-world evidence (including media accounts & the like) purporting to tell them that they & all their peers are wrong about scientific consensus on AGW as the subjects in the LGV experiment apparently were when the experimenters told them "97 out of 100 climate scientists believe in AGW."

My skepticism, however, is not a reason for anyone, including me, to dismiss the significance of LGV's experimental finding.  Only a person who doesn't really understand how empirical study enlarges knowledge would think that one can find a study compelling, insightful, and challenging only if one is "convinced" by the conclusion.

Indeed, if you get how empirical inquiry works, then you'll know how I or LGV or anyone else should respond to the questions I've raised: not by putting this paper aside, but by getting a firm grip on it & trying to reciprocate its contribution to knowledge by doing additional studies that take aim at exactly what is giving me pause here.

E.g., if one embedded the statement "97 out of 100 scientists accept AGW" in a NY Times newspaper story, would individualists react the same way as the ones in this study did? Would they be just as likely to believe that representation as they would be to accept the representation that "only 3" -- or more plausibly for experimental purposes, "only 43" or "only 47"--"of 100" scientists believe in AGW? Would egalitarian communitarian subjects likewise credit just as readily either representation on the state of consensus on AGW? Same for safety of nuclear power?

Show me that -- a result that essentially replicates LGV in the Corner, Whitmarsh, & Xenias (2012) design -- & I'll definitely be revising my priors on CBA by a humongous amount!

But I won't have to wait for that result (or the opposite of it) to get the benefits of both knowing more and having more to puzzle over as a result of this paper.

I think it's cool! Read it & tell me what you think!

References

Corner, A., Whitmarsh, L. & Xenias, D. Uncertainty, scepticism and attitudes towards climate change: biased assimilation and attitude polarisation. Climatic Change (2012), on-line advance publication at http://dx.doi.org/10.1007/s10584-012-0424-6

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011)

Lewandowsky, S., Gignac, G.E. & Vaughan, S. The pivotal role of perceived scientific consensus in acceptance of science. Nature Climate Change (2012).

Monday
Nov052012

2 tropes in Proposition 37 debate

As election day approaches, citizens around the Nation are excitedly debating the most consequential of the issues to be resolved on Tuesday: whether Californians should vote "yes" or "no" on Proposition 37, which would require that GM foodstuffs bear a label that states "Genetically Engineered" or "Produced with Genetic Engineering" in "clear and conspicuous words ... on the front of the package."

I've already explained, in 3 previous posts (1, 2, 3) the general reasons why I don't like Proposition 37.

In fact, I'm certainly open to counter-arguments and have been following the back-and-forths to see if I catch sight of anything in the debate that gives me reason to reconsider.

So far, I haven't.

The main thing I've seen from the proponents are two recurring tropes -- argument bits, essentially. They seem to be effective debating points -- or at least they draw many appreciative nods and cheers from those who already accept the "yes" position -- but they aren't helping me at all to see what I might be missing.  

I'm going to explain why I find the tropes unhelpful, not because I want to change anyone's mind but because I do want those who might want to change mine to see why these points just aren't responsive to my concerns.

Trope 1: "All proposition 37 does is furnish information. What could possibly be wrong with that?"

My reaction:  

a. The communicative impact of the label isn't necessarily confined to the words on it ("GENETICALLY ENGINEERED," on "the front of the package" etc); it includes "there's a reason for you to worry about this -- or else we wouldn't bother to tell you, would we?..."  People process information that way, & it makes perfectly good sense for them to do so. So if in fact there's not something for them to worry about, then labels like this either risk steering them away from things that aren't dangerous or diluting the significance they give to warnings. There's ample literature on both effects--and on how complicated it is to design labels that inform rather than misinform consumers. As a busy person who makes sense of information in the same way as everyone else, I prefer a more considered & systematic approach to how my "warning environment" is populated. 

b. Even more important, the labeling referendum is a communicative focal point for messages that are radiating with cultural "us vs. them/whose side are you on" meanings of the sort that make people see (literally, see; cultural cognition works that way) risks in a way that divides them into warring tribes. The proponents of Proposition 37 are already making unfounded claims about the science on GM food risks, and I worry that passage of this provision will be used strategically and rhetorically as part of a continuing campaign to create a fog (smog, even) of motivated reasoning that interferes with the ability of diverse groups to recognize and converge on the best available evidence as it accumulates.

Stigmatizing a technology can degrade the quality of the science-communication environment, making it harder for people to communicate constructively w/ one another & figure out what to do. That's happened in the US with various technologies, including nuclear power. It has happened specifically w/ GM foods in Europe. I'm worried about that here.

c. I accept, too, that some of my fellow citizens are simply interested in knowing whether GMOs are in food, either because they are worried about as-yet undetected health risks or because philosophically/morally they think there is something untoward about genetic engineering.  But they can get that information without making me and others bear the empty-alarm clutter of a state-mandated advisory label, since non-GMO producers are free to put a "GMO-free label" on their products.

Trope 2: "The issue is democracy & scientifically informed decisionmaking!"

My reaction:

Sigh...

Like you, I'm for democracy. Like you, I'm for informed decisionmaking, individual & collective.

That's why I'm worried about Proposition 37.  

The capacity of a democracy to make enlightened decisions turns on the quality of its science communication environment. The issue here is whether Proposition 37 is a kind of pollution of that environment. I think it is.

The scientific jury is always out on risk -- which is to say, we must always and forever continue to collect evidence and be open to the best scientifically available information on the hazards we face and how to abate them, both with respect to new technologies and existing ones.

But as we can see from the contentious, unconstructive, unenlightened and unenlightening disputes today over climate change, it is a huge huge mistake to take for granted the conditions that assure we'll be able to recognize what the scientific jury is saying as it makes its reports. My concern is that Proposition 37 is part and parcel of a style of political advocacy that destroys those very conditions.

Again, I might be wrong here, and I'd be interested in figuring out why plenty of reflective people disagree with me.  

But when those who support Prop 37 intone "democracy ... freedom ... right to know ... information!" -- not to mention "profit mongers vs. the people!" etc. (the campaign for Proposition 37 is funded by industries interested in making profits, too; that's just the way it goes) -- the only thing I learn is that they either aren't getting or don't care what worries reflective people on the other side.

Thursday
Nov012012

A "teachable moment" for science communication: Mayor Bloomberg shows how it's done

Our climate is changing. And while the increase in extreme weather we have experienced in New York City and around the world may or may not be the result of it, the risk that it may be — given the devastation it is wreaking — should be enough to compel all elected leaders to take immediate action.

Reported in the latest dotearth post on the foreseeably polarizing "Sandy-causation-teaching-moment" meme.

That said, I do wish Bloomberg would stop trying to make people drink small sodas & breast feed their infants!

BTW, by calling Bloomberg's statement a "teachable moment" for science communication, I recognize that I risk insulting the many many many people who have been urging that Sandy be seized as a "teachable moment" for those communicating climate science to the public. The problem with this phrase is that it conveys a certain attitude; it comes off sounding as if one views those who need to be "taught" something as dimwitted school children. I'd recommend a different "strategy" -- like, say, treating (even truly regarding) the people to whom one is purporting to communicate science as thinking citizens who are entitled to get information in a form and under conditions that enable them to use their reason.

I promise not to use this obnoxious idiom anymore if you do. Deal?

Wednesday
Oct312012

"Climate change caused ...": linguistics, empirics, & reasoned discourse

Here are some reflections occasioned by (1) Andy Revkin's excellent dotearth blog post on the relationship between climate change and Sandy; (2) the anger that Revkin's post aroused among at least some climate-change-policy advocates, who proposed the laughable but still disturbing idea that Revkin be publicly censured in some way (others, it should be noted, responded in a critical but reasoned way to issues about advocacy and science information that are admittedly complex); and (3) a column in the Huffington Post by George Lakoff, who has figured out that the problem here is confusion over language, which if used properly resolves important practical, empirical issues without the need to consult evidence (including evidence of how the public engages with climate science)...

There are 3 issues here: (1) one relating to whether "climate change caused x" in general; (2) another to whether it caused Sandy in particular; and (3) a final one to the polemics of "caused by climate change..."

1. General.  Hansen et al. are much more helpful than Lakoff on this issue. When we ask the question, "did climate change cause x," the issue is not "semantic" but practical & empirical: we want to understand what the physical relationship is between climate change & particular events & what effect to assign to particular events in trying to assess the impact of climate change.  The issue isn't what word or phrase to use; it's what to do. Hansen et al. tell us *exactly* what we need to know: climate change shifts the normal distribution of weather events, making certain outcomes more likely & hence more frequent than they otherwise would have been. Accordingly, we can say that climate change made a particular event "y times more likely" than it would have been, if that's useful (Hansen et al. identify events for which y is very very very high!); but more importantly we can speak instructively to the question of what the impact of climate change is in practical terms-- "more events like this per yr/decade, which will cost $z billion, kill q x 10^3 people etc." (A toy version of that arithmetic appears in the sketch below.)

It's ironic that Lakoff refers to black-lung & cigarettes. The law got over being confused on causation for mass torts when it stopped trying to wring practical guidance from vague, impressionistic, & ultimately question-begging/-obscuring concepts like "direct vs. indirect," & w/ the help of toxicologists, epidemiologists, economists & others started to think in practical, empirical terms akin to those being proposed by Hansen et al. At that point, the law adopted doctrines that made companies that manufacture products that increase the incidence of some harm pay the price of that extra harm w/o getting tied in conceptual knots about whether this particular actor "caused" that particular injury.
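To make the "y times more likely" framing concrete, here is a toy calculation in the Hansen et al. spirit: shift a normal distribution of weather outcomes and compare tail probabilities. The shift, threshold, and one-draw-per-summer assumption are mine, chosen only to show the arithmetic; they are not Hansen et al.'s estimates.

# Toy illustration of the shifted-distribution point: warming moves the
# distribution of outcomes, so formerly rare extremes become more frequent.
# All numbers here are hypothetical, chosen only to show the arithmetic.
from scipy.stats import norm

baseline = norm(loc=0.0, scale=1.0)    # pre-warming summer anomalies (in SD units)
shifted = norm(loc=0.6, scale=1.0)     # hypothetical warmed climate: mean shifted by 0.6 SD
threshold = 3.0                        # a "3-sigma" heat event under the old climate

p_old = baseline.sf(threshold)         # tail probability before the shift
p_new = shifted.sf(threshold)          # tail probability after

print(f"P(extreme) before: {p_old:.5f}")                    # ~0.00135
print(f"P(extreme) after:  {p_new:.5f}")                    # ~0.00820
print(f"the event is now {p_new / p_old:.1f}x more likely") # ~6x
# Expected count per decade, assuming (hypothetically) one draw per summer:
print(f"expected events per decade: {10 * p_old:.3f} -> {10 * p_new:.3f}")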

 2. Sandy. I myself am still not sure whether climate change increased the likelihood of a weather event like Sandy. I would have thought the answer was clearly yes. But Revkin's excellent dotearth post showed me that there is at least a division of opinion on this among experts; whether Sandy belongs to the class of events the likelihood of which was increased to a very high degree by climate change is not as cut & dried, apparently, as whether climate change increased the likelihood of, say, persistence of summer heat waves, or wild fires in western US.

But I might be misunderstanding & I'm sure there is more to say. I'd like to hear it. But I won't if people talk the way Lakoff does: whatever one thinks of his "systemic causation" linguistically, the way he uses it ("Global warming systemically caused the huge and ferocious Hurricane Sandy...") begs the practical/empirical question.

3. Polemics of "causation." The furor over Revkin's column & over the use of the word "cause" in general is bound up with strategic political & moral issues. Many of those who want to create public engagement with climate change believe that it is essential to be able to say "climate change caused x" extreme event. I can see why; "causation" implies responsibility, and we are motivated (very appropriately) to regulate and otherwise hold accountable those responsible for harm. But is it really clear that one can't get the responsibility/accountability point across by saying "climate change makes x event y times more likely"-- particularly where y is astronomically high? Or by saying (if we have evidence for saying it) that "climate change means we can expect to see an x event -- one that kills q x 10^3 people & costs $z billion -- every 2/5/10 yrs" etc? I'm not sure; it would be interesting to try to test these things. Lakoff criticizes Hansen's communication skills, but doesn't himself present any evidence to support his assertion that his own proposed way of using terms would promote public comprehension or engagement. I, at least, can think of some pretty plausible counter-hypotheses.

One thing I am sure about, though: those who are insisting that science journalists or others use the term "causation" in a way that avoids even asking the practical, empirical question & getting an answer to it (not to mention those advocates who have proposed attacking Revkin as a way to create a "teaching moment" for journalists & other reflective participants in public discussion) believe in a form of democratic deliberation that involves a smaller role for appeals to citizens' reason than I am comfortable with. 


 
Monday
Oct292012

The science communication problem: one good explanation, four not so good ones, and a fitting solution

I was on a panel Saturday on “public policy and science” at the CSICon conference in Nashville. My friend Chris Mooney was on it, too. I didn’t speak from a text, but this is pretty close to what I remember saying; slides here.

I’m going to discuss the “science communication problem” – the failure of sound, widely disseminated science to settle public controversies over risks and other policy-relevant facts that admit of scientific investigation.

What makes this problem perplexing isn’t that we have no sensible explanation for it. Rather it’s that we have too many.

There are always more plausible accounts of social phenomena than are actually true.  Empirical observation and measurement are necessary--not just to enlarge collective knowledge but also to steer people away from dead ends as they search for effective solutions to society’s problems.

In this evidence-based spirit, I’ll identify what I regard as one good explanation for the science communication problem and four plausible but not so good ones. Then I’ll identify a “fitting solution”—that is, a solution that fits the evidence that makes the good explanation better than the others.

One good explanation: identity-protective cognition

Identity-protective cognition (a species of motivated reasoning) reflects the tendency of individuals to form perceptions of fact that promote their connection to, and standing in, important groups.

There are lots of instances of this. Consider sports fans who genuinely see contentious officiating calls as correct or incorrect depending on whether those calls go for or against their favorite team.

The cultural cognition thesis posits that many contested issues of risk—from climate change to nuclear power, from gun control to the HPV vaccine—involve this same dynamic. The “teams,” in this setting, are the groups that subscribe to one or another of the cultural worldviews associated with “hierarchy-egalitarianism” and “individualism-communitarianism.”

CCP has performed many studies to test this hypothesis. In one, we examined perceptions of scientific consensus. Like fans who see the disputed calls of a referee as correct depending on whether they favor their team or its opponent, the subjects in our study perceived scientists as credible experts depending on whether the scientists’ conclusions supported the position favored by members of the subjects’ cultural group or the one favored by the members of a rival one on climate change, nuclear power, and gun control.

Not very good explanation # 1: Science denialism

“Science denialism” posits that we see disputes over risks in the US because there is a significant portion of the population that doesn’t accept the authority of science as a guide for policymaking.

The same study of the cultural cognition of scientific consensus suggests that this isn’t so. No cultural group favors policies that diverge from scientific consensus on climate change, nuclear power, or gun control. But as a result of identity-protective cognition, they are culturally polarized over what the scientific consensus is on those issues.

Moreover, no group is any better at discerning what scientific consensus is than any other. Ones that seem to have it right, e.g., on climate change are the most likely to get it wrong on deep geologic isolation of nuclear wastes, and vice versa.

Not very good explanation #2: Misinformation

I certainly don’t dispute that there’s a lot of misinformation out there. But I do question whether it’s causing public controversy over policy-relevant science. Indeed, causation likely runs the other way.

Again, consider our scientific consensus study. If the sort of “biased sampling” we observed in our subjects is typical of the way people outside the lab assess evidence on culturally contested issues, there won’t be any need to mislead them: they’ll systematically misinform themselves on the state of scientific opinion.

Still, we can be sure they’ll very much appreciate the efforts of anyone who is willing to help them out. Thus, their motivation to find evidence supportive of erroneous but culturally congenial beliefs will spawn a cadre of misinformers, who will garner esteem and profit rather than ridicule for misrepresenting what’s known to science.

The “misinformation thesis” has got things upside down.

Not very good explanation #3: “Bounded rationality”

Some people blame controversy over policy-relevant science on deficits in the public’s reasoning capacities.  Ordinary members of the public, on this view, know too little science and can’t understand it anyway because they use error-prone, heuristic strategies for interpreting risk information.

Plausible, sure. But wrong, it turns out, as an explanation for the science communication problem: higher levels of science literacy and quantitative reasoning ability, a CCP study found, don’t quiet cultural polarization on issues like climate change and nuclear power; they magnify it.

Makes sense given identity-protective cognition. People who are motivated to form perceptions that fit their cultural identities can be expected to use their greater knowledge and technical reasoning facility to help accomplish that—even if it generates erroneous beliefs about societal risks.

Not very good explanation #4: Authoritarian personality

The original authoritarian-personality research of Adorno and his colleagues is often dismissed as an exercise in polemics disguised as social science.

But in recent years, a serious body of scholarship has emerged on correlations between dogmatism, closed-mindedness, and like personality traits, on the one hand, and conservative ideology, on the other. This work is insightfully synthesized in Mooney’s The Republican Brain.

Does this revitalized “authoritarian personality” position explain public controversy over policy-relevant science?

It’s odd to think it does, given the role that identity-protective cognition plays in such controversies. Identity-protective cognition affects all types of perception (not just evaluations of evidence but brute sense impressions) relating to all manner of group affinities (not just politics but college sports-team allegiances). So why would the impact of identity-protective cognition be linked to a personality trait found in political conservatives?

But the point is, we should just test things – with valid study designs. Is the score on an “open-mindedness” test a valid predictor of the sort of identity-protective reasoning that generates disputes over climate change, the HPV vaccine, nuclear power, guns?

I did a study recently designed to answer this question. I examined whether liberal Democrats and conservative Republicans would display identity-protective cognition in assessing evidence of the validity of the Cognitive Reflection Test (CRT)—which is in fact a valid measure of reflective, open-minded engagement with information.

They both did, and to the same degree.  When told that climate-skeptics got a higher CRT score (and hence were presumably more open-minded), liberal Democrats were much less likely to view the test as valid than when they were told that climate-believers got a higher score (indicating they were more open-minded). The mirror-image pattern emerged for conservative Republicans.

What’s more, this effect was magnified by the disposition measured by CRT. That is, the subjects most inclined to employ conscious, reflective reasoning were the most prone to identity-protective cognition—a result consistent with our findings in the Nature Climate Change study.

The  new “authoritarian personality” work might be identifying real differences between liberals and conservatives. But there’s little reason to think that what it’s telling us about them has any connection to identity-protective cognition—the dynamic that has been shown with direct evidence to play a significant role in the science communication problem.

A fitting solution: The separation of meaning and fact

Identity-protective cognition is the problem. It affects liberals and conservatives, interferes with the judgment of even the most scientifically literate and reflective citizens, and feeds off even sound information as it creates an appetite for bad.

We need a solution, then, fitted to counteracting it. The one I propose is the formation of a "science communication environment" protection capacity in our society.

Policy-consequential facts don’t inevitably become the source of cultural conflict. Indeed, they do only in the rare cases where they become suffused with highly charged and antagonistic cultural meanings.

These meanings are a kind of pollution in the science communication environment, one that interferes with the usually reliable faculty ordinary people employ to figure out who knows what about what.

The sources of such pollution are myriad. Strategic behavior is one. But simple miscalculation and misadventure also play a huge role.

The well-being of a democratic society requires protecting the science communication environment from toxic meanings. We thus need to use our knowledge to understand how such meanings are formed. And we need to devote our political resolve to developing procedures and norms that counteract the forms of behavior—intentional and inadvertent—that generate this form of pollution.

A wall of separation between cultural meaning and scientific fact is integral to the constitution of the Liberal Republic of Science.

Tuesday
Oct232012

WSMD? JA! Episode 2: cultural polarization on death penalty & climate change, 2006 vs. 2012

This is the second episode of CCP's already insanely popular new feature, "Wanna see more data? Just ask!" -- or "WSMD? JA!" (For contest rules and conditions, see here.)

In this episode, we answer the question, posed by students in Jeff Fagan's Capital Punishment seminar at Columbia Law School, "have you applied the cultural cognition scales to capital punishment?"

The answer is ... "why, yes -- in a survey just last month! And in another back in December 2006."

It's pretty interesting to compare the two sets of survey results, both on the issue of capital punishment and on the issue of global warming.

The items were the same in both the 2006 & 2012 studies:

How strongly do you oppose or support ... [1] stricter carbon emission standards to reduce global warming ... [2] the death penalty for murder?

The items both used six-point Likert measures: (1) strongly oppose; (2) modestly oppose; (3) slightly oppose; (4) slightly support; (5) modestly support; (6) strongly support.

Both of these issues -- the death penalty and climate change -- are ones on which hierarchical individualists and egalitarian communitarians are most intensely divided. I used ordered logit regression models to simulate how likely a "typical" hierarchical individualist (one whose scores on the "Hierarchy" and "Individualism" scales are both set at +1 standard deviation) and a "typical" egalitarian communitarian (-1 SD on each scale) were to "support" the indicated policy at some level (either slightly, moderately, or strongly) in the two studies.
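For the curious, here is a bare-bones sketch of how a simulation like that can be run. The data are fake and the variable names are mine, not CCP's; the only point is the mechanics of fitting an ordered logit and generating predicted probabilities for the +1 SD and -1 SD profiles.

# Minimal sketch of an ordered-logit "typical member" simulation on fake data.
# Variable names and data are hypothetical; only the +/-1 SD prediction step
# mirrors what's described above.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "hierarchy": rng.normal(size=n),       # standardized worldview scores (mean 0, SD 1)
    "individualism": rng.normal(size=n),
})
# Fake 6-point support item: higher hierarchy/individualism -> less support.
latent = -0.8 * df["hierarchy"] - 0.8 * df["individualism"] + rng.logistic(size=n)
df["support"] = pd.cut(latent, bins=6, labels=False)   # codes 0..5 stand in for the Likert points

model = OrderedModel(df["support"], df[["hierarchy", "individualism"]], distr="logit")
res = model.fit(method="bfgs", disp=False)

# Predicted category probabilities for a "typical" hierarchical individualist
# (+1 SD on both scales) and egalitarian communitarian (-1 SD on both).
profiles = pd.DataFrame({"hierarchy": [1.0, -1.0], "individualism": [1.0, -1.0]})
probs = np.asarray(res.predict(profiles))

# Probability of "support" at any level = sum over the top three categories.
p_support = probs[:, 3:].sum(axis=1)
print(p_support)   # row 0: hierarchical individualist, row 1: egalitarian communitarian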

Here's the outcome from the Dec. 2006 study (a 1500-person, nationally representative sample):

Basically, mirror images: the egalitarian communitarian is over 90% likely to support stricter carbon emission limits, and about 60% likely to support the death penalty; the hierarchical individualist is about 90% likely to support the death penalty and about 60% likely to support stricter carbon emission limits.

Now here are the results in September 2012 (from an 800-person, nationally representative sample): 

Wow! An amazing increase in the degree of polarization on carbon-emission limits, with the egalitarian communitarian squeezing up close to 100% and the hierarch individualist dropping down to about 10% likely to support that policy. There's more polarization on the death penalty too: while the hierarch individualist of 2012 is hanging in at about 90% likely to support, the egalitarian communitarian is down from around 60% in 2006 to around 40% today.

I didn't expect to see such stark results -- on either issue, really.

Maybe I should have? On climate change, the common view is that the issue has become increasingly partisan. Remember, too, that 2006 was the year that Al Gore's movie came out, an event that many see as having helped brand the climate change issue ideologically.

I had thought the idea that climate change polarization was more recent was overstated. Well, I was right to remember climate change being highly polarized, culturally speaking, in 2006, but these data support the view that people (ordinary, not particularly partisan ones, remember) are much much more divided now! (Some think that's changing; yet our studies over the last yr haven't shown any abatement in cultural polarization.)

On capital punishment, it's pretty well known that support for the death penalty is generally declining. Consider this trend in Gallup's national polling:

The divide was 65% for, 28% against in Oct. 2006. Five years later, the divide had narrowed to 61% for to 35% against. That's something, and if you go back a bit further, the contemporary trend seems even more noticeable, albeit modest.

From our data, it looks like most of the action is coming from egalitarian communitarians, who moved from being more likely to support to more likely to oppose. Hierarch individualists don't seem to have budged!

I think this is pretty interesting (almost as interesting as learning that today's army has fewer horses and bayonets than it did in 1916, news the President announced in the middle of my writing this post; yet another shock!).

But I have to say, I myself am not so interested in the policy positions favored by people with opposing cultural outlooks.  One can't have a policy position -- on anything -- without making a judgment of value. Not surprisingly, people with different values tend to support different policies (although as I said, in this case the changing strength of the conflict on carbon emission limits did surprise me).

What's more interesting -- to me, at least! -- is the contribution that cultural values make to perceptions of risk and related facts. Cultural cognition is about how people's cultural outlooks shape processing of various types of information--from scientific findings to expert opinions to images captured in a video.

What I wish we had collected data on last month and also back to 2006 was the relationship between our subjects' cultural outlooks and their perceptions of whether the death penalty deters. That's one of the classic examples of an empirical issue that's driven by symbolic or expressive, cultural outlooks. Indeed, every schoolboy & -girl knows  Lord, Ross & Lepper's classic biased assimilation study, which found that people conformed their assessments of studies on the deterrent effect of the death penalty to their pre-existing positions.

But there's really interesting evidence that people in general are becoming less convinced that the death penalty deters without changing their mind on the death penalty. Consider this from Gallup:

If you go back to '85 -- back when the death penalty was a big deal (remember Willie Horton? that was from the 1988 presidential campaign; the issue is dead as ... well, it's no longer relevant at all to national political divisions) & closer to when Lord, Ross & Lepper did their study, 62% believed the death penalty deterred, and only 31% that it didn't. Today the proportions are close to reversed. Yet support for the death penalty has not tailed off nearly so dramatically.

What's going on? ... Maybe because the issue is less salient as a focus for cultural contestation (again, it's been off the national political stage for a quarter century!), people don't feel the same pressure to conform their consequentialist rationales to their cultural evaluations of the death penalty; in other words, motivated consequentialism might be associated most strongly with culturally polarizing issues...

Just a conjecture! I really am perplexed!

And I like feeling that way.  Thanks Fagan Capital Punishment students for a really good question.

That's all for this episode of "WSMD? JA!"  See you next time!

Some references:

Ellsworth, P.C. & Gross, S.R. Hardening of the Attitudes: Americans’ Views on the Death Penalty. J. Soc. Issues 50, 19 (1994).

 

Saturday
Oct202012

Outline of position on (attitude about) how to improve policy-supportive science communication 

Had a conversation w/ a really smart scholarly friend who shares my basic orientation toward science communication & who is doing cool things to advance it. For his benefit, after we were done I reduced my thoughts to a small annotated outline. Figured I might as well put the memo up on the blog. It's the internet equivalent, I suppose, of a guy on a desert island putting a message in a bottle & tossing it into the ocean--the nice thing being that there are *so many* other islands out there on the net that the hope the bottle will end up washing onto the shore of someone who finds its contents useful is not nearly so farfetched or desperate!

0. Polarization does not stem from a deficit in the public's comprehension of science (or the exploitation of any such deficit by self-interested actors)

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Misinformation and climate change conflict

1. On how to make sense of cultural cognition, science comprehension, and cultural polarization:

The problem isn’t the mode of comprehending science; it’s the contamination of the “science communication environment” in which cultural cognition (or like mechanisms) can be expected to & usually do reliably lead diverse, ordinary people to converge on best science. The contamination consists in the attachment of antagonistic cultural meanings to facts that admit of scientific investigation.

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Nullius in verba? Surely you are joking, Mr. Hooke! (or Why cultural cognition is not a bias, part 1) 

The cultural certification of truth in the Liberal Republic of Science (or part 2 of why cultural cognition is not a bias)

2. On what to do                                                                                                        

a. Protect science communication environment: We need to perfect the knowledge we have for forecasting potential contamination—on, say, novel issues like nanotechnology, synbio, or GMOs—and implement procedures (say, govt review of “science communication impact” of govt-funded science research & of regulatory decisionmaking) to use that knowledge to preempt such contamination.

The science of science communication: an environmental protection conception (Lecture at National Academy of Sciences Sackler Colloquium, May 22, 2012)

b.  Decontaminate already polluted environments: Hard to do but not impossible. Involves figuring out how, through conscious reorientation of meaning cues—identity of advocates, narrative frames for conveying info, etc.—toxic associations can be broken down.

Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C. & Braman, D. Geoengineering and the Science Communication Environment: a Cross-cultural Study, CCP Working Paper No. 92 (Jan. 9, 2012).

c.  Select policy/engagement locations in a manner that exploits the relative quality of scicom environments. The cues that determine what issues mean are highly sensitive to context, including what the policy question is, who is involved in the discussion, & where it is occurring. If one context is bad, then see if you can find another.

E.g., climate: The national-level “mitigation” discussion is highly polluted; the local, adaptation focused one is not.

The "local-adaptation science communication environment": the precarious opportunity

Go local, and bring empirical toolkit: Presentation to US Global Change Research Program

3. How to do it: scientifically

We have knowledge on these dynamics.  So just guessing what will work to promote constructive, nonpolarized public engagement with scientific information—without looking at & trying to make informed conjectures based on that knowledge—is a huge mistake (an ironic one, too, since it is an utterly unscientific way to do things).

An even bigger mistake is to do scicom w/o collecting information. Disciplined observation & measurement can be used to calibrate & improve knowledge-informed strategies as a communication effort (say, an attempt to build support for sensible use of climate science in an adaptation setting) unfolds. But just as important, the collection of information generated by these means is critical to extending practical knowledge of how to do effective communication in field settings. What’s learned every time people engage in scientifically informed science communication is more information that can be used to help improve the conduct of such activity in the future.

Thus, people who engage in policy-supportive science communication efforts w/o systematic information collection protocols – including ones that test the effectiveness of their methods in promoting open-minded engagement—are casually dissipating & wasting a knowledge resource of tremendous value. They are in fact unwittingly aiding & abetting entropy--an act of treason in the Liberal Republic of Science!

Wild wild horses couldn't drag me away: four "principles" for science communication and policymaking 

Honest, constructive & ethically approved response template for science communication researchers replying to "what do I do?" inquiries from science communicators

 

Wednesday
Oct172012

Wanna see more data? Just ask! Episode 1: another helping of GM food

Okay, here's a new feature for the blog: "Wanna see more data? Just ask!"  

The way it works is that if anyone sees interesting data in one of my posts, or in any of our studies (assuming it was one I worked on; for others, I'll pass on requests but don't necessarily expect an answer; some of my colleagues have actual lives), and has some interesting question that could be addressed by additional analyses, that person can post a request (in comments section or by email to me) & I'll do the analyses and post the results.

Now notice I said the question has to be "interesting." Whether it meets that standard is something I'll decide, using personal judgement, etc. But here are some general, overlapping, related criteria:

1.  The request has to be motivated by some conjecture or question.  Basically, you have to have some sort of theoretically grounded hypothesis in mind that can be tested by the analysis you'd like to see. The most obvious candidate would be a conjecture/question/hypothesis that's in the spirit of a plausible alternative explanation for whatever conclusion it was that I reached (or the study did) in presenting the data in the first place. But in any case, give some indication (can be brief; should be!) of what the question/hypothesis/conjecture that you are curious about is & why.

2. Tell me how I can do the analysis and why doing it that way can be expected to generate some result that gives us more reason to accept a particular answer to the motivating question, or more reason to accept or reject the motivating hypothesis, than we would have had without the analysis.  The "how to do" part obviously will be constrained by what sorts of variables are in the dataset. Usually we have lots of demographic data as well as our cultural outlook variables and so forth. The "why" question requires specifying the nature of the causal inference that you think can be drawn from the analysis.  It's gotta make sense to be interesting.

3. No friggin' fishin' trips! Don't ask me to correlate global warming with the price of cottage cheese just because you think that would be an amusing thing to do.

4. Don't even think of asking me to plug every conceivable variable into the right-hand side of a regression and see what sort of gibberish pops out. Of course, I'm happy to do multivariate analyses, but each variable has to be justified as part of a model that relates in a specifiable way to the interesting conjecture motivating the request and to the nature of the inference that can be drawn from the analysis. Or to put it another way, the analysis has to reflect a cogent modelling strategy. Overspecified regression analyses are usually a signature of the lack of a hypothesis -- people just see what turns out to be significant (something always will with enough variables) & then construct a post-hoc, just-so story for the result. In addition, the coefficients for overspecified models are often meaningless phantoms-- the impact of influences "holding constant" influences that in the real world are never "constant" in relation to those influences.... I'll write another post on why "over-controlling" is such a pernicious, mindless practice....
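Since the "over-controlling" point comes up constantly, here's a toy simulation (entirely made up, not from any CCP dataset) of one common version of the problem: "controlling for" a variable that sits on the causal path between the predictor of interest and the outcome, which wipes out a perfectly real effect.

# Toy illustration of why mindlessly "controlling for everything" can mislead:
# here m sits on the causal path from x to y, so conditioning on it erases
# most of x's real effect. Data and coefficients are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                    # predictor of interest
m = 0.9 * x + rng.normal(size=n)          # mediator: caused by x
y = 0.9 * m + rng.normal(size=n)          # outcome: affected by x only through m

fit1 = sm.OLS(y, sm.add_constant(x)).fit()                          # theoretically motivated model
fit2 = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()    # "kitchen sink" model

print("effect of x, sensible model:   %.2f" % fit1.params[1])   # ~0.81 (the true total effect)
print("effect of x, over-controlled:  %.2f" % fit2.params[1])   # ~0.00 (the real effect vanishes)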

Okay. This first installment is responsive to questions posed in response to "part 3" of the GM food risk series. Discussants there were curious about whether the "middling" mean score for the GM food risk item was best understood as "not sure; huh?," as I proposed, or as a genuine, mid-level of concern. One suggested seeing some more raw data might help, and on reflection I can think of some ways to look at them that might, at least a bit.

Consider these histograms, which reflect the distribution of responses to the 8-point industrial-strength risk perception item for "Global warming" (left) and "Genetically modified foods" (right):

Here are some things to note. First, GM food distribution is much more "normal" -- bell shaped -- than the global warming distribution. Indeed, if you compare the black line -- the statistical "normal density distribution" given the mean & SD for the global warming data --with the red one -- the kernel density plot, which "fits" a locally weighted regression to the data-- you can see that the distribution for global warming risk perceptions is closer to bimodal, meaning that the subjects are actually pretty divided between those who see "low risk" and those who see "high."  There's not so much division for GM foods.

Second, the GM foods distribution has a kind of a fat mid-point (low kurtosis). That's because a lot of survey respondents picked "3," "4," & "5." Because an excess of "middle choices" is a signature of "umm, not sure" for risk perception measures of this sort, I am now even more persuaded that the 800 members of this nationally representative sample didn't really have strong views about GM foods in relation to the other risks, all of which were ones that displayed substantial cultural polarization.

But my confidence in this conclusion is only modest.  The cases in which a middling mean signifies generalized "don't know" often have much more dramatic concentrations of responses toward the middle of the scale (high kurtosis); indeed, the labels that were assigned to each point on the Likert-item risk-perception measure were designed to mitigate the middle/don't-know effect, which is usually associated with scales that ask respondents to estimate a probability for some contingency (in which case people who don't know mean to convey that with "50%").
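If anyone wants to run this sort of eyeball diagnostic on their own data, a minimal version looks like this. The responses below are simulated stand-ins, not the CCP survey item, and the specific numbers are invented.

# Minimal sketch of the diagnostic described above: histogram of a 0-7 risk item
# with a fitted normal density and a kernel density estimate, plus kurtosis as a
# rough "pile-up in the middle" check. Data are simulated stand-ins.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(2)
responses = np.clip(np.round(rng.normal(4.3, 1.6, size=800)), 0, 7)  # fake 0-7 ratings

grid = np.linspace(0, 7, 200)
normal_fit = stats.norm(responses.mean(), responses.std())   # normal density from mean & SD
kde = stats.gaussian_kde(responses)                          # kernel density estimate

plt.hist(responses, bins=np.arange(-0.5, 8.5, 1), density=True, alpha=0.4)
plt.plot(grid, normal_fit.pdf(grid), "k-", label="normal density (mean, SD)")
plt.plot(grid, kde(grid), "r-", label="kernel density estimate")
plt.legend()
plt.xlabel("risk rating (0-7)")
plt.savefig("risk_item_density.png")

# Excess kurtosis: strongly positive values would suggest the dramatic mid-scale
# pile-up characteristic of "don't know" responding; values near or below zero
# suggest the flatter, fat-middle shape described in the post.
print("excess kurtosis:", stats.kurtosis(responses))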

Now consider these two figures:

These are the kernel density estimates for responses to these two risk-perception items when the sample is split at the mean of the "individualism-communitarianism" scale. Basically, the figures allow us to compare how "individualists" and "communitarians" are divided on global warming (left) and GM foods (right).

Do you see what I do? The individualists and communitarians are starkly divided on climate change: the latter is skewed strongly toward high risk, and the former toward low (although perhaps a bit less so; if I looked at "hierarch individualists," you'd really see skewing). That division (which, again, is compounded when the hierarchical disposition of the subjects is taken into account as well) is the source of the semi-bimodal distribution of responses to the global warming item. 

Now look at individualists & communitarians on GM foods. They see more or less eye-to-eye. This is corroboration of my conclusion in the last post that there isn't, at least not yet, any meaningful cultural division over GM foods. (BTW, the pictures would look the same if I had divided the subjects into "hierarchs" and "egalitarians"; I picked one of the two worldview dimensions for the sake of convenience and clarity).

Whaddya think? Wanna see some more? Just ask!

 Reference

de Bruin, W.B., Fischhoff, B., Millstein, S.G. & Halpern-Felsher, B.L. Verbal and Numerical Expressions of Probability: “It's a Fifty–Fifty Chance”. Organizational Behav. & Human Decision Processes 81, 115-131 (2000).

 

Monday
Oct152012

Timely resistance to pollution of the science communication environment: Genetically modified foods in the US, part 3

Okay: some data already!

As explained in parts one & two of this series, I’ve been interested in the intensifying campaign to raise the public profile of—and raise the state of public concern over—GM foods in the US.

That campaign, in my view, reflects a calculated effort to infuse the issue of GM-food risks with the same types of antagonistic meanings that have generated persistent states of cultural polarization on issues like climate change, nuclear power, the HPV vaccine, and gun control.  To me, that counts as pollution of the science communication environment, because it creates conditions that systematically disable the faculty that culturally diverse citizens use (ordinarily with great reliability) to figure out what is known to science.

But as I commented in the last post, the campaign has provoked articulate and spirited resistance from professional science communicators in the media. I view that as an extremely heartening development, because it furnishes us with what amounts to a model of how professional norms might contribute to protecting the science communication environment from toxic cultural meanings. Democratic societies need both scientific insight into how the science communication environment works and institutional mechanisms for protecting it if they are to make effective use of the immense knowledge at their disposal for advancing their citizens' common welfare.

But where exactly do things stand now in the US?  Historically, at least, the issue of GM-food risks has aroused much less attention, much less concern, than it has in Europe. That could change as a result of culturally partisan communications of the sort we are now observing, but has it changed yet or even started to?

John Timmer, the science editor for Ars Technica, actually posed more or less this question to me in a twitter exchange, asking whether there really is “anything like” the sort of cultural conflict toward GM foods risks that we see toward climate-change risks in this country. Questions like that deserve data-informed answers.

So here’s some data from a recent (end of September) survey. The sample was a nationally representative one of 800 individuals. One part of the survey asked them to rank on a scale of 0-7 “how serious” they viewed a diverse set of risks (I call this the “industrial strength risk perception measure”). 

The question, essentially, is whether GM foods are at risk of acquiring the sorts of cultural meanings that divide "hierarchical individualists" and "egalitarian communitarians" on various issues. Accordingly, I have constructed statistical models that permit us to see not only how GM-food risks rank in relation to others for the American population as a whole but also whether and how strongly GM-food risks divide those two segments of the population.
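Before looking at the results, here is roughly what that comparison involves, sketched on made-up data with hypothetical variable names. The real models are more involved, and the numbers below are not the survey's.

# Minimal sketch (fake data, hypothetical variable names): regress each 0-7 risk
# rating on the two worldview scales, then predict the rating for a +1 SD
# "hierarchical individualist" (HI) and a -1 SD "egalitarian communitarian" (EC).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 800
df = pd.DataFrame({
    "hierarchy": rng.normal(size=n),
    "individualism": rng.normal(size=n),
})
# Two fake items: one culturally contested, one not.
df["global_warming"] = np.clip(3.9 - 1.2*df["hierarchy"] - 1.2*df["individualism"]
                               + rng.normal(0, 1, n), 0, 7)
df["gm_foods"] = np.clip(4.3 + rng.normal(0, 1.5, n), 0, 7)

X = sm.add_constant(df[["hierarchy", "individualism"]])
profiles = pd.DataFrame({"const": 1.0, "hierarchy": [1.0, -1.0], "individualism": [1.0, -1.0]})

for item in ["global_warming", "gm_foods"]:
    fit = sm.OLS(df[item], X).fit()
    hi, ec = fit.predict(profiles)
    print(f"{item}: mean={df[item].mean():.1f}  HI={hi:.1f}  EC={ec:.1f}")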

 

There are a number of things one could say here.

One is—holy smokes, the US public is apparently more worried about GM-food risks than they are about global warming, nuclear power, and guns! The “average American” would assign a ranking of 4.3 to GM foods (just above “moderately risky”) but only 3.9 for global warming (just below), 4.0 (spot on) for nuclear, and 2.9 (between “low” and “moderate”) for guns.

But that wouldn’t be the way I’d read these results. First of all, while it’s true that GM foods are apparently more scary for the “average” American than guns, nuclear power, and climate change, the striking thing is just how unconcerned that “person” is with any of those risks. “High rates of taxation for businesses” are apparently much more worrisome for the "mean" member of the American population than the earth overheating or people being shot. Given how unconcerned this guy/gal is with all these other risks, should we get all that excited that he/she is a bit more concerned about GM foods?

Notice too that the "mean" member of the population isn't as concerned with GM foods as with high business tax rates (4.5)—or with illegal immigration (4.7) or government spending (5.3). What to make of that?...

But second and more important, look at the cultural variance on these risks.  Global warming turns out to be the most serious risk for egalitarian communitarians. Indeed, that group sees nuclear power as much riskier, too, than either business tax rates, illegal immigration, or “government spending,” which are about as scary for that group as gun risks.  Hierarchical individualists have diametrically opposed perceptions of the dangers posed by all of these particular risk sources.

Bear in mind, hierarch individualists and egalitarian communitarians aren’t rare, or unusual people. They are pretty recognizable in lots of respects—including their political affiliations, which amount to “Independent leans Republican” and “Independent leans Democrat,” respectively.

Given this, it’s not clear that it makes much sense to assign meaning to the “average” or “population mean” scores on these risks. Because real people have particular rather than "mean" cultural outlooks, we should ask not how the "average" person perceives culturally contested risks, but how someone like this sees those risks as opposed to someone like that.

Yet note, the risks posed by GM foods are not culturally contested. We are all, in effect, "average" there.  Moreover, for both cultural hierarchical individualists and egalitarian communitarians, GM-food risks are in the “middle” of the range of risk sources they evaluated.

So what I’d say, first, is that there is definitely no cultural conflict for GM foods in the US—at least not of the sort that we see for climate change, nuclear power, guns, etc.

Second, I’d say that I don’t think there’s very much concern about GM foods generally. The “middling” score likely just means that members of the sample didn’t feel nearly as strongly about GM foods as they felt—one way or the other—about the other risks. So they assigned a middling rating.

But third, and most important, I’d say that this is exactly the time to be worried about cultural polarization over GM foods.

As I said at the outset of this series, putative risk sources aren’t born with antagonistic cultural meanings. They acquire them.

But once they have them, they are very very very hard to get rid of. 

In both parts, I likened culturally antagonistic meanings to “pollution” of the “science communication environment.”  Given how hard it is to change cultural meanings, it’s got to be a lot easier and more effective to keep that sort of contamination out—to deflect antagonistic meanings away from novel technologies or ones that otherwise haven’t acquired such resonances—than it is to “clean it up” once an issue has become saturated with such meanings.

Consider the debate over climate change, which is highly resistant to simple “reframing” strategies. Perhaps it would have worked to have put Nancy Pelosi and Newt Gingrich on a couch together before 2006. But today, the simple recommendation “use ideologically diverse messengers!” is not particularly helpful.

So I believe the data-informed answer to John Timmer's question is, no, GM foods don't provoke anything like the sort of antagonistic meanings that climate change expresses.

And for that reason, I'd argue, the efforts of reflective science journalists and others to resist the release of such contaminants into the science communication environment are as timely as they are commendable.

Part one in this series.

Part two.

Sunday
Oct142012

Resisting (watching) pollution of the science communication environment in real time: genetically modified foods in the US, part 2

Just as the health of individual human beings depends on the quality of the natural environment, the well-being of a democratic society depends on the quality of the science communication environment.

The science communication environment is the sum total of cues, influences, and processes that ordinary members of the public rely on to participate in the collective knowledge society enjoys by virtue of science.

No one (not even scientists) can personally comprehend nearly as much of what is known to science as it makes sense for them—as consumers, as health-care recipients, as democratic citizens—to accept as known by science. To participate in that knowledge, then, they must accurately identify who knows what about what.

When the science communication environment is in good working order, even people who have only rudimentary understandings of science will be able to make judgments of that kind with remarkable accuracy. When it is not, even citizens with high levels of scientific knowledge will be disabled from reliably identifying who knows what about what, and will thus form conflicting perceptions of what is known by science—to their individual and collective detriment.

Among the most toxic threats to the quality of a society’s science communication environment are antagonistic cultural meanings: emotional resonances that become attached to risks or other policy-relevant facts and that selectively affirm and denigrate the commitments of opposing cultural groups.

Ordinary individuals are accustomed to exercising the faculties required to determine who knows what about what within such groups, whose members, by virtue of their common outlooks and experiences, interact comfortably with one another and share information without misunderstanding or conflict. Because antagonistic cultural meanings create strong psychic pressures for members of opposing groups to form and persist in conflicting sets of factual beliefs, such resonances enfeeble the reliable functioning of the faculties ordinary people (including highly science literate ones) use to participate in what is known by science.

Antagonistic cultural meanings are thus a form of pollution in the science communication environment. Their propagation accounts for myriad divisive and counterproductive policy conflicts—including ones over climate change, nuclear power, and private gun ownership.

In part one of this series, I described the complex of economic and political forces that have infused the issue of genetically modified (GM) foods with culturally antagonistic meanings in Europe.

I also noted the signs, including the campaign behind the pending GM food-labeling referendum in California, that suggest the potential spread of this contaminant to the US science-communication environment.

What makes the campaign a pollutant in this regard has nothing to do with whether GM foods are in fact a health hazard (there’s a wealth of scientific data on that; readers who are interested in them should check out David Tribe’s blog). Rather, it has to do with the deliberate use of narrative-framing devices—stock characters, dramatic themes, allusions to already familiar conflicts, and the like—calculated to tap into exactly the culturally inflected resonances that pervade climate change, nuclear power, guns, and various other issues that polarize more egalitarian and communitarian citizens, on the one hand, and more hierarchical and individualistic ones, on the other.

But as I adverted to, there is at least one countervailing influence that didn’t exist in Europe before it became a site of political controversy over GM foods but that does exist today in the US: consciousness of the way in which dynamics such as these can distort constructive democratic engagement with valid science, and a strong degree of resolve on the part of many science communicators to counteract them.

Science commentators like Keith Kloor and David Ropeik, e.g., have conspicuously criticized the propagation of what they view as unsubstantiated claims about GM food health risks.

Both of these writers have been outspoken in criticizing ungrounded attacks on the validity of climate science, too. Indeed, Kloor recently blasted GM food opponents as the “climate skeptics of the Left.”

Precisely because they have conspicuously criticized distortions of science aimed at discounting environmental risks in the past, their denunciation of those whom they see as distorting science to exaggerate environmental risks here reduces the likelihood that GM food risks will become culturally branded.

Science journalists, too, have been quick to respond to what they see as the complicity of some of their own in the dissemination of questionable science claims about GM foods.

In one still-simmering controversy, a large number of journalists accepted an offer of advance access to an alarming study on GM-food risks in return for refraining from seeking the opinions of other scientists before publishing their “scoop” stories. Timed for release in conjunction with a popular book and a TV documentary, the study, conducted by a scientist with a high profile as a supporter of GM-food regulation, was in fact thereafter dismissed as non-credible by expert toxicologists—although not before the alarming headlines were seized on by proponents of the California labeling proposition as well as by European regulators.

Writing about the controversy, NY Times writer Carl Zimmer blasted the affair as a “rancid, corrupt way to report about science.” It was clear to the participating reporters, Zimmer observed, that the authors of the study were seeking to “exclude any critical appraisal from the initial burst of attention” in the media, thereby “reinforcing opposition to genetically modified foods.” “We need to live up to our principles, and we need to do a better job of calling out bad behavior…. [Y]ou all agreed to do bad journalism, just to get your hands on a paper. For shame.”

Ars Technica editor John Timmer amplified Zimmer’s response. “Very little of the public gets their information directly from scientists or the publications they write,” Timmer pointed out. “Instead, most of us rely on accounts in the media, which means reporters play a key role in highlighting and filtering science for the public.” In this case, Timmer objected, “the press wasn't reporting about science at all. It was simply being used as a tool for political ends.”

One reason to be impressed by these sorts of reactions to the GM food controversy is that they suggest the possibility of using professional norms as a more general device for protecting the quality of the science communication environment.

As I indicated in my last post, there is nothing inevitable about the process by which a risk issue becomes suffused with antagonistic cultural meanings.  Those kinds of toxic associations are made, not born.

It follows that we should make protection of the science communication environment a matter of self-conscious study and self-conscious action.  The natural environment cannot be expected to protect itself from pollution without scientifically informed action on our part. And the same goes for the quality of the science communication environment.

I’m of the view that the sorts of collective action that protection of the science communication environment requires will have to come from various sources, including government, universities, and NGOs.

But one of those sources will clearly have to be professional science communicators. Timmer is right about the critical role the media play—not just in translating what’s known by science into terms that enable curious people to experience the thrill of sharing in the wondrous insights acquired through our collective intelligence (I myself am so so grateful to them for that!), but in certifying who knows what about what so that, as democratic citizens, people can reliably gain access to the knowledge they need to contribute to intelligent collective decisionmaking.

Animated by diverse motivations—commercial and ideological—actors intent on disabling the faculty culturally diverse citizens use to discern who knows what about what can thus be expected to strategically target the media. Strong professional norms are a kind of warning system that can help science journalists recognize and repel efforts to use them as instruments for polluting the science communication environment.

Unlike centrally administered rules or codes, norms operate as informal, spontaneous guides for collective behavior. They get their force from internalized emotional commitments both to abide by shared standards of conduct and to contribute to their enforcement through censure and condemnation of violators. Norms are propagated as members of a community observe examples of behavior that express those commitments and see others responding with admiration and reciprocation. That all seems to be happening here.

This unusual opportunity to watch an attempt to inject a new toxic meaning into the science communication environment also furnishes a unique opportunity to learn something about who can protect that environment from pollution and how.

Oh!  I said I would share some data on cultural perceptions of GM food risks in the US in this installment of the series. But don’t you agree that I’ve already gone on more than long enough? So I’ll just have to present the data next time—in the third, and I promise final, post in this (now) 3-part (I actually imagined only one when I started) series.  (But here's a sneak preview.)

Part one in this series.

Part three.

Friday
Oct122012

Watching (resisting) pollution of the science communication environment in real time: genetically modified foods in the US, part 1

Putative risk sources are not born with culturally divisive meanings. They acquire them.

Something happens—as a result perhaps of strategic manipulation but also possibly of accident or misadventure—that imbues some technology with resonances that selectively affirm and denigrate the outlooks of opposing groups. At that point, how to regard the risks the technology poses—not only what to do to ameliorate them, but whether even to acknowledge them as real—becomes a predictable occasion for disagreement between the groups’ respective members.

By highlighting the association between competing positions and competing cultural identities, such disagreement itself sharpens the antagonistic meanings that the technology bears—thereby magnifying its potential to generate conflict. And so forth & so on.

But the thing that imbues a technology (or a form of behavior or a type of policy) with culturally antagonistic meanings doesn’t have to happen. There’s nothing necessary or inevitable about this process.

It’s this contingency that explains why one putative risk source (say, the HPV vaccine) can provoke intense cultural conflict while another, seemingly equivalent source (say, the H1N1 vaccine) doesn’t. It explains, too, why one and the same risk (nuclear power, e.g., or “mad cow disease”) can provoke division in one society but indifference in another, seemingly similar society.

Consider genetically modified (GM) foods. Historically, GM foods—e.g., soybeans altered to resist diseases that otherwise would be controlled by chemical pesticides, or potatoes engineered to withstand early frosts—have not provoked nearly as much concern in the US as in Europe.  Such products can be found in upwards of 70% of US manufactured foods.  In Europe, the figure is supposedly closer to 5%, due to enactment of progressively stricter regulations over the last decade and a half.

But it’s certainly possible that something could happen to make US public attitudes and regulations move in the direction of Europe’s. Indeed, it could be happening.

There is now a concerted effort underway to raise the risk profile of GM foods. The most conspicuous manifestation of it is a California ballot proposition to mandate that all foodstuffs containing GM foods bear a label that advises (warns) consumers of this fact.

The proposition is supported by organic food producers and sellers, who funded the effort to get the initiative on the ballot and are now funding the campaign to secure its approval, as well as by certain environmental groups, which are playing a conspicuous public advocacy role.

A label is not a ban. But it can definitely be a precursor to something more restrictive.

Consumers logically infer from “advisory” labels that there is a reason for them to be concerned.

Psychologically they tend to greatly overreact to any information about chemical risks—including information that tries to prevent them from overreacting by characterizing the risks in question as small or uncertain.  It’s thus in the nature of modest, informational regulations to breed concerns capable of supporting stronger, substantive regulation.

Dynamics of cultural cognition, moreover, can fuel this sort of escalation. If the source of initial concern is transmitted in a manner that conveys antagonistic resonances, then the resulting division of opinion among members of different groups can feed on itself.

The movement to promote concern with GM foods seems rife with antagonistic meanings of this sort. The information being disseminated to promote attention to the risks of GM foods in general, and to promote support for the California initiative in particular, is suffused with cues (stock characters, distinctive idioms, links to already familiar sources of conflict such as nuclear power & climate change) that are likely to resonate with those who harbor an egalitarian-communitarian cultural style and antagonize those with a more hierarchical and individualistic one.

This framing of the issue could thus end up pitting members of these two groups—already at odds over climate change, nuclear power, gun control, and various other risks—against one another.

In that case, the US will have arrived at a state of cultural conflict over GM foods along what seems to be the same path that European nations followed. There, small, local farmers took the lead in proclaiming the health risks of GM food products, which were being supplied by their larger, transnational industrial-farm rivals.

Egalitarian environmental activists enthusiastically assimilated this charge into their broader indictment of industry for putting profits ahead of human welfare. Among the ironies here was the impetus such political activity imparted to blocking the production of so-called “golden rice,” a nutritionally enhanced GM grain that public health advocates hailed for the contribution it could make to combating afflictions (including preventable blindness) in malnourished children in the developing world.

I don’t know, or even have a particularly strong intuition about, what risks GM foods pose.

But I do have a very strong opinion that a state of cultural polarization over GM food risks would be a bad thing. As myriad controversies--from the nuclear power debate of the 1980s to the climate change debate of today--have made clear, when risk issues become infused with antagonistic cultural meanings, democratic societies are less likely to enact policies that reflect the best scientific evidence, whatever it might be.

Okay. Enough for now. In the next post in this two-part series, I will identify one important countervailing influence that didn’t exist in Europe before it became a site of conflict over GM food risks but that does exist now. I’ll also report some data that bears on the current degree of cultural polarization that exists in the US over the risks that GM foods present.

Parts two & three in this series.

Monday
Oct082012

More R^2 envy!

To add to your "it's-not-how-big-but-what-you-do-with-it" R^2 file, this from Andrew Gelman: