
Recent blog entries
Monday, September 9, 2013

The quality of the science communication environment and the vitality of reason

The Motivated Numeracy and Enlightened Self-Government working paper has apparently landed in the middle of an odd, ill-formed debate over the "knowledge deficit theory" and its relevance to climate-science communication. I'm not sure, actually, what that debate is about or who is involved.  But I do know that any discussion framed around the question "Is the knowledge-deficit theory valid?" is too simple to generate insight. There are indeed serious, formidable contending accounts of the nature of the "science communication problem"--the failure of citizens to converge on the best available evidence on the dangers they face and the efficacy of measures to abate them.  The antagonists in any "knowledge-deficit debate" will at best be stick-figure representations of these positions. 

Below is an excerpt from the concluding sections of the MNESG paper. It reflects how I see the study findings as contributing to the position I find most compelling in the scholarly discussion most meaningfully engaged with the science communication problem. The excerpt can't by itself supply a full account of the nature of the contending positions and the evidence on which they rest (none is wholly without support). But for those who are motivated to engage the genuine and genuinely difficult questions involved, it might help identify paths of investigation that lead to locations much more edifying than the ones in which the issue of "whether the knowledge deficit theory is valid" is thought to be a matter worthy of discussion.

5.2. Ideologically motivated cognition and dual process reasoning generally

The ICT hypothesis corroborated by the experiment in this paper conceptualizes Numeracy as a disposition to engage in deliberate, effortful System 2 reasoning as applied to quantitative information. The results of the experiment thus help to deepen insight into the ongoing exploration of how ideologically motivated reasoning interacts with System 2 information processing generally.

As suggested, dual process reasoning theories typically posit two forms of information processing: a “fast, associative” one “based on low-effort heuristics”, and a “slow, rule based” one that relies on “high-effort systematic reasoning” (Chaiken & Trope 1999, p. ix). Some researchers have assumed (not unreasonably) that ideologically motivated cognition—the tendency selectively to credit or discredit information in patterns that gratify one’s political or cultural predispositions—reflects over-reliance on the heuristic-driven, System 1 style of information processing (e.g., Lodge & Taber 2013; Marx et al. 2007; Westen, Blagov, Harenski, Kilts, & Hamann, 2006; Weber & Stern 2011; Sunstein 2006).

There is mounting evidence that this assumption is incorrect. It includes observational studies that demonstrate that science literacy, numeracy, and education (Kahan, Peters, Wittlin, Slovic, Ouellette, Braman & Mandel 2012; Hamilton 2012; Hamilton 2011)—all of which it is plausible to see as elements or outgrowths of the critical reasoning capacities associated with System 2 information processing—are associated with more, not less, political division of the kind one would expect if individuals were engaged in motivated reasoning.

Experimental evidence points in the same direction. Individuals who score higher on the Cognitive Reflection Test, for example, have shown an even stronger tendency than ones who score lower to credit evidence selectively in patterns that affirm their political outlooks (Kahan 2013). The evidence being assessed in that study was nonquantitative but involved a degree of complexity that was likely to obscure its ideological implications from subjects inclined to engage the information in a casual or heuristic fashion. The greater polarization of subjects who scored highest on the CRT was consistent with the inference that individuals more disposed to engage systematically with information would be more likely to discern the political significance of it and would use their critical reasoning capacities selectively to affirm or reject it conditional on its congeniality to their political outlooks.

The experimental results we report in this paper display the same interaction between motivated cognition and System 2 information processing. Numeracy predicts how likely individuals are to resort to more systematic as opposed to heuristic engagement with quantitative information essential to valid causal inference. The results in the gun-ban conditions suggest that high Numeracy subjects made use of this System 2 reasoning capacity selectively, in a pattern consistent with their motivation to form a politically congenial interpretation of the results of the gun-ban experiment.  This outcome is consistent with the view of scholars who see both systematic (or System 2) and heuristic (System 1) reasoning as vulnerable to motivated cognition (Cohen 2003; Giner-Sorolla & Chaiken 1997; Chen, Duckworth & Chaiken 1999).
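
To see what kind of quantitative task is at issue, here is a minimal sketch of a 2x2 covariance-detection problem of the sort described above. The cell counts below are illustrative stand-ins, not the study's actual stimulus:

```python
# Did crime decrease in cities that banned carrying concealed handguns?
# Illustrative cell counts for a 2x2 covariance-detection problem;
# these numbers are stand-ins, not the actual study stimulus.

ban_decrease, ban_increase = 223, 75        # cities that enacted a ban
no_ban_decrease, no_ban_increase = 107, 21  # cities that did not

# Heuristic (System 1) reading: compare the raw counts in the "ban" row.
# 223 >> 75, so the ban "obviously" worked.
heuristic_says_ban_worked = ban_decrease > ban_increase  # True

# Valid causal inference (System 2) requires comparing *rates* across rows.
ban_rate = ban_decrease / (ban_decrease + ban_increase)                # ~0.75
no_ban_rate = no_ban_decrease / (no_ban_decrease + no_ban_increase)    # ~0.84

# Crime decreased at a higher rate where there was no ban, so these
# (made-up) data actually point the other way.
print(f"decrease rate with ban: {ban_rate:.2f}")
print(f"decrease rate without ban: {no_ban_rate:.2f}")
print("ban worked?", ban_rate > no_ban_rate)  # False
```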

These findings also bear on whether ideologically motivated cognition is usefully described as a manifestation of “bounded rationality.” Cognitive biases associated with System 1 reasoning are typically characterized that way on the ground that they result from over-reliance on heuristic patterns of information processing that reflect generally adaptive but still demonstrably inferior substitutes for the more effortful and more reliable type of information processing associated with System 2 reasoning (e.g., Kahneman 2003; Jolls, Sunstein & Thaler 1998).

We submit that a form of information processing cannot reliably be identified as “irrational,” “subrational,” “boundedly rational” or the like independent of what an individual’s aims are in making use of information. It is perfectly rational, from an individual-welfare perspective, for individuals to engage decision-relevant science in a manner that promotes culturally or politically congenial beliefs. Making a mistake about the best-available evidence on an issue like climate change, nuclear waste disposal, or gun control will not increase the risk an ordinary member of the public faces, while forming a belief at odds with the one that predominates within important affinity groups of which he or she is a member could expose him or her to an array of highly unpleasant consequences (Kahan 2012). Forms of information processing that reliably promote the stake individuals have in conveying their commitment to identity-defining groups can thus be viewed as manifesting what Anderson (1993) and others (Cohen 2003; Akerlof and Kranton 2000; Hillman 2010; Lessig 1995) have described as expressive rationality.

If ideologically motivated reasoning is expressively rational, then we should expect those individuals who display the highest reasoning capacities to be the ones most powerfully impelled to engage in it (Kahan et al. 2012). This study now joins the ranks of a growing list of others that fit this expectation and that thus support the interpretation that ideologically motivated reasoning is not a form of bounded rationality but instead a sign of how it becomes rational for otherwise intelligent people to use their critical faculties when they find themselves in the unenviable situation of having to choose between crediting the best available evidence or simply being who they are.

6. Conclusion: Protecting the “science-communication environment”

To conclude that ideologically motivated reasoning is expressively rational obviously does not imply that it is socially or morally desirable (Lessig 1995). Indeed, the implicit conflation of individual rationality and collective wellbeing has long been recognized to be a recipe for confusion, one that not only distorts inquiry into the mechanisms of individual decisionmaking but also impedes the identification of social institutions that remove any conflict between those mechanisms and attainment of the public good (Olson 1965). Accounts that misunderstand the expressive rationality of ideologically motivated cognition are unlikely to generate reliable insights into strategies for counteracting the particular threat that persistent political conflict over decision-relevant science poses to enlightened democratic policymaking.

Commentators who subscribe to what we have called the Science Comprehension Thesis typically propose one of two courses of action. The first is to strengthen science education and the teaching of critical reasoning skills, in order better to equip the public for the cognitive demands of democratic citizenship in a society where technological risk is becoming an increasingly important focus of public policymaking (Miller & Pardo 2000). The second is to dramatically shrink the scope of the public’s role in government by transferring responsibility for risk regulation and other forms of science-informed policymaking to politically insulated expert regulators (Breyer 1993). This is the program advocated by commentators who believe that the public’s overreliance on heuristic-driven forms of reasoning is too elemental to human psychology to be corrected by any form of education (Sunstein 2005).

Because it rejects the empirical premise of the Science Comprehension Thesis, the Identity-protective Cognition Thesis takes issue with both of these prescriptions. The reason that citizens remain divided over risks in the face of compelling and widely accessible scientific evidence, this account suggests, is not that they are insufficiently rational; it is that they are too rational in extracting from information on these issues the evidence that matters most for them in their everyday lives. In an environment in which positions on particular policy-relevant facts become widely understood as symbols of individuals’ membership in and loyalty to opposing cultural groups, it will promote people’s individual interests to attend to evidence about those facts in a manner that reliably conforms their beliefs to the ones that predominate in the groups they are members of. Indeed, the tendency to process information in this fashion will be strongest among individuals who display the reasoning capacities most strongly associated with science comprehension.

Thus, improving public understanding of science and propagating critical reasoning skills—while immensely important, both intrinsically and practically (Dewey 1910)—cannot be expected to dissipate persistent public conflict over decision-relevant science. Only removing the source of the motivation to process scientific evidence in an identity-protective fashion can. The conditions that generate symbolic associations between positions on risk and like facts, on the one hand, and cultural identities, on the other, must be neutralized in order to assure that citizens make use of their capacity for science comprehension.[1]

In a deliberative environment protected from the entanglement of cultural meanings and policy-relevant facts, moreover, there is little reason to assume that ordinary citizens will be unable to make an intelligent contribution to public policymaking. The amount of decision-relevant science that individuals reliably make use of in their everyday lives far exceeds what any of them (even scientists, particularly when acting outside of the domain of their particular specialty) are capable of understanding on an expert level. They are able to accomplish this feat because they are experts at something else: identifying who knows what about what (Keil 2010), a form of rational processing of information that features consulting others whose basic outlooks individuals share and whose knowledge and insights they can therefore reliably gauge (Kahan, Braman, Cohen, Gastil & Slovic 2010).

These normal and normally reliable processes of knowledge transmission break down when risk or like facts are transformed (whether through strategic calculation or misadventure and accident) into divisive symbols of cultural identity. The solution to this problem is not—or certainly not necessarily!—to divest citizens of the power to contribute to the formation of public policy. It is to adopt measures that effectively shield decision-relevant science from the influences that generate this reason-disabling state (Kahan et al. 2006).

Just as individual well-being depends on the quality of the natural environment, so the collective welfare of democracy depends on the quality of a science communication environment hospitable to the exercise of the ordinarily reliable reasoning faculties that ordinary citizens use to discern what is collectively known. Identifying strategies for protecting the science communication environment from antagonistic cultural meanings—and for decontaminating it when such protective measures fail—is the most critical contribution that decision science can make to the practice of democratic government.


[1] We would add, however, that we do not believe that the results of this or any other study we know of rule out the existence of cognitive dispositions that do effectively mitigate the tendency to display ideologically motivated reasoning. Research on the existence of such dispositions is ongoing and important (Baron 1995; Lavine, Johnston & Steenbergen, 2012). Existing research, however, suggests that the incidence of any such disposition in the general population is small and is distinct from the forms of critical reasoning disposition—ones associated with constructs such as science literacy, cognitive reflection, and numeracy—that are otherwise indispensable to science comprehension. In addition, we submit that the best current understanding of the study of science communication indicates that the low incidence of this capacity, if it exists, is not the source of persistent conflict over decision-relevant science. Individuals endowed with perfectly ordinary capacities for comprehending science can be expected reliably to use them to identify the best available scientific evidence so long as risks and like policy-relevant facts are shielded from antagonistic cultural meanings.

Wednesday, September 4, 2013

Motivated Numeracy (new paper)!

Here's a new paper. I'll probably blog about it soon, but if you'd like to comment on it now, please do!

 

Tuesday, September 3, 2013

The NRA's "expressive-rope-a-dope-trick"


The NRA gets science communication.

In fact, it understands something that many groups that at least purport to be committed to promoting constructive public engagement with the best available scientific evidence don’t.

Of course, it uses what it understands for a purpose very distinct from promoting such engagement. Indeed, it uses its knowledge about how diverse, ordinary people ordinarily come to know what they know about decision-relevant science in a manner that effectively impedes their convergence on evidence essential to their common welfare.

This makes the NRA a truly evil entity—a kind of syndicalist element subversive of the Constitution of the Liberal Republic of Science.

But one can still actually learn something from seeing what it knows and what it does.

The point the NRA gets—and that many other groups that I think have admirable aims don’t, and that makes them tend to do a bad job—is that effective communication of decision-relevant science depends on the quality of the science communication environment.

The science communication environment is the sum total of cues, influences, and processes that enable people to recognize as known by science so many more things than they could possibly form a meaningful understanding of for themselves. The number of things that fit into that category is immense—from the contribution that antibiotics make to treating diseases to the validity of the modern telecommunications technologies they rely on to transmit data, from the reliability of their vehicles’ GPS systems to the public health benefits of pasteurization of raw milk, from the nontoxicity of pressed wood products manufactured subject to state and federal formaldehyde limits to the nutritional value of food products (massive amounts of them in the US) that are prepared with GM technology.

One of the most vital constituents of the science communication environment is the existence of authoritative networks of certification.

I’m talking, really, just about the role played by the utterly ordinary, everyday communities individuals inhabit—the ones comprising their neighbors, their friends, their trusted coworkers, and the myriad professionals they rely on, from doctors to auto mechanics to accountants to insurance adjusters.

These communities are flush with reliable, valuable guidance that individuals can use to determine what’s known to science.  Of course, they are also coursing with bogus information too—unsupported and unsupportable claims about the dangers of everyday products (“watch out—cell phone radiation causes brain tumors!”) and absurd claims about health remedies (“ach—don’t do chemotherapy for your breast cancer; yoga will do the trick!”)

People sort out one from the other—again, not because they are experts on the claims being made about what science knows, but because they are experts at something else: figuring out who actually knows what they are talking about, and can be relied upon to transmit the best available evidence in a reliable and accurate manner.

This is the key to understanding why the transmission of knowledge tends to have a culturally insular quality to it.

The communities of certification people resort to in orienting themselves appropriately with respect to decision-relevant science are ones made up of people who share basic outlooks on the good life.  People enjoy spending time with people like that and tend to form important projects with them. They can read those people more easily—and distinguish the genuinely knowledgeable from the bullshitters among them more readily—than they can when engaging people whose cultural orientation is very different from their own.

We live in a society that tolerates and celebrates cultural diversity (a fact that is actually essential to the progress of scientific discovery), and therefore the number of communities people rely on to perform this certification function is large.

But that’s generally not a problem.  These communities are all in touch with what science knows.  They all generally lead their members to the same conclusions.

Indeed, if there were a community that consistently misled its members on what science knows, the members of that group, given how important decision-relevant science is to their own well-being, wouldn’t last very long.

Nevertheless, every once in a while a risk or other policy-relevant fact becomes entangled in antagonistic cultural meanings that convert positions on it, in effect, into badges of membership in and loyalty to opposing cultural groups. 

When that happens, members of diverse cultural groups won’t converge on the best available evidence.  Instead—using the very same normal, and normally reliable cues to ascertain what’s known to science—they will polarize.

The stake that any ordinary person has in protecting the status of, and his or her standing in, one of these groups tends to exceed the stake that person has, as an individual, in forming scientifically informed personal beliefs. As a result, individuals in this circumstance will predictably engage information in a manner geared more reliably to forming beliefs that match the position identified with their group than ones supported by the best available scientific evidence.  
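
To make the logic concrete, here is a toy expected-payoff comparison. Every number in it is an assumption invented for illustration—none comes from any study:

```python
# Toy model of the individual's choice between accuracy and group congruence.
# All quantities are invented purely for illustration.

p_pivotal = 1e-7        # assumed chance one person's belief changes policy
policy_stake = 1e6      # assumed personal harm if collective policy is wrong
social_cost = 100.0     # assumed personal cost of a group-incongruent belief

# Expected benefit of holding the *accurate* belief: it pays off only in the
# vanishingly unlikely event that your belief is pivotal to the outcome.
accuracy_payoff = p_pivotal * policy_stake        # 0.1

# Benefit of the group-congruent belief: the social cost is avoided for sure.
congruence_payoff = social_cost                   # 100.0

# For the individual, identity-protective cognition dominates.
print(congruence_payoff > accuracy_payoff)  # True
```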

Indeed, in these circumstances, individuals endowed with the capacities and dispositions most strongly associated with science comprehension will use these abilities in an opportunistic fashion to serve the goal they have of conforming the evidence they encounter or actively seek out to the position that is predominant in their cultural group.

These antagonistic meanings can be likened to a form of pollution in the science communication environment.  Their existence disables the faculties that ordinary members of the public use to recognize what science knows. 

That’s what the NRA knows.  That’s the insight into the science of science communication that it ruthlessly exploits—not to promote convergence on the best available evidence but to cultivate a state of persistent, knowledge-disabling antagonism.

The NRA is in the business of science miscommunication.  And its most potent weapon is not the dissemination of studies that purport to show that crime rates go down when people are allowed to carry concealed handguns. 

It’s the steady stream of pollution that it emits into the science communication environment through actions calculated to sustain and invigorate the culturally antagonistic meanings that surround guns in American society.

Really, the NRA is an ingenious science communication environment polluter.

Its most creative, successful, and insidious technique involves what I will call the “expressive-rope-a-dope” maneuver.

This trick involves proposing a law that in fact has zero behavioral consequence but that is bristling with cultural meanings that one can expect to antagonize other cultural groups.  The effect is achieved, though, not by antagonizing the other group (I suppose the NRA or some other group using this tactic might take pleasure in that) but by provoking the opposing group into denouncing the law in terms that are similarly suffused with culturally assaultive language.

The result of the violent collision of these meanings is a mushroom cloud of toxic, culturally partisan recrimination that blankets the public in the radiation of identity threat.  Whatever science content is being transmitted by anyone’s messages is drowned out by the much clearer, much more intense, much more consequential signal that the positions at stake are symbols of membership in your group; deviate from that position at your peril!

Consider two examples of the NRA using this trick.

The first involved its campaign to push for adoption of “stand your ground” self-defense laws.  These laws state that a person needn’t retreat before using deadly force to repel a threat of death or great bodily harm.

From the beginning, the enactment of these laws has drawn high-profile, incensed denunciations of “wild west,” “shoot first,” “vigilante justice”—along with completely untenable, absurd claims about how this “sharp turn in American law” increased homicide rates.

The simple truth is that these laws were not a departure, radical or otherwise, from existing law. The right to “stand one’s ground” had been the majority rule in the U.S. for over a century, and was already on the books in most of the states that adopted them!

The absurdity of media reports blaming “relaxation” of self-defense standards for increased homicides was comically inflated by the incompetence of publicity-hungry scholars peddling econometric models purporting to quantify how much “reducing the legal price” for homicide in states that never changed their law increased the “return” on resorting to deadly violence!

The aim of getting states to enact them wasn’t to create a legal safe haven for individuals who forgo a physical one in favor of blowing away a deadly attacker—a scenario that one is hard-pressed to find instances of except in law school hypotheticals.

Rather, as I’ve discussed previously, the effort was a calculated strategy to reactivate a long-dormant, largely sectional conflict between proponents of opposing cultural styles—one stressing values such as individual honor and self-reliance, the other the democratic ideal of reasoned, nonviolent resolution of conflict and the duty of universal concern—who saw the contest over enactment of these laws as a symbolic contest between their competing visions.

Mission accomplished for the NRA, which has parlayed the recurring attacks on “stand your ground” laws—the most recent in connection with the Trayvon Martin case, in which that law played no role in the defense theory—into a sense of indignation and defiant pride on the part of those who recognize in the tone and idiom of the critiques contempt for their identities.   

The second involves legislation now pending in Missouri that would make it a crime for federal agents to enforce federal gun legislation in the state. The NRA is not playing an open role in backing the legislation, but it frequently orchestrates symbolic legislation of this sort behind the scenes. Predictably, the law has provoked an ear-splitting clang of alarm bells from NRA critics in the national media warning that the legislation, if passed, will become a model for “nullification” of federal gun laws across the Nation. 

They should save their breath.  Such laws are a dead letter under the Supremacy Clause of the U.S. Constitution.  There is zero likelihood that any state prosecutor would even try to enforce one, much less that a federal court (to which any such prosecution would be subject to “removal” or transfer under federal law) would uphold its constitutionality.

But of course, the contrived panic is music to the NRA’s ears.  It supplies them with even more vivid and dramatic materials with which to feed the sense of cultural encirclement that drives those whose identities are promiscuously assaulted by gun-control advocates to donate money to the organization. 

The biggest threat to the NRA isn’t gun legislation. It is apathy.

Gun ownership is the strongest predictor (not surprisingly) of resistance to gun control legislation.  Over time, the percentage of Americans owning guns has declined.

Halting that trend, the NRA recognizes, depends on sustaining the vitality of the cultural meanings that have always made guns so popular with a large segment of the American public.

The surest way to do that is to manufacture dramatic instances of expressive conflict over guns, thereby reinvigorating opposition to gun control as a symbol of cultural identity and bombarding the communities in which that cultural style is prevalent with the signal that a strong position against regulation of guns continues to be something that those with whom they interact in their daily lives will use to judge their character.

But there is in fact a way effectively to oppose this strategy.

The expressive-rope-a-dope maneuver requires a dope—a loud, aggressive, ill-informed opposition that doesn’t get that the laws it’s attacking are purely expressive, or that the contribution those laws make to maintaining the gun as a symbol of identity depends on attacking them in a culturally assaultive way.

Don't do that. Don't take the bait. Don't give the NRA what it wants by pretending symbolic gestures have real and dire consequences and then making opposition to them the occasion for amplifying the signal of cultural hostility that fills otherwise ordinary citizens with resentment and fury.

There’s no meaningful political theater if only half the cast shows up.

Indeed, this is something that lots of groups that are committed to promoting constructive engagement with decision-relevant science could benefit from learning.  The NRA isn't the only group that knows how to rope dopes.

This assumes, of course, that the groups getting roped really want to protect the quality of the science communication environment from culturally partisan meanings.

Some of them likely value the chance to engage risk issues in a manner that fills the science communication environment with culturally partisan meanings.

If so, then they aren't being dopes when they snap at the bait and make their own contribution to the toxic fog of cultural recrimination surrounding the American gun question or other issues that feature persistent polarization over decision-relevant science.

In that case, they are being tapeworms of cognitive illiberalism, just like the NRA.

 

Thursday, August 29, 2013

Science and the craft norms of science journalism, Part 2: Making craft norms evidence based

This is the second in a series that will be between 3 and 14,321 posts on the connection between science and the craft norms of science journalism.

The point of the series, actually, is that there isn’t—ironically—the sort of connection there should be.

I myself revere science journalists. To me, they perform a kind of magic, making it possible for me, as someone of ordinary science intelligence to catch a glimpse of, and be filled with the genuine wonder and awe inspired by, seeing what we have come to know about the workings of the universe by use of science.

This isn’t really magic, of course, because there’s no such thing as magic, and it would insult anyone who accepts science’s way of knowing as the best—the only valid—way of knowing to say that what he or she is doing amounts to “magic” if the person saying this weren’t being ironic or whimsical (I could imagine describing something as “magic” in a tone of rebuke or contempt: e.g., “Freudian psychoanalysis is a form of magic.”).

But what science journalists do is amazing and hard to fathom. They perform an astonishing task of translation, achieving a practical, workable commensurability between the system of rational apprehension that ordinary people use to make sense of the phenomena that they must recognize and handle appropriately in the domain of everyday life and the system of rational apprehension that scientists in a particular field must use to make sense of the phenomena in their professional domain.

Both systems are stocked with prototypes finely tuned to enable the sort of recognition that negotiating the respective domains requires. 

But those prototypes are vastly different; or in any case, the ones the experts use are absent from, and very distinct from, anything in the inventory of patterns and templates of the ordinary, intelligent person. 

These special-purpose expert prototypes (acquired through training and professionalization and experience) are what allow the expert to see reliably what others in his or her field see, and thus to participate in the sharing and advancement of knowledge in that expert domain.

But enabling the ordinary nonexpert to see the things that science comes to know as experts use their specialized professional judgment is the whole point of science journalism!  

Necessarily science journalists must find some means of bridging the gap between the prototypes of the expert scientist and the everyday ones of the curious nonexpert so that the latter can form a meaningful apprehension of the amazing, and awe-inspiring insights that the former glean through science's methods of knowing.

This isn’t magic, in fact.

It is craft. Of the most impressive and admirable sort. 

It comprises norms that reliably populate the mind of the science journalist with prototypes and patterns of communication practices that achieve the amazing commensurability I’m talking about.

Science journalists generate these craft norms through their collective activity, and acquire them through experience.

But they aren’t static.  They evolve.

Moreover, they aren’t invisible.  They are matters that science journalists, like any other professionals, become acutely aware of as they do their jobs, and do them in concert with others with whom they discuss, and from whom they learn, their craft.

And like other professionals, science journalists are keenly interested in whether their craft norms are in order.

In the account I’m giving, craft norms are the medium by which professional judgment is formed and through which it operates.

Like a method of scientific measurement, professional judgments need to be reliable: they must enable consistent, replicable, shared apprehension of the phenomena that are of consequence to members of the profession.

But like methods of scientific measurement they must also be valid.  The thing they are enabling those who possess them reliably, collectively, to apprehend and form judgments about must genuinely be the thing that those in the profession are trying to see.

In the case of the science journalist, that thing that must be seen—not just reliably but accurately—is how to make it possible for the nonexpert of ordinary science intelligence to form the most meaningful, authentic, true picture of the awesome things that are genuinely known to science.

Science journalists, like other professionals, are constantly arguing about whether their norms are valid in this sense. "Are we really doing what we want to do as best we can?," they ask themselves.

Actually, there is no sense of crisis in the profession (as far as I can tell). They know full well that in the main their craft norms are reliably guiding them to ways of communicating that actually work.

But there are plenty of particular matters—ones of genuine consequence—that they worry about, that they have different opinions on, that relate to whether particular things they are doing might actually be working less well than some alternative or maybe even frustrating their goals.

The last post touched on one of those things: In it I discussed Andrew Gelman’s critique of the passivity of science journalists in reporting on “WTF!” social science studies—ones that report remarkable, astonishing, unbelievable results that, in Gelman’s view, almost inevitably are shown to rest on a very basic methodological defect.

It’s not as if science journalists aren’t aware of that issue & filled with views about it!

What’s more, Gelman proposed a solution: interview lots of additional experts besides the study authors and find out if they think the study is valid.

Actually, science journalists talk about this too!  The issue isn’t just whether this is a feasible idea but whether it is actually a sound one given what science journalism is trying to do.

Gelman didn’t recognize that his prescription is bound up with the controversy over whether “balanced coverage”—a norm that enjoins science journalists to cover “both sides” and evince a posture of “neutrality” toward disputed scientific claims—actually contravenes the objective of helping the public form an accurate perception of what’s known by science, particularly on controversial issues like, say, evolution or climate change.

Which gets to another thing that I think was missing, not just from Gelman’s (excellent!) essay but from the discussion that science journalists, as a professional community, are constantly having.

The matters they are debating when they reflect on the validity of their craft norms are very often empirical ones.

They admit of empirical investigation.  Indeed, they demand it: members of a profession are no more able to determine through simple debate which of multiple plausible accounts of a phenomenon is true than are scientists.

Scientists don’t just debate in that situation. They collect empirical evidence!

That’s what science journalists need to do too. 

They need to make their profession evidence-based—they need to create procedures for identifying craft-norm issues that admit of empirical testing, and mechanisms and institutions for collecting that evidence, transmitting it, and reflecting in common on what it reveals.

Not as a substitute for their craft-norm-informed professional judgment—but as a self-consciously managed source of knowledge that they can use as they participate in the process by which their craft norms are formed, evolve, and are transmitted.

The need for an evidence-based culture in science journalism is one of the things I had in mind when I said that the points of connection between science journalism and science itself need to be strengthened.

In fact, it is the most important.  But there are other points worth mentioning—ones that it will be easier to explain now that this point is out there.

So I will say more. Later.                               

But the one last thing I will say is that science journalism is not the only profession that is committed to the transmission of scientific knowledge that, to its disadvantage, fails to use science’s way of knowing to advance its knowledge of how to transmit what science knows.

Indeed, science journalists are in a position to do a tremendous favor for those other professions by showing them how to remedy this problem.

Some might think, after decades of aggressive inattention to the science of science communication by those responsible for transmitting decision-relevant science in our democracy, that nothing short of magic will ever remedy our democracy’s deficit in science communication intelligence.

If so, then science journalists are the ones we need to show us how to pull this trick off.

Wednesday, August 28, 2013

Science and the craft norms of science journalism, Part 1: What Gelman says

One of  the most reliable signs that I had a good idea is that someone else has already come up with it and developed it in a more sophisticated way than I would have.

In that category is Stats Legend Andrew Gelman's recent essay in Symposium imploring science journalists to adopt a more critical stance in reporting on the publication of scientific papers.

Gelman suggests that the passivity of journalists in simply parroting the claims reflected in university press releases feeds the practice, among some scholars and accommodating journals, of publishing sensational, “what the fuck!” studies (a topic that Gelman has written a lot about recently; e.g., here & here & here)--basically findings that are just bizarre and incomprehensible and thus a magnet for attention.

Nearly always, he believes, such studies reflect bogus methods.

Indeed, the absence of any sensible mechanism of cognition or behavior for the results should make people very suspicious about the methods in these studies. As Gelman notes, one can always find weird, meaningless correlations & make up stories afterwards about what they mean. Good empiricism is much more likely when researchers are investigating which of the multitude of plausible but inconsistent things we believe is really true than when they come running in excitedly to tell us that bicep size correlates with liberal-conservative ideology.
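
The mechanism is easy to demonstrate for yourself. Here is a minimal simulation (mine, not Gelman's) showing that pure noise, probed with enough comparisons, reliably coughs up "significant" correlations:

```python
# Minimal simulation of the point above: test enough hypotheses on pure
# noise and some will look "significant." This illustrates the mechanism
# Gelman describes; it is not taken from his essay.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_traits = 100, 50

# 50 random, mutually independent "traits" and an equally random "outcome"
traits = rng.normal(size=(n_subjects, n_traits))
outcome = rng.normal(size=n_subjects)

false_positives = 0
for j in range(n_traits):
    r, p = stats.pearsonr(traits[:, j], outcome)
    if p < 0.05:
        false_positives += 1
        print(f"trait {j}: r = {r:+.2f}, p = {p:.3f}  <- a 'WTF!' finding")

# At alpha = 0.05, expect ~2-3 spurious hits, each ready for a press release.
print(f"{false_positives} 'significant' correlations out of {n_traits}")
```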

Gelman's examples (in this particular essay; survey his blog if you want to get a glimpse of just how long and relentless the WTF! parade has become) include recently published papers purporting to find that “women’s political attitudes show huge variation across the menstrual cycle” (Psychological Science), that “parents who pay for college will actually encourage their children to do worse in class” (American Journal of Sociology), and that “African countries are poor because they have too much genetic diversity” (American Economic Review), along with one of his favorites, Satoshi Kanazawa’s ludicrous study finding that “beautiful parents” are more likely to have female offspring (Journal of Theoretical Biology).

All these papers, Gelman argues, had manifest defects in methods but were nevertheless featured, widely and uncritically, in the media in a manner that Gelman believes drove their unsupported conclusions deeply and perhaps irretrievably into the recursive pathways of knowledge transmission associated with the internet.

Not surprisingly, Gelman says that he understands that science journalists can’t be expected to engage empirical papers in the way that competent and dedicated reviewers could and should (Gelman obviously believes that the reviewers even for many top-tier journals are either incompetent, lazy, or complicit in the WTF! norm).

So his remedy is for journalists to do a more thorough job of checking out the opinions of other experts before publishing a story about (really, just publicizing) a seemingly “amazing, stunning” study result:

Just as a careful journalist runs the veracity of a scoop by as many reliable sources as possible, he or she should interview as many experts as possible before reporting on a scientific claim. The point is not necessarily to interview an opponent of the study, or to present “both sides” of the story, but rather to talk to independent scholars, get their views and troubleshoot as much as possible. The experts might very well endorse the study, but even then they are likely to add more nuance and caveats. In the Kanazawa study, for example, any expert in sex ratios would have questioned a claim of a 36% difference—or even, for that matter, a 3.6% difference. It is true that the statistical concerns—namely, the small sample size and the multiple comparisons—are a bit subtle for the average reader. But any sort of reality check would have helped by pointing out where this study took liberties. . ..

If journalists go slightly outside the loop — for example, asking a cognitive psychologist to comment on the work of a social psychologist, or asking a computer scientist for views on the work of a statistician – they have a chance to get a broader view. To put it another way: some of the problems of hyped science arise from the narrowness of subfields, but you can take advantage of this by moving to a neighbouring subfield to get an enhanced perspective. 

Gelman sees this sort of interrogation, moreover, as only an instance of the sort of engagement that a craft norm of disciplined “skepticism” or “uncertainty” could usefully contribute to science journalism:

 [J]ournalists should remember to put any dramatic claims in context, given that publication in a leading journal does not by itself guarantee that work is free of serious error. . ..

Just as is the case with so many other beats, science journalism has to adhere to the rules of solid reporting and respect the need for skepticism. And this skepticism should not be exercised for the sake of manufacturing controversy—two sides clashing for the sake of getting attention—but for the sake of conveying to readers a sense of uncertainty, which is central to the scientific process. The point is not that all articles are fatally flawed, but that many newsworthy studies are coupled with press releases that, quite naturally, downplay uncertainty.

The bigger point . . . is that when reporters recognize the uncertainty present in all scientific conclusions, I suspect they will be more likely to ask interesting questions and employ their journalistic skills.

So these are all great points, and well expressed. Like I said, I had some ideas like this and I’m sure the marginal value of them, whatever that might have been, is even smaller than it would have been in view of the publication of  Gelman’s essay.

But in fact, they are a bit different from Gelman's.

I think in fact that his critique of science journalism passivity rests on a conception of what science journalists do that is still too passive (notwithstanding the effortful task he is proposing for them).  
I also think--ironically, I guess!--that Gelman's account is inattentive to the role that empirical evidence should play in evaluating the craft norms of science journalism; indeed, to the role that science journalists themselves should play in making their profession more evidence based!

Well, I'll get into all of this-- in parts 2 through n of this series.

Thursday, August 22, 2013

What do alternative sanctions mean *now*?

Back when I was in jr high school & an “assistant” professor at the University of Chicago Law School (where I had an office between Larry Lessig’s & Elena Kagan’s on the same floor of the library as Tracey Meares & Cass Sunstein, & where Liz Cheney was in the first group of 1st-yr law students I taught, & a kid named Barack Obama, who was insanely running for Congress against an unbeatable incumbent, taught those same students about the Equal Protection Clause of the Constitution), I wrote an article entitled “What Do Alternative Sanctions Mean?”

The article had a section offering a qualified, pragmatic defense of “shaming” penalties—conditions of probation, really, that involved engaging in or submitting to some ritualistic and frankly self-debasing publicization of one’s offense: taking out a newspaper advertisement proclaiming “I sold drugs with my kids in the car,” or displaying a recognizable “DUI” marker on one’s license plate, or standing in front of a store with a sign announcing that one had been caught shoplifting, or having to prepare & circulate to other registered lobbyists a long “how ‘not to’ ” manual on compliance with Ethics in Govt Act regulations illustrated with first-hand accounts of the numerous violations one had committed.

Surprisingly, that proposal got a fair amount of attention. George Will wrote a column, and the NY Times an op-ed about it (they both liked the idea!). I got to be on the “Today Show” (woo hoo!) & be interviewed by Bryant Gumbel (it was after he retired as host but he was filling in for I can’t remember who).  All kinds of people—from earnest judges looking for something to do besides send people to jail; to other academics wanting to show I was wrong (some for genuinely interesting reasons); to lazy journalists recycling the same story that 15 others had already written; to publicity-mad megalomaniacs using their own personal tragedies w/ abducted children as the occasion for grabbing attention as the organizers of mindless national movements of one sort or another; to “popular book” publishers—kept wanting to talk about the idea, and necessarily wanted me to keep talking about it.

I got bored quickly & moved on (more or less).

Maybe it was the desire to put as much distance between myself and the intellectual sterility of the spectacle, though, that explains how I missed something truly amazing: the dissipation of the cultural meanings that have historically underwritten the dominance of imprisonment as a form of punishment in our criminal justice system.

That’s what the article—What Do Alternative Sanctions Mean?—was actually about.

That is, it was about why imprisonment persisted as a penalty for so many nonviolent (or relatively nonviolent) offenders who criminal justice experts all agreed needn’t be incarcerated.

For these offenders—property misappropriators, drunk drivers, petty drug dealers and users, white collar offenders, and others, who in total seemed to make up about 50% of the population of people behind bars—“alternative sanctions” such as fines and community service would be just as effective in protecting the community & cost a whole lot less.

There was compelling empirical data—expert consensus, even--to back this claim up. And the experts presenting this evidence included not just wonky public policy analysts but an ideologically diverse array of advocates, including civil-libertarian groups concerned with the needless destruction of liberty and conservative, economic ones protesting the wasteful expenditure of resources.

But the public consistently rejected what the experts had to say. How frustrating! The obvious remedy was always another article, another book presenting all the same data, and then adding to it, in the expectation/hope that eventually the public would “get the message” or “understand” the math etc & fall into line with the obviously rational solution.

My thesis was that the expert account was missing something: the importance of social meanings.

The public, I argued, expects punishments not just to protect them from harm, or to visit some quantum of “disutility” on offenders, but to express an appropriate attitude toward the wrongdoing.

Drawing on the work of philosophers Joel Feinberg & Jean Hampton, and fortifying the account with one or another source in psychology and sociology and with diverse casual forms of real-world evidence (the account was entirely synthetic, an exercise in pragmatic conjecture identified as such & intended to invite more rigorous empirical testing, a modest amount of which has been done), I argued that criminal wrongs just were public acts understood to manifest false claims about the value of persons, goods, and states of affairs relative to the interests or goals of the offender.

In the face of such actions, it was incumbent on anyone committed to a true or morally appropriate valuation of those same things to manifest the same by doing something that would unambiguously convey that valuation.

That’s what punishment is for.

That’s what punishment is: a setback of some kind, imposed by an agent authorized to speak for the political community, that expresses condemnation of the offender and thereby expresses the community’s recognition of the true worth of the person, good, or other interest that the wrongdoer’s own actions have denied.

The problem with the conventional alternative sanctions, I argued, was that they didn’t express condemnation—or express it as clearly and unequivocally as imprisonment.

A fine seems to attach a price tag to a criminal act.  And as much as we might think that charging a high price makes a consumer suffer, we don’t condemn someone for buying what we are willing to sell!

Community service, I argued, conveys similarly dissonant meanings. We don’t ordinarily condemn people who repair dilapidated low-income housing, offer free medical service for uninsured people suffering from diseases like AIDS, help to educate or otherwise enrich the lives of mentally challenged citizens, and the like; we admire them for it! 

Accordingly, when the state purports to “punish” an embezzler, or a sex offender, or a toxic waste dumper by ordering him to perform such services, the public doesn’t understand the law as sincerely condemning him. It doesn’t understand those sorts of “alternative sanctions” as “punishments” at all.

Imprisonment, in contrast, clearly expresses condemnation.  Because of the cultural significance of liberty as a symbol of the respect that the state is obliged to show for autonomous, reasoning individuals, taking it away from someone unambiguously conveys the attitude that he or she has done something that forfeits his or her entitlement to be respected by other free, reasoning people.

It just doesn’t matter whether fines and community service “deter” just as well, or make offenders “suffer” just as much, as do short terms of imprisonment. They are unacceptable substitutes for imprisonment because they don’t convey the meaning that is essential to punishment.

If that’s the problem with the conventional alternative sanctions, I argued, then the solution is to find alternative alternatives that avoid the needless destruction of liberty and social wealth associated with imprisonment but nevertheless retain the power of imprisonment to express collective moral condemnation.

That’s where I said—why not take a look at shaming sanctions?

The story goes on. But like I said, it is boring. To me anyway.

But what’s not boring—what’s quite interesting—is that something seems to have changed.

Last week Attorney General Eric Holder delivered a major address denouncing the wastefulness—the moral mindlessness—of mass incarceration of petty drug offenders and others who needn’t be incapacitated for purposes of deterrence or protection of public safety.

He announced a series of discretionary federal law-enforcement policies—including ones relating to prosecutorial charging decisions, and the posture of the federal government in the consideration of parole determinations—that are all geared to steering the law away from imprisoning nonviolent offenders and reducing the time any of them end up serving.

What’s more, he indicated his intention to promote wider reliance on “the use of diversion programs – such as drug treatment and community service initiatives – that can serve as effective alternatives to incarceration.” 

None of this is new, of course. This is the very stance and very set of policies that criminal justice experts have been pushing, and the same grounds on which they have been pushing them, for decades without success.

What is startlingly different is that there now seems to be genuine political consensus that this is what should be done.

The primary evidence of the consensus is not the rapturous applause with which Holder’s proposal has been greeted.

Rather it is the large, collective yawn.  Yes, of course.  Do that.  We have more important things to fight about—like climate change, and assault-rifle bans!

Actually, there is serious bi-partisan support by very serious, seriously informed and intelligent advocates for reform of criminal justice. David Dagan and Steven Teles write insightfully about it in their Washington Monthly article The Conservative War on Prisons.

There’s a really cool, really consequential, really interesting movement afoot, or seems to be.

And although I’m sure it’s not as simple as this, it is in some sense true that “it just happened.”  The serious, important things that made this transformation take place were very low profile. 

Many people who I’m sure recognize how fundamentally different things look now didn’t actually see the things that happened that brought it about.

Someone will say, “Oh, it was the economic crisis.”

Oh, please.  It has always been a waste of money to put people in prisons. There has always been intense competition over how limited political resources were going to be spent.  If money were what mattered, this would have happened already—decades ago.

What’s more, ending the mindlessness of needlessly incarcerating thousands upon thousands of people who needn’t be imprisoned for public safety won’t make even a tiny dimple, much less a dent, in the massive public debt! 

Federal prison expenditures made up less than 0.1% (i.e., 0.001) of the FY13 budget.

State expenditures make up larger proportions of their budgets, but in most states it is still on the order of 2-3%. Smart alternative sanctions would reduce the size of the prison population—maybe even by 50%. But that would reduce the cost of operating prisons by only a fraction of that amount, and it would necessarily involve paying the cost of the alternatives. 
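
A back-of-the-envelope calculation shows why. The 2-3% budget share is from the paragraph above; the fixed-cost fraction and the cost of the alternatives are assumptions supplied purely for illustration:

```python
# Back-of-the-envelope estimate of prison savings as a share of a state
# budget. The ~2.5% prison share tracks the figure in the post; the
# fixed-cost fraction and the alternatives cost are pure assumptions.

state_budget = 100.0                 # normalize the budget to 100 units
prison_cost = state_budget * 0.025   # ~2.5% of spending goes to prisons

population_cut = 0.50   # suppose alternatives halve the prison population
fixed_fraction = 0.50   # assumed: half of prison costs don't scale with headcount

# Only the variable portion of prison costs shrinks with the population.
gross_savings = prison_cost * (1 - fixed_fraction) * population_cut  # 0.625

alternatives_cost = 0.3  # assumed cost of running the alternative sanctions
net_savings = gross_savings - alternatives_cost

# ~0.3% of the budget: a dimple, not a dent.
print(f"net savings: {net_savings:.3f} units ({net_savings / state_budget:.2%})")
```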

Cost-savings are motivating part of the demand to reduce reliance on prisons—but only because the meaning of alternatives has changed in a way that makes the cost arguments more persuasive to people now.

BUT . . . have things really changed? It looks like it; but “bipartisan” support for reducing reliance on imprisonment actually existed in the 1970s, too, and was part of what motivated the enactment of the Federal Sentencing Guidelines (seriously; the mandatory minima were all added after enactment of the Guidelines).  It’s the public’s views that matter, and we shouldn’t confuse sensible things that politicians might be able to do during intervals when the public isn’t paying attention with changes in public opinion.  Indeed, what they do then often either gets the public’s attention or is used strategically to inflame and divide.  We’ll see.

But assuming this is real, the “meaning transformation” of alternatives to imprisonment is an important case study waiting to be constructed.

The collision between policy-relevant facts and contested social meanings is one of the most potent barriers that exists to enlightened democratic policymaking.  That’s the pathology that drives the mindless, wasteful thrashing on climate change—and many other issues.

Figuring out how to avoid instances of this pathology, I’m convinced, is the most important task for the science of science communication.

But the second most is to figure out how to treat it when it has settled in.  How can policy-relevant facts that become entangled with antagonistic cultural meanings get disentangled—so that we can be confident that democratic deliberations will be informed by valid, decision-relevant science?

If that has happened here—even if more or less by accident—then we should figure out why, both so we enlarge our knowledge of how the social world works and so we can enlarge our power to manage it through democratic means in a way that enhances our collective welfare.

Monday, August 19, 2013

Who distrusts whom about what in the climate science debate?

I had the privilege of being part of a panel discussion last Fri. at the great “Scienceonline Climate” conference in Wash. D.C. The other panel members were Tom Armstrong, Director of National Coordination for the U.S. Global Change Research Program in the Office of Science and Technology Policy; and Michael Mann, Distinguished Professor of Meteorology & Director, Earth System Science Center at Penn State University; author of the Observed Climate Variability and Change chapter of the Intergovernmental Panel on Climate Change (IPCC) Third Scientific Assessment Report in 2001; organizing committee chair for the National Academy of Sciences Frontiers of Science in 2003; and contributing scientist to the 2007 Nobel Peace Prize awarded to the IPCC. Pretty cool!

Topic was “Credibility, Trust, Goodwill, and Persuasion.”  Moderator Liz Neely (who expended most of her energy skillfully moderating the length of my answers to questions) framed the discussion around the recent blogosphere conflagration ignited by Tamsin Edwards’ column in the Guardian.

Edwards seemed to pin the blame for persistent public controversy over what’s known about climate change on climate scientists themselves, arguing that “advocacy by climate scientists has damaged trust in the science.”

Naturally, her comments provoked a barrage of counterarguments from climate scientists and others, many of whom argued that climate scientists are uniquely situated to guide public deliberations into alignment with the best available scientific evidence.

All very interesting!

But I have a different take from those on both sides. 

Indeed, the take is sufficiently removed from what both seem to assume about how scientists' position-taking influences public beliefs about climate change and other issues that I really just want to put that whole debate aside.

Instead I'll rehearse the points I tried to inject into the panel discussion (slides here).

If I can manage to get those points across, I think it won’t really be necessary, even, for me to say what I think about the contending claims about the role of “scientist advocacy” in the climate debate.  That’ll be clear enough.

Those points reduce to three:

1. Members of the public do trust scientists.

2. Members of culturally opposing groups distrust each other when they perceive their status is at risk in debates over public policy.

3. When facts become entangled in cultural status conflicts, members of opposing groups (all of whom do trust scientists) will form divergent perceptions of what scientists believe.

To make out these three points, I focused on two CCP studies, and an indisputable but tremendously important and easily ignored fact.

The first study examined “who believes what and why” about the HPV vaccine. In it we found that members of the cultural groups who are most polarized on the risks and benefits of the HPV vaccine both treat the positions of public health experts as the most decisive factor.

Members of both groups have predispositions—ones that both shape their existing beliefs and motivate them to credit and discredit evidence selectively, in patterns that amplify polarization when they are exposed to information.

But members of both groups trust public health experts to identify what sorts of treatments are best for their children. They will thus completely change their positions if a trusted public health expert is identified as the source of evidence contrary to their cultural predispositions.

Of course, members of the public tend to trust experts whose cultural values they share. Accordingly, if they are presented with multiple putative experts of opposing cultural values, then they will identify the one whom they (tacitly!) perceive to have values closest to their own as the real expert—the one who really knows what he’s talking about and can be trusted—and do what he (we used only white males in the study to avoid any confounds relating to race and gender) says.

There is only one circumstance in which these dynamics produce polarization: when members of the public form the perception that the position they are culturally predisposed to accept is being uniformly advanced by experts whose values they share and positions they are culturally predisposed to reject are being uniformly advanced by experts whose values they reject.

That was the one we got in the real world...

The second study examined “cultural cognition of scientific consensus.” In that one, we examined how individuals identify expert scientists on culturally charged issues—viz., climate change, gun control, and nuclear waste disposal.

We found that when shown a single scientist with credentials that conventionally denote expertise —a PhD from a recognized major university, a position on the faculty of such a university, and membership in the National Academy of Sciences—individuals readily identified that scientist as an “expert” on the issue in question.

But only if that scientist was depicted as endorsing the position that predominates among members of the subjects’ own cultural group. Otherwise, subjects dismissed the scientist’s views on the ground that he was not a genuine “expert” on the topic in question.

We offered the experiment as a model of how people process information about what “expert consensus” is in the real world.  When presented with information that is probative of what experts believe, people have to decide what significance to give it.  If, like the vast majority of our subjects, they credit evidence that is genuinely probative of expert opinion only when that evidence (including the position of a scientist with relevant credentials) matches the position that predominates in their cultural group, they will end up culturally polarized on what expert consensus is.

Our study found that to be the case too. On all three of the risk issues in question—climate change, nuclear waste disposal, and laws allowing citizens to carry concealed handguns—the members of our nationally representative sample all believed that “scientific consensus” was consistent with the position that predominates in their cultural group. They were all correct, too—1/3 of the time, at least if we use National Academy of Sciences expert consensus reports as our benchmark of what “expert consensus” is.

So--

These studies, I submit, support points (1)-(3). 

No group's members understand themselves to be taking positions contrary to what expert scientists advocate.  They all believe that the position that predominates in their group is consistent with the views of expert scientists on the risks in question.

In other words, they recognize that science is a source of valid knowledge that they otherwise couldn’t obtain by their own devices, and that in fact one would have to be a real idiot to say, “Screw the scientists—I know what the truth is on climate, nuclear power, gun control, HPV vaccine etc & they don’t!”

That’s the way members of the public are.  Some people aren’t like that in our society—they don’t trust what scientists say on these kinds of issues. But they are really a teeny tiny minority (ordinary members of the public on both sides of these issues would regard them as oddballs, whack jobs, wing nuts, etc).

The tiny fraction of the population who “don’t trust scientists” aren’t playing any significant role in generating public conflict on climate or any of these other issues.

The reason we have these conflicts is that positions on these issues have become symbols of membership in, and loyalty to, the groups in question.

Citizens have become convinced that people with values different from theirs are using claims about danger and risk to advance policies intended to denigrate their way of life and make them the objects of contempt and ridicule.  As a result, these debates are pervaded by the distrust that citizens of opposing values have for one another when they perceive that a policy issue is a contest over the status of contending cultural groups.

When that happens, individuals don’t stop trusting scientists.  Rather, as a result of cultural cognition and like forms of motivated reasoning, they (all of them!) unconsciously conform the evidence of “what expert scientists believe” to their stake in protecting the status of their group and their own standing within it.

That pressure, moreover, doesn’t reliably lead them to the truth.  Indeed, it makes it inevitable that individuals of diverse outlooks will all suffer because of the barrier it creates between democratic deliberations and the best available scientific evidence.

As I indicated, I also relied on a very obvious but tremendously important and easily ignored fact: that this sort of entanglement of “what scientists believe” and cultural status conflict is not normal.

It is pathological, both in the sense of being bad and being rare.

The number of consequential insights from decision-relevant science that generate cultural conflict is tiny—minuscule—relative to the number that don’t. There’s no meaningful cultural conflict over pasteurization of milk, high-power transmission lines, fluoridation of water, cancer from cell phones (yes, some people in little enclaves are arguing about this—they get news coverage precisely because the media knows viewers in most parts of the country will find the protestors exotic, like strange species in a zoo) or even the regulation of emissions from formaldehyde, etc etc etc etc.

Moreover, there’s nothing about any particular issue that makes cultural conflict about it “necessary” or “inevitable.”  Indeed, some of the ones I listed are sources of real cultural conflict in Europe; all they have to do is look over here to see that things could have been otherwise.

And all we have to do is look around to see that things could have been otherwise for some of the issues that we are culturally divided on.

The HBV vaccine—the one that immunizes children against hepatitis B—is no different in any material respect from the HPV vaccine.  Like the HPV vaccine, the HBV vaccine protects people from a sexually transmitted disease. Like the HPV vaccine, it has been identified by the CDC as appropriate for inclusion in the schedule of universal childhood vaccinations.  But unlike the HPV vaccine, there is no controversy—cultural or otherwise—surrounding the HBV vaccine. It is on the list of “mandatory” vaccinations that are a condition of school enrollment in the vast majority of states; vaccination rates are consistently above 90% (they are less than 30% in the target population for HPV)—and were so in every year (2007-2011) in which proposals to make the HPV vaccine mandatory were a matter of intense controversy throughout the U.S.

The introduction and subsequent career of the HBV vaccine have been, thankfully, free of the distrust that culturally diverse groups experience toward each other when they are trying to make sense of what the scientific evidence is on the HPV vaccine.  Accordingly, members of those groups, all of whom trust scientists, are able reliably to see what the weight of scientific opinion is on that question.

So want to fix the science communication problem?

Then for sure deal with the trust issue!

But not the nonexistent one that supposedly exists between scientists and the public. 

The real one--between opposing cultural groups locked in needless, mindless, illiberal forms of status conflict that disable the rational faculties that ordinary citizens of all cultural outlooks ordinarily and reliably use to recognize what is known to science.

Tuesday
Aug132013

So what is "the best available scientific evidence" anyway?

A thoughtful person in the comment thread emanating (and emanating & emanating & emanating) from the last post asked me a question that was interesting, difficult, and important enough that I concluded it deserved its own post.

The question

... in your initial post you mention "best available evidence" no less than six times. And you may also have reiterated the phrase in some of your comments.

Perhaps you have identified your criteria for determining what constitutes "best available evidence" elsewhere; but for the benefit of those of us who might have missed it, perhaps you would be kind enough to articulate your criteria and/or source(s) for us. 

It is a rather nebulous phrase; however, I suppose it works as a very confident, if not all encompassing, modifier.  But as far as I can see, your post doesn't tell us specifically what "evidence" you are referring to (whether "best available" or not!)

Is "best available evidence" a new, improved "reframing" of the so-called "consensus" (that is not really holding up too well, these days)? Is it simply a way of sweeping aside the validity of any acknowledgement/discussion of the uncertainties? Or is it something completely different?!

My answer:

Well, to start, I most certainly do think there is such a thing as "best available scientific evidence." Sometimes people seem to think “cultural cognition” implies that there “is no real truth” or that it is "impossible for anyone to say because it all depends on one's values" etc.  How absurd!

But I certainly don't have a set of criteria for identifying the “best available scientific evidence.” Rather I have an ability, one that is generally reliable but far from perfect, for recognizing it.  

I think that is all anyone has—all anyone possibly could have that could be of use to him or her in trying to be guided by what science knows.

For sure, I can identify a bunch of things that are part of what I'm seeing when I perceive what I believe is the best available scientific evidence.  These include, first and foremost, the origination of the scientific understanding in question in the methods of empirical observation and inference that are the signature of science's way of knowing.

But those things I'm noticing (and there are obviously many more than that) don't add up to some sort of test or algorithm. (If you think it is puzzling that one might be able reliably to recognize things w/o being able to offer up any set of necessary and sufficient conditions or criteria for identifying them, you should learn about the fascinating profession of chick sexing!)

Moreover, even the things I'm seeing are usually being glimpsed only 2nd hand.  That is, I'm "taking it on someone's word" that all of the methods used are the proper and valid ones, and have actually been carried out and carried out properly and so on. 

As I said, I don't mean to be speaking only for myself here.  Everyone is constrained to recognize the best available scientific evidence.

That everyone includes scientists, too. Nullius in verba--the Royal Society motto that translates to "take no one's word for it"--can't literally mean what it says: even Nobel Prize winners would never be able to make a contribution to their fields--their lives are too short, and their brains too small--if they insisted on "figuring out everything for themselves" before adding to what's known within their areas of specialty.

What the motto is best understood as meaning is don't take the word of anyone except those whose claim to knowledge is based on science's way of knowing--by disciplined observation and inference-- as opposed to some other, nonempirical way grounded in the authority of a particular person's or institution's privileged insight.

Amen! But even identifying those people whose knowledge reflects science's empirical way of knowing requires (and always has) a reliably trained sense of recognition!

So no definition or logical algorithm for identification -- yet I and you and everyone else all manage pretty well in recognizing the best available scientific evidence in all sorts of domains in which we must make decisions, individual and collective (even in domains in which we might be able to contribute to what is known through science).

I find this recognition faculty to be a remarkable  tribute to the rationality of our species, one that fills me with awe and with a deep, instinctive sense that I must try to respect the reason of others and their freedom to exercise it.

I understand disputes like climate change to be a consequence of conditions that disable this remarkable recognition faculty.

Chief among those is the entanglement of risks & other policy-relevant facts in antagonistic cultural meanings.

This entanglement generates persistent division, in part b/c people typically exercise their "what is known to science" recognition faculty within cultural affinity groups, whose members  they understand and trust well enough to be able to figure out who really knows what about what (and who is really just full of shit).  If those groups end up transmitting opposing accounts of what the best available scientific evidence is on a particular policy-relevant fact, those who belong to them will end up persistently divided about what expert scientists believe.

Even more important, the entanglement of facts with culturally antagonistic meanings generates division b/c people will often have a more powerful psychic stake in forming and persisting in beliefs that fit their group identities than in "getting the right answer" from science's point of view, or in aligning themselves correctly w/ what the 'best scientific evidence is.”

After all, I can’t hurt myself or anyone else by making a mistake about what the best evidence is on climate change; I don’t matter enough as consumer, voter, “big mouth” etc. to have an impact, no matter what "mistake" I make in acting on a mistaken view of what is going on.

But if I take the wrong position on the issue relative to the one that predominates in my group, I might well cost myself the trust and respect of many on whose support I depend, emotionally, materially and otherwise.

The disablement of our reason – of our ability to recognize reliably (or reasonably reliably!) what is known to science --not only makes us stupid. It makes us likely to live lives that are much less prosperous and safe. 

It also has the ugly consequence of making us suspicious of one another, and anxious that our group, our identities, are under assault, and our status being put in jeopardy by the enactment of laws that, on their face, seem to be about risk reduction, but that are regarded too as symbols of the contempt that others have for our values and ways of life.

Hence, the “pollution” of the “science communication environment” with these toxic cultural meanings deprives us of both of the major benefits of the Liberal Republic of Science: knowledge that we can use to improve our lives, individually and collectively; and the assurance that we will not, in submitting to legal obligation, be forced to acquiesce in a moral or political orthodoxy hostile to the view of the best life that we have the right as free and reasoning beings to choose for ourselves!

Well, I want to know, of course, what you think of all this.

But first, back to the questions that motivated the last post.

To answer them, I hope I've now shown you, you won't have to agree with me about what the "best available scientific evidence" is on climate change.  

Indeed, the science of science communication doesn't presuppose anything about the content of the best decision-relevant scientific evidence.  It assumes only two things: (1) that there is such a thing; and (2) that the question of how to enable its reliable apprehension by people who stand to benefit from it admits of and demands scientific inquiry. 

But here goes:

Climate skeptics (or the ones who are acting in good faith, and I fully believe that includes the vast majority of ordinary people -- 50% of them pretty much -- in our society who say they don't believe in AGW or accept that it poses significant risks to human wellbeing) believe that their position on climate change is based on the best available scientific evidence -- just as I believe mine is!

So: how do they explain why their view of the best evidence on climate science is rejected by so many of their reasonable fellow citizens?

And what do they think should be done?

Not about climate change! 

About the science communication problem--by which I mean precisely the influences that are preventing us, as free reasoning people, from converging on the best available scientific evidence on climate change and a small number of other consequential issues (nuclear power, the HPV vaccine, the lethality of cats for birds, etc)? Converging in the way that we normally do on so many other consequential issues--so many many many more that no one could ever count them!?

I hope they have answers that aren't as poor, as devoid of evidence, as the ones in the blog post I critiqued, in which a skeptic offered a facile, evidence-free account of how people form perceptions of risk--an account that turned on the very same imaginative, just-so aggregation of mechanisms that gets recycled among those trying, without the benefit (or hindrance) of empirical study, to explain why so many people don't accept scientific evidence on the sources and consequences of climate change.

I hope that they have some thoughts here, not because I am naive enough to think they -- any more than anyone on the other side -- will magically step forward and use what they know to dispel the cloud of toxic partisan confusion that is preventing us from seeing what is known here.

I hope that because I would like to think that once we get this sad matter behind us, and resume the patterns of trust and reciprocal cooperation that normally characterize the nonpathological state in which we are able to recognize the best available scientific evidence, there will be some better science-of-science-communication evidence for us all to share with each other on how to negotiate the profound and historic challenge we face in communicating what's known to science within a liberal democratic society.

 

Sunday
Aug112013

What "climate skeptics" have in common with "believers": a stubborn attraction to evidence-free, just-so stories about the formation of public risk perceptions

My aim in studying the science of science communication is to advance practical understanding of how to promote constructive public engagement with the best available evidence—not to promote public acceptance of particular conclusions about what that evidence signifies or public support for any particular set of public policies.

When I address the sources of persistent public conflict over climate change, though, it seems pretty clear to me that those with a practical interest in using the best evidence on science communication are themselves predominantly focused on dispelling what they see as a failure on the part of the public to credit valid evidence on the extent, sources, and deleterious consequences of anthropogenic global warming.

I certainly have no problem with that! On the contrary, I'm eager to help them, both because I believe their efforts will promote more enlightened policymaking on climate change and because I believe their self-conscious use of evidence-based methods of science communication will itself enlarge knowledge on how to promote constructive public engagement with decision-relevant science generally. 

Indeed, I am generally willing and eager to counsel policy advocates no matter what their aim so long as they are seeking to achieve it by enhancing reasoned public engagement with valid scientific evidence (and am decidedly disinclined, indeed adamantly unwilling, to help anyone who wants to achieve a policy outcome, no matter how much I support the same, by means that involve misrepresenting evidence, manipulating the public, or otherwise bypassing ordinary citizens' use of their own reasoning powers to make up their own minds).

One thing that puzzles me, though, is why those who are skeptical about climate change don’t seem nearly as interested in practical science communication of this sort.

Actually, it’s clear enough that climate skeptics are interested in the sort of work that I and other researchers engaged in the empirical study of science communication do. I often observe them reflecting thoughtfully about that work, and I even engage them from time to time in interesting, informative discussion of these studies.

But I don’t see skeptics grappling in the earnest—even obsessive, anxious—way that climate-change policy advocates are with the task of how to promote better public understanding.

That seems weird to me. 

After all, there is a symmetry in the position of “believers” and “skeptics” in this regard. 

They disagree about what conclusion the best scientific evidence on climate change supports, obviously. But they both have to confront that approximately 50% of the U.S. public disagrees with their position on that.

The U.S. public has been and remains deeply divided on whether climate change is occurring, why, and what the impact of this will be (over this entire period, there’s also been a recurring, cyclical interest in proclaiming, on the basis of utterly inconclusive tidbits of information, that public conflict is dissipating and being superseded by an emerging popular demand for “decisive action” in response to the climate crisis; I’m not sure what explains this strange dynamic).

The obvious consequence of such confusion is divisive, disheartening conflict, and a disturbingly high likelihood that popularly accountable policymaking institutions will as a result fail to adopt policies consistent with the best available scientific evidence.

Don’t skeptics want to do something about this?

A great many of them honestly believe that the best available evidence supports their views (I really don’t doubt this is so). So why aren’t they holding conferences dedicated to making sense of the best available evidence on public science communication and how to use that evidence to guide the public toward a state of shared understanding more consistent with it?

I often ask skeptics who comment on blog posts here this question, and feel like I am yet to get a satisfying answer.

But maybe my mystification reflects biased sampling on my part.

Maybe, despite my desire to engage constructively with anyone whose own practical aims involve promoting constructive public engagement with scientific evidence, I am still being exposed to an unrepresentative segment of the population who fit that description, one over-representing climate-change believers.

I happened across something that made me think that might be so.

It consists of a blog post from a skeptic who is trying to explain to others who share the same orientation why it is that such a large fraction of the U.S. population believes that climate change resulting from fossil fuel consumption poses serious risks to human wellbeing.

As earnest and reflective as the account was, this climate skeptic’s account deployed exactly the same facile set of just-so tropes—constructed from the same evidence-free style of selective synthesizing of decision-science mechanisms—that continue to dominate, and distort, the thinking of climate change believers when they are addressing the “science communication problem.”

Consider:

Why do people believe that global warming has already created bigger storms? Because when "experts" repeatedly tell us that global warming will wreck the Earth, we start to fit each bad storm into the disaster narrative that's already in our heads.

Also, attention-seeking media wail about increased property damage from hurricanes. . . .

Also, thanks to modern media and camera phones, we hear more about storms, and see the damage. People think Hurricane Katrina, which killed 1,800 people, was the deadliest storm ever. But the 1900 Galveston hurricane killed 10,000 people. We just didn't have so much media then.

Here they are, all the usual “culprits”: a “boundedly rational” public, whose reliance on heuristic forms of information-processing is being exploited by strategic misinformers, systematically biased by “unbalanced” media coverage and amplified by social media.

Every single element of this account—while plausible on its own—is in fact contrary to the best available evidence on public risk perception and the dynamics of science communication. 

  • Blaming the media is also pretty weak. The claim that "unbalanced" media coverage causes public controversy on climate change science is incompatible with cross-cultural evidence, which shows that US coverage is no different from coverage in other nations in which the public isn't polarized (e.g., Sweden). Indeed, the "media misinformation" claim has causation upside down, as  Kevin Arceneaux’s recent post helps to show. The media covers competing claims about the evidence because climate change is entangled in culturally antagonistic meanings, which in turn create persistent public demand for information on the nature of the conflict and for evidence that the readers who hold the relevant cultural identities can use to satisfy their interest in persisting in beliefs consistent with their identities. 
  • The “internet echo chamber” hypothesis is similarly devoid of evidence. There are plenty of evidence-based sources that address and dispel the general claim that the internet reinforces partisan exposure to and processing of evidence (sources that apparently can’t penetrate the internet echo chamber, which continues to propagate the echo-chamber claim despite the absence of evidence).

But here's one really simple way to tell that the blog writer's explanation of why people are overestimating the risks of climate change is patent B.S.: it is constructed out of exactly the same mechanisms that so many theorists on the other side of the debate imaginatively combine to explain why people are underestimating exactly the same risks. 

This is the tell-tale signature of a just-so story: it can explain anything one sees and its opposite equally well!

So what to say?

Well, it turns out that despite their disagreement about what the best scientific evidence on climate change signifies--about what the facts are, and about what policy responses are appropriately responsive to them—advocates in the “believer” and “skeptic” camps have some important common science communication interests.

They both have an interest in understanding it and using it, as I indicated at the outset.

But beyond that, they both have a stake in freeing themselves from the temptation to be regaled by story tellers, who, despite the abundance of evidence that now exists, remain committed to perpetually recycling empirically discredited just-so stories rather than making use of and extending the best available evidence on what the science communication problem consists in and how to fix it.

Thursday
Aug082013

Partisan Media Are Not Destroying America

At the risk of creating an expectation for edification that we'll never again approach satisfying, CCP Blog again brings you an exclusive guest post by a foremost scholarly expert on an issue that everyone everywhere is astonishingly confused about! The expert is political scientist Kevin Arceneaux of Temple University. The issue is whether partisan cable news and related media outlets are driving conflict over climate change and other divisive issues by misinforming credulous members of the public and otherwise fanning the flames of political polarization. I've questioned this widely held view myself (see, e.g., here & here).  But no one listens to me, of course.  Well now Arceneaux--employing the novel strategy of actually bringing evidence derived from valid empirical methods to bear--will straighten everything out once and for all. His post furnishes a preview--again, exclusively for the 14 billion readers of the CCP Blog!--of his soon-to-be-published book, Changing Minds or Changing Channels (Univ. Chicago Press 2013), co-authored with Martin Johnson. (Psssst ... you can actually download a couple of chapters in draft right now for free! Don't tell anybody!)

Kevin Arceneaux:

There is little doubt that the American legislative process has become more partisan and polarized. But is the same true for the mass public? For the most part, it seems that most Americans remain middle of the road. Rather than becoming more polarized, people mostly seem to have brought their policy positions in line with their partisan identification.

Despite the empirical evidence, many—especially pundits—cannot shake the notion that Americans are becoming more politically extreme and divided. Not only do many in the chattering class take mass polarization as a self-evident fact, the culprit is equally self-evident: the partisan news media.

On some level, I understand why this is such a popular conclusion. If political elites are so polarized, and clearly they are, it only seems intuitive that the same must be true for the mass citizenry. What’s more, people tend to overestimate the effects of media content on others, and what is the mass public if not masses of other people?

Nonetheless, in our soon-to-be published book Changing Minds or Changing Channels, Martin Johnson and I challenge the conventional wisdom that Fox News and MSNBC are responsible for polarizing the country.

We must keep in mind that in spite of their visibility to people like us who are politically engaged, relatively few people tune into shows like The O’Reilly Factor or The Rachel Maddow Show. For instance, voter turnout in the 2012 presidential election was roughly 12 times the size of the top-rated partisan talk show audiences on Fox News and MSNBC.

More important, people choose whether to join partisan news audiences. The type of person who gravitates to partisan news shows is more politically and ideologically motivated than those who choose to watch mainstream news or tune out the news altogether, partisan or otherwise. People are not passive or particularly open-minded when it comes to political controversies. Not only do they choose what to watch on television, but they also choose whether to accept or reject the messages they receive from the television shows they watch.

In short, two forces simultaneously limit and blunt the effects of partisan news media. First, partisan news shows cannot polarize—in a direct sense—the multitude of Americans who do not tune into these shows. Second, the sort of people who actively choose to watch partisan news are precisely the sort of people who already possess strong opinions on politics and precisely the sort of people who should be less swayed by the content they view on these shows.

Wait—you may be thinking—don’t studies conclusively show that Fox News viewers know less about foreign events and express more conservative opinions on important policy issues like climate change?

The fact that people select into partisan news audiences also makes it difficult to study the effects of these shows. If people tune into Fox News because they care more about domestic political debates than foreign events or because they have conservative views, we would expect them to know less about foreign policy and distrust climate scientists even if Fox News did not exist.

What these studies do not and cannot tell us is the “counterfactual”:  What would Fox News viewers know and believe about politics if we lived in a world without Fox News?

The counterfactual is, of course, unknowable, and the central goal of causal inference is finding a way to estimate it. It turns out that observational designs do a terrible job at this.

Consequently, Martin and I turned to randomized experiments to investigate the effects of partisan media. By randomly assigning subjects to treatment and control groups, we are able to simulate the counterfactual by creating equivalent groups that experience different states of the world (e.g., one in which they watch Fox News and one in which they do not).

Using randomized experiments to study media effects has a long and successful history.

However, without modifications, the standard experimental design that assigns one group to a control condition (e.g., no partisan news) and another group to a treatment condition (e.g., partisan news) would not help us understand how selectivity—these choices we know viewers are making—influences the effects of partisan news shows. Forced exposure experiments (as we call them) allow one to estimate the effects of media content under the assumption that everyone is exposed to it. The current media environment, rife with abundant choice, makes it impossible for anyone to assume even a majority of viewers are exposed to a given type of program, let alone everyone.

So, we modified the forced exposure experiment in two ways, which I'll describe in turn.

The first modification involved creating a research design we call the Selective Exposure Experiment to compare a world where people had to watch partisan news to one that more closely approximates the one in which we live, where people can choose to watch entertainment programming instead. This experimental design starts with the forced exposure experimental design as its foundation. We randomly assigned some people to watch partisan news and some people to a control group where they could only watch an entertainment show.

These conditions allow us to estimate the effects of partisan news if people had no choice but to watch it. To get at the effects of selectivity, we randomly assigned a final group of subjects to a condition where they could watch any of the programs in the forced exposure conditions at will. We gave these subjects a remote control and allowed them to explore the partisan news programs and entertainment shows just as they would at home. They were free to watch all of a show, none of it, or flip back and forth among shows if that’s what they wanted to do.
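For readers who want the logic of the design spelled out, here is a minimal simulation sketch in Python. The treatment effect and the share of free-choice subjects who pick news are invented numbers for illustration; this is not the authors' data or analysis code:

```python
# Minimal simulation of the Selective Exposure Experiment logic described
# above. All effect sizes and viewing probabilities are invented.
import random

random.seed(42)

TREATMENT_EFFECT = 1.0   # hypothetical attitude shift from watching partisan news
P_CHOOSE_NEWS = 0.25     # hypothetical share of free-choice subjects who pick news

def outcome(watched_partisan_news: bool) -> float:
    """Post-viewing attitude extremity (arbitrary units)."""
    baseline = random.gauss(0, 1)
    return baseline + (TREATMENT_EFFECT if watched_partisan_news else 0.0)

def run_condition(n: int, condition: str) -> float:
    """Mean attitude extremity for n randomly assigned subjects."""
    results = []
    for _ in range(n):
        if condition == "forced_news":
            watched = True
        elif condition == "entertainment":
            watched = False
        else:  # "choice": subjects pick for themselves
            watched = random.random() < P_CHOOSE_NEWS
        results.append(outcome(watched))
    return sum(results) / len(results)

n = 10_000
forced = run_condition(n, "forced_news")
control = run_condition(n, "entertainment")
choice = run_condition(n, "choice")

print(f"forced-exposure effect: {forced - control:.2f}")   # ~1.0
print(f"choice-condition effect: {choice - control:.2f}")  # ~0.25
# Random assignment makes the groups comparable, so each difference in means
# estimates a counterfactual contrast; letting subjects choose dilutes the
# average effect toward the share who actually watch.
```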

The Selective Exposure Experiments taught us that the presence of choice blunted the effects of partisan news shows. To take one example from the book, we conducted an experiment in which some people watched a likeminded, or proattitudinal, news program (e.g., a conservative watching Fox) about the health care debate back in 2010; others watched an oppositional, or counterattitudinal, news program (e.g., a liberal watching Fox) on the same topic; others watched basic cable entertainment fare, devoid of politics; and finally, a group of subjects were allowed to choose among these shows freely.

The figure below summarizes the results from this Selective Exposure Experiment. The bars represent how polarized liberals and conservatives are after completing the viewing condition.

Across a number of aspects of the health care debate—how people rate the major political parties’ ability to deal with the issue, the personal impact of the policy, and the wisdom of the public option, individual mandate, and plan to raise taxes on the wealthy—forced exposure to both pro- and counterattitudinal shows increased polarization. So, it is clear that partisan shows can polarize.

However, subjects in the choice condition were much less polarized. Keep in mind that subjects in the choice condition only had four options from which to choose. Had we given subjects over 100 channels to choose from, as is commonplace in most households today, we can only imagine that these effects would have been even smaller.

Figure 4.2 in Arceneaux and Johnson (2013)

Next, we wished to sort out why we observed smaller effects in the choice condition. Undoubtedly, part of the explanation has to be that with fewer people watching, one should observe smaller overall effects. Recall, though, that we also anticipate that those who seek out partisan news—news-seekers as Markus Prior calls them—should be less susceptible to partisan news effects.

It was to investigate this hypothesis that we devised our second modification of  the standard forced-exposure experiment. 

In a design we call the Participant Preference Experiment, we measured people’s viewing preferences before randomly assigning them to view a proattitudinal, counterattitudinal, or entertainment show. Measuring viewing preferences before exposure to the stimuli allows us to gauge whether news-seekers react differently to partisan news than entertainment-seekers.
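A hedged sketch may again help: the simulation below illustrates how measuring preferences before random assignment lets one estimate separate treatment effects for news-seekers and entertainment-seekers. The susceptibility numbers are invented for illustration, not taken from the book:

```python
# Sketch of the Participant Preference Experiment logic: measure viewing
# preference first, then randomize, then estimate effects within each
# preference group. Numbers are invented for illustration only.
import random

random.seed(7)

def simulate_subject(news_seeker: bool, condition: str) -> float:
    """Hypothetical attitude shift; news-seekers resist partisan messages."""
    susceptibility = 0.2 if news_seeker else 1.0
    effect = susceptibility if condition == "partisan_news" else 0.0
    return random.gauss(effect, 1.0)

def estimate_effect(news_seeker: bool, n: int = 10_000) -> float:
    treated = [simulate_subject(news_seeker, "partisan_news") for _ in range(n)]
    control = [simulate_subject(news_seeker, "entertainment") for _ in range(n)]
    return sum(treated) / n - sum(control) / n

print(f"effect on entertainment-seekers: {estimate_effect(False):.2f}")  # ~1.0
print(f"effect on news-seekers:          {estimate_effect(True):.2f}")   # ~0.2
# Because preference is measured before random assignment, the comparison
# within each preference group is still experimental, not self-selected.
```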

The figure below shows the results from one of these experiments. The news programs in these experiments focused on the controversy around raising taxes on the top income earners. Across a number of issue questions on the topic, we find that partisan news shows do more to polarize entertainment-seekers forced to watch them than they do news-seekers who often watch these shows.

Figure 4.4 in Arceneaux and Johnson (2013)

Note that the proattitudinal program had almost no effect on news-seekers, while the counterattitudinal show did. If people tend to gravitate toward likeminded news programming and entertainment seekers tend to tune out news, then these findings suggest that the direct effects of partisan news should be minimal.

As an aside, notice that the counterattitudinal news programming across all of these studies, if anything, polarizes those who are forced to watch it. Not only is this finding consistent with our thesis that people are not passive, blank slates (they can reject messages with which they disagree!), but it also undermines the Pollyanna notion that if people would just listen to the other side, the country would be a more tolerant and moderate place.

Finally, let me be clear that Martin and I are not arguing that partisan news shows have no effects. For one, they seem to lead many people to perceive that the country is more polarized, even if it isn’t. For another, they may have indirect effects on politics by energizing viewers (if not changing their minds) to contact their elected officials and vocalize their extreme opinions. Fox and MSNBC may indeed be polarizing forces in politics, but it is unlikely that they are causing masses of people to be more and more extreme.

Wednesday
Aug072013

More on disgust: Both liberals and conservatives *feel* it, but what contribution is it really making to their moral appraisals?

It’s been far far too long-- over a week!-- since we discussed disgust and its relationship to political ideology.  Part of the reason is that after the guest post by Yoel Inbar, the prospects for finding someone who could actually say anything that would enlarge the knowledge of this site's 14 billion regular readers (NOTE: JOKE; DO NOT CIRCULATE OR ATTRIBUTE “14 billion" FIGURE) seemed extremely remote.  But we did it! Today, yet another sterling guest post on this topic from Dr. Sophie Russell, a psychologist at the University of Surrey. 

Russell has published a number of extremely important studies on the contribution that emotions make to moral judgment. She also is the co-author—along with Roger Giner-Sorolla, another leading moral psychologist who has collaborated with Russell in the study of disgust—of an important review paper that concludes that disgust is a highly unreliable source of moral guidance generally and a source of moral perception distinctively inimical to the values of a “liberal society” because it ignores “factors . . . such as intentionality, harm, and justifiability.” That paper figured in the interesting discussion of Inbar’s essay.  Now she offers her own views: 

Sophie Russell:

So, is disgust reserved for conservatives? My answer to this question is no.  Rather, liberals and conservatives may show differences in how their feelings of disgust relate to their moral judgements.

People feel disgust toward many different acts (such as incest, sexual fetishes, eating lab-grown meat, etc.), but this does not necessarily mean that they think those acts are morally wrong too.

I think what we should be asking ourselves is how easily can individuals separate their feelings of disgust from judgements of wrongdoing.

One thing that is clear from some of our research is that disgust has a different relationship with moral judgement than anger, in terms of how intertwined they are.  For example, we have found that after individuals consider the current context they change their feelings of anger but not their feelings of disgust toward harmful acts and bodily norm violations, and changes in anger relate to changes in moral judgement (Russell & Giner-Sorolla, 2011).

In another line of research we have also found that feelings of anger are associated with the ability to come up with mitigating circumstances for immoral acts, but disgust is unrelated to whether or not people can imagine mitigating circumstances (Piazza, Russell, & Sousa, 2012). The story from both lines of research is that in general people can disentangle their feelings of disgust from judgements of wrongness, while this is not the case with anger.  It seems as if their feelings of disgust remain.  So, should we care if someone finds something disgusting? I think we should still be concerned, because disgust is a withdrawal emotion: people will still want to avoid the person or thing they find disgusting; they just may not have the moral conviction that others need to agree with them.

Our findings follow on from a long laundry list of appraisals that work to make sure that anger is properly directed: Is the behaviour justified? Is the behaviour intentional? Is the behaviour harmful? Is the behaviour unfair? (See Russell & Giner-Sorolla, 2013 for a review.) It is less clear how we assess whether something is disgusting depending on the current context; that is, what is the essence or concept that makes something disgusting in a given context. It seems as if judgements of disgust are tied to the specific person or object whilst anger is associated with more abstract appraisals of the current situation.

Supporting this distinction through the analysis of post-hoc justifications, we have found that people find it very hard to articulate why they think non-normative sexual acts are disgusting (Russell & Giner-Sorolla, 2011).

I think this effect will be the same for both conservatives and liberals because essentially this phrase ‘X is disgusting’ serves a very strong communicative function and we are not pushed/motivated to explain what we mean.  For this reason we may use this phrase towards things that are not literally evoking the disgust emotion, in order to signal that we want to break off all ties from this thing.

Both conservatives and liberals use this phrase frequently because of its potency, but this phrase does not necessarily mean that they actually feel physical revulsion.

I think another difference between anger and disgust that can cause a divide between conservatives and liberals is that anger is mainly relevant when there is a clear victim while disgust is relevant to “victimless” acts between consenting individuals (Piazza & Russell, in preparation).  

For example, in this research we looked at the impact of individuals giving consent to a range of sexual behaviours, such as necrophilia, incest, and sexual relations with a transgender individual. We found that people feel significantly more anger toward a wrongdoer when consent is absent versus present, and this relationship is mediated by justice appraisals.

On the other hand, individuals feel significantly more disgust when the recipient of wrongdoing consents to the action versus not; thus, we feel disgust towards both people who consented to the act. This relationship is mediated by judgments of perverse character, which supports the view that disgust is based on judgments of the person or object, rather than the outcome or situation.  Thus, it seems as if anger is the more relevant emotion when there is a clear victim.

So, my conclusion is that for both liberals and conservatives, disgust is focused on the person while anger is focused on the circumstances and consequences, which is problematic if we want people to consider changes across time, context, and relationships.

On a separate note, something that is also interesting to me, and that I would like to leave with you, is that when I include things like political orientation or disgust sensitivity as moderators in studies I conduct in the UK, I find that they have very little to no influence on the effects that I find. However, if I include them whilst collecting an American MTurk sample, they gain importance. So, I am really interested to know what you think about this.

References

Piazza, J., Russell, P.S. & Sousa, P. Moral emotions and the envisaging of mitigating circumstances for wrongdoing. Cognition & Emotion 27, 707-722 (2012).

Tuesday
Aug062013

Homework assignment: what's the relationship between science literacy & persistent political conflict over decision-relevant science?

I've agreed to do a talk at the annual American Geophysical Union meeting in December. It will be part of a collection on "climate science literacy."

Here's the synopsis I submitted:

The value of civic science literacy

The persistence of public conflict over climate change is commonly understood to be evidence of the cost democracy bears as a result of the failure of citizens to recognize the best available decision-relevant science. This conclusion is true; what’s not is the usual understanding of cause and effect that accompanies this perspective. Ordinarily, the inability of citizens to comprehend decision-relevant science is identified as the source of persistent political conflict over climate change (along with myriad other issues that feature disputed facts that admit of scientific investigation). The truth, however, is that it is the persistence of public conflict that disables citizens from recognizing and making effective use of decision-relevant science. As a result, efforts to promote civic science literacy can’t be expected to dissipate such conflict. Instead, the root, cultural and psychological sources of such conflict must themselves be extinguished (with the use of tools and strategies themselves identified through valid scientific inquiry) so that our democracy can realize the value of educators' considerable skills in making citizens science literate. 

I have ideas along these lines -- ones that have figured in various papers I've written, informed various studies I've worked on, and appeared in one or another blog posts on this site.

But I haven't come close to working all this out.  

What's more, I worry (as always) that I could be completely wrong about everything.

So I welcome reflections by others on the basic claim expressed here-- reflections on how to convey it effectively; on what to do about the practical problem it reflects; but also on how to continue to probe and test to see whether it is true and to help identify any alternative account that's even more well founded and that furnishes an even more useful guide to action.

So get going-- don't put this off until the day before the talk & pull an all-nighter! 

 

Monday
Aug052013

Can we SENCERize the communication of science?

I had the tremendous privilege—which yielded an even larger benefit in enlargement of personal knowledge—of being able to participate in the SENCER summer institute at Santa Clara University last week.

SENCER—which stands for Science Education for New Civic Engagements and Responsibilities—is an integrated set of practical research initiatives aimed at promoting the development and use of scientific knowledge on how to teach science.  It is actually one of a family of programs created to carry out the broader mission of the National Center for Science and Civic Engagement, “to inspire, support, and disseminate campus-based science education reform strategies that strengthen learning and build civic accountability among students in colleges and universities.”

It’s not amusing that those whose job it is to impart knowledge on empirical methods so infrequently even ask themselves whether their own methods for doing so—from the mode of teaching they use in the classroom to the materials and exercises they assign to students to the examinations they administer to test student comprehension—are valid and reliable.

On the contrary, it’s an outright scandal that demeans the culture of science.

SENCER comprises a sprawling, relentless, and expanding array of resources aimed at dissolving this embarrassing contradiction. These include a growing stockpile of empirical research findings; a trove of practical materials designed to enable use of this knowledge to improve science education; the sponsorship of regular events at which such knowledge is shared and plans for enlarging it formulated; a set of regional centers that coordinate efforts to promote evidence-based methods in the teaching of science; and most important of all a critical mass of intelligent and passionate people committed to the program’s ends.

The occasion for SENCER—the peculiar insularity, from empirical evidence relating to the realization of its own goals, of a craft dedicated to propagating valid empirical methods—is not unique to science education.

It is at the root, too, of what I have called the science communication problem—the failure of ample, compelling, readily accessible and indeed widely disseminated evidence to quiet persistent public controversy over risks and other facts to which that evidence directly speaks. Climate change is, of course, the most conspicuous example of the science communication problem but it is hardly the only consequential instance of it.

Immense resources are being dedicated to solving this problem and appropriately so.

But the aggressive resistance to evidence-based practice that pervades the climate-change advocacy community and their counterparts on other issues means that the vast majority of these resources are simply wasted. 

I’m not kidding: hundreds and hundreds of millions of dollars are foreseeably expended on programs that are certain not to have any positive impact (aside from raising the profile of those who operate the programs)—not so much because the initiatives being sponsored are ill-considered (although many indisputably are!) but because those who are being awarded the money to carry them out aren’t genuinely committed to (or maybe just not genuinely capable of) considering empirical evidence. 

They don’t meaningfully engage existing evidence on communication dynamics to determine what psychological and political mechanisms their initiatives presuppose and what is known about those mechanisms.

They don’t carry out their initiatives in a manner that is geared to generating what might be called programmatic evidence in the form of pretest results or early-return data that can be used to refine and calibrate communication efforts as they are unfolding.

And worst of all, they lack any protocols that assure information on the impact of their efforts (including the lack thereof) is collected, preserved, and freely distributed in the manner that enables the progressive accretion of knowledge.

Instead, every surmise from every source—no matter how innocent of the conclusions of those who have previously used scientific methods to test theirs—is created equal in the world of science communication advocacy. 

Everyday is a new day, to be experienced free of the burden to take seriously what was learned (from failure as well as success) the day before.

I have written a paper about this.

So has Amy Luers, in a perceptive, evidence-informed article in Climatic Change that was addressed specifically to the foundations that are the primary sources of support for efforts to promote constructive engagement with climate science.

Her article is evidence of a heartening awareness that the evidence-free culture that has characterized science communication in this area of public policy and others is barren of the supportive practices and habits and outlooks that nourish growth of empirical knowledge.

Maybe things will change.

But there are still other science-communication professions that are puzzlingly—unacceptably, intolerably!—innocent of science in their own operations.

Science journalism—including (here) popular science writing and science documentary production as well as science news writing—is one. 

I have said before that I regard these professionals with awe—and gratitude, too.  Much as the bumblebee defies the calculations of physicists who insist that its capacity for flight defies physical laws, so science journalists seem to defy basic mechanisms of psychology by creating a form of commensurability in understanding that enables the curious nonscientist to participate in—and thus experience the wonder of—what scientists, by applying their highly specialized knowledge, discover about the mysteries of nature.

There is no communication alchemy involved here. Using a form of professional judgment exquisitely tuned by experience, the science journalist mines the fields of common cultural understanding for the resources needed to construct this remarkably engineered bridge of insight.

Yet how to do what they do is a matter that constantly confronts the members of this special profession with factual questions that they themselves do not have confident answers to—or have confident but conflicting opinions about.

Do norms of journalistic neutrality—such as “balanced” coverage of science issues that generate controversy, within science or without—distort public understanding or help inform curious individuals of the nature of competing claims?

Is the segment of the population that experiences wonder and awe at scientific discovery more culturally diverse than the current regular audience for the highest quality science documentaries? If so, do those programs convey meanings collateral to their core, scientific content that constrain the size and diversity of their audience?

(These are issues that figured, actually, in two of the sessions of my Science of Science Communication course from last spring; I am delinquent in my promise to report on the nature of those sessions.)

These are empirical questions, ones whose answers would be better if journalists had evidence generated specifically to inform the ongoing collective discussion and practice that are the source of their craft knowledge.  But instead, we see here, too, the sort of “every-conjecture-created-equal,” “every-day-a-new-day” style of engagement that is the signature of evidence-free, nonscientific thought that by its nature is incapable of creating incremental enlargement of knowledge.

I could go on; not just about science journalism, but about many other science-communication professions that are evidence-free about the nature of their own practices. Like the law, e.g.

But the point is that these professions, too, are ripe for SENCERizing.  They need to be fortified with the sorts of resources and programs that SENCER comprises.  And to get that fortification they require a core of practitioners who not only agree with this philosophy—I think they all already have them, actually—but also structures of collective action that will, through the dynamics of reciprocity, create the self-reinforcing contributions of those practitioners to those resources and programs.

SENCER itself might well be a vehicle for such developments.  Its gracious invitation to me to participate in its summer institute reflects the interest of its members in enlarging the scope of their endeavor to the communication of decision-relevant science.

But it would be a mistake to think that SENCERizing science communication generally means relying on SENCER, or SENCER alone, to facilitate the advent of evidence-based practices within the relevant science-communication professions.

The remarkable founder of SENCER, Wm. David Burns, made this clear to me, in fact.

I asked him if he himself regarded the program as an “engine for” or a “model of” what needs to be done to make science education and science communication generally more evidence based.

He answered that the only appropriate way to think of SENCER is as an “experiment” of a fractal nature: by enabling those who believe science education must be evidence based to continuously form, refine, and test competing conjectures about how to build on and refine their knowledge of how to effectively impart scientific knowledge, SENCER itself is a test of a hypothesis that the particular mode of organization that it is and will become in such a process is an effective way to achieve its own ends.

SENCER, then, is surely a model (an iterative, self-updating one at that!) of the style of conjecture and refutation that is the engine that drives scientific discovery.

And such a model is necessarily one that cannot be reduced to a particular form or formula. For the very logic on which its own success is founded consists in the continuous engagement of competing models, whose successive remedies for one another's inevitable imperfections are what continuously make us smarter than we were before.

Sunday
Aug042013

Weekend update: Yale professor does *what*, you say?

Maybe @Paul Mathews has a point after all, but I think the commenter who offered to sell the Brooklyn Bridge to the author of this blog post has the better of the argument (you might have thought the title of my post would have given him a clue as well).

Needless to say, I am a tad anxious about Preet Bharara getting wind of all this...

Wednesday
Jul312013

Motivated system 2 reasoning--experimental evidence & its significance for explaining political polarization

My paper Ideology, Motivated Reasoning, and Cognitive Reflection was published today in the journal Judgment and Decision Making.

I’ve blogged on the study that is the focus of the paper before.  In those posts, I focused on the relationship of the study to the “asymmetry thesis,” the view that ideologically motivated reasoning is distinctive of (or at least disproportionately associated with) conservativism.

The study does, I believe, shed light on (by ripping a fairly decent-sized hole in) the asymmetry thesis. But the actual motivation for and significance of the study lie elsewhere.

The cultural cognition thesis (CCT) holds that individuals can be expected to form risk perceptions that reflect and reinforce their connection to groups whose members subscribe to shared understandings of the best life and the ideal society.

It is opposed to various other accounts of public controversy over societal risks, the most significant of which, in my view, is the bounded rationality thesis (BRT).

Associated most prominently with Kahneman’s account of dual process reasoning, BRT attributes persistent conflict over climate change, nuclear power, gun control, the HPV vaccine, etc. to the public’s over-reliance on rapid, visceral, affect-laden, heuristic reasoning—“System 1” in Kahneman’s terms—as opposed to more deliberate, conscious, analytical reasoning— “System 2,” which is the kind of thinking, BRT theorists assert, that characterizes the risk assessments of scientists and other experts.

BRT is quite plausible—indeed, every bit as plausible, I’m happy to admit—as CCT. Nearly all interesting problems in social life admit of multiple plausible but inconsistent explanations.  Likely that’s what makes them interesting.  It’s also what makes empirical testing—as opposed to story-telling—the only valid way to figure out why such problems exist and how to solve them.

In my view, every Cultural Cognition Project study is a contribution to the testing of CCT and BRT.  Every one of them seeks to generate empirical observations from which valid inferences can be drawn that give us more reason than we otherwise would have had to view either CCT or BRT as more likely to be true.

In one such study, CCP researchers examined the relationship between perceptions of climate change risk, on the one hand, and science literacy and numeracy, on the other. If the reason that the public is confused (that’s one way to characterize polarization) about climate change and other risk issues (we examined nuclear power risk perceptions in this study too) is that it doesn’t know what scientists know or think the way scientists think, then one would expect convergence in risk perceptions among those members of the public who are highest in science literacy and technical reasoning ability.

The study didn’t find that.  On the contrary, it found that members of the public highest in science literacy and numeracy are the most divided on climate change risks (nuclear power ones too).

That’s contrary to what BRT would predict, particularly insofar as numeracy is a very powerful indicator of the disposition to use “slow” System 2 reasoning.

That science literacy and numeracy magnify rather than dissipate polarization is strongly supportive of CCT.  If people are unconsciously motivated to fit their perceptions of risk and comparable facts to their group commitments, then those who enjoy highly developed reasoning capacities and dispositions can be expected to use those abilities to achieve that end.

In effect, by opportunistically engaging in System 2 reasoning, they’ll do an even “better” job at forming culturally congruent perceptions of risk.
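For readers who prefer to see the logic of that claim in code, here is a minimal simulation—entirely made-up numbers, not the CCP data or analysis—of what “polarization that grows with reasoning proficiency” looks like statistically: an ideology × numeracy interaction in a simple regression. All variable names and coefficients here are hypothetical.

```python
# Purely illustrative simulation -- fabricated numbers, not the CCP study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Hypothetical variables: ideology coded -1 (left) to +1 (right);
# numeracy standardized; perceived risk on an arbitrary scale.
ideology = rng.choice([-1.0, 1.0], size=n)
numeracy = rng.normal(0.0, 1.0, size=n)

# Assumed data-generating process: the ideological gap in perceived risk
# *widens* as numeracy rises -- an ideology x numeracy interaction.
risk = 5.0 - 1.0 * ideology - 0.8 * ideology * numeracy + rng.normal(0.0, 1.0, size=n)

df = pd.DataFrame({"risk": risk, "ideology": ideology, "numeracy": numeracy})
fit = smf.ols("risk ~ ideology * numeracy", data=df).fit()
print(fit.params)  # the ideology:numeracy coefficient carries the story
```

The point of the sketch is just that findings of this kind live in the interaction term, not in any main effect of numeracy.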

Now enter Ideology, Motivated Reasoning, and Cognitive Reflection. The study featured in that paper was aimed at further probing and testing of that interpretation of the results of the earlier CCP study on science literacy/numeracy and climate change polarization.

The Ideology, Motivated Reasoning, and Cognitive Reflection study was in the nature of an experimental follow-up aimed at testing the hypothesis that individuals of diverse cultural predispositions will use their “System 2” reasoning dispositions opportunistically to form culturally congenial beliefs and avoid forming culturally dissonant ones.

The experiment reported in the paper corroborates that hypothesis.  That is, it shows that individuals who are disposed to use “System 2” reasoning—measured in this study by use of the Cognitive Reflection Test, another performance based measure of the disposition to use deliberate, conscious (“slow”) as opposed to heuristic-driven (“fast”) reasoning—exhibit greater motivated reasoning with respect to evidence that either affirms or challenges their ideological predispositions.

The evidence on which subjects demonstrated motivated reasoning concerned how “closed-minded” and “unreflective” individuals of opposing ideologies are.

“Closed-mindedness” is a very undesirable trait generally.

It’s also what those on each side of politically polarized debates like the one over climate change identify as the explanation for the other’s refusal to accept what each side sees as the clear empirical evidence in favor of its own position.

One might thus expect individuals who have a stake in forming perceptions of facts congenial to their cultural commitments to react in a defensive way to evidence that those who share their commitments are less “open-minded” and “reflective” than those who harbor opposing commitments.

So I tested that.  I advised subjects that psychological evidence suggests that the Cognitive Reflection Test measures “open-mindedness” (some psychologists take that position; I actually think they are wrong—as I’ll explain in a moment!).  Members of a control group were told no more than this.  But subjects in two other groups were told either that climate change “skeptics” score higher than climate change “believers” or vice versa.

I found that subjects displayed motivated reasoning with respect to the evidence of the “validity” of the Cognitive Reflection Test as a measure of “open mindedness.” That is, they credited the evidence that the CRT is a “valid” test of “open-mindedness” and “reflection” much more readily when advised that individuals who hold the climate-change position consistent with their own ideologies scored higher than individuals with the opposing position, but rejected that evidence when informed that those same individuals scored lower.

Moreover, this tendency was highest among individuals with the highest Cognitive Reflection Test scores.
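To make the design concrete, here is a toy re-creation of the experiment’s logic—with fabricated numbers and invented effect sizes, none of it the study’s actual data. The “conditions” mirror the ones described above; everything else is an assumption for illustration.

```python
# Toy re-creation of the experimental logic; all quantities are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 3000

ideology = rng.choice(["left", "right"], size=n)
crt = rng.integers(0, 4, size=n)  # 0-3 CRT items answered correctly
condition = rng.choice(["skeptics_higher", "believers_higher"], size=n)

# "Congenial" evidence: the CRT-validity claim flatters one's own side.
congenial = ((ideology == "right") & (condition == "skeptics_higher")) | \
            ((ideology == "left") & (condition == "believers_higher"))

# Assumed data-generating process: willingness to credit the evidence
# rises with congeniality, and that boost grows with CRT score.
credit = 0.3 + 0.1 * congenial + 0.08 * congenial * crt + rng.normal(0.0, 0.1, size=n)

df = pd.DataFrame({"crt": crt, "congenial": congenial, "credit": credit})
means = df.groupby(["crt", "congenial"])["credit"].mean().unstack()
print(means[True] - means[False])  # the congeniality gap widens as CRT rises
```

The printed gap—congenial minus uncongenial crediting at each CRT level—is the pattern the paper reports: more reflection, more motivated reasoning.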

That finding is highly inconsistent with BRT, which assumes that a deficit in System 2 reasoning capacities explains the failure of the members of the public to converge on conclusions supported by the best available decision-relevant science.

But it is very much consistent with CCT, which predicts that individuals will use their System 2 reasoning capacities strategically and opportunistically to reinforce beliefs that their cultural group’s positions on such issues reflect the best available evidence and that opposing groups’ positions do not.

It's consistent, too, with a growing collection of findings in political psychology.  This research shows not only that ideologically motivated reasoning drives political polarization (generating perverse effects, e.g., like hardening of commitment to mistaken beliefs when "fact checkers" try to correct false claims), but also that this effect intensifies as individuals become more sophisticated about politics.

Some could have attributed this effect to a convergence between political knowledge and intensity of partisanship.  But the result in my study makes it more plausible to see the magnification of polarization associated with political knowledge as reflecting the tendency of people who simply have a better comprehension of matters political to use their knowledge in an opportunistic way so as to maintain congruence between their beliefs and their ideological identities. (I've addressed before how "cultural cognition" relates to the concept of ideologically motivated reasoning generally, and will even say a bit more on that below.)

As for the asymmetry thesis, the study also found, as predicted, that this tendency was symmetric with respect to right-left ideology.  That’s not what scholars who rely on the “neo-authoritarian personality” literature—which rests on correlations between conservativism and various self-report measures of “open-mindedness”—would likely have expected to see here.

Interestingly, I also found that there is no meaningful correlation between cognitive reflection and conservativism.

The Cognitive Reflection Test is considered a “performance” or “behavioral” based “corroborator” of the self-report tests (like “Need for Cognition,” which involves agreement or disagreement with statements like “I usually end up deliberating about issues even when they do not affect  me personally” and "thinking is not my idea of fun") that are the basis of the neo-authoritarian-personality literature on which “asymmetry thesis” rests.

It has also been featured in numerous studies that show that religiosity, which is indeed negatively correlated with cognitive reflection, predicts greater resistance to engaging evidence that challenges pre-existing beliefs.

Accordingly, one might have expected, if the “asymmetry thesis” is correct, that Cognitive Reflection Test scores would be negatively correlated with conservativism.  Studies based on nonrepresentative samples—ones consisting of M Turk workers or of individuals who visited a web site dedicated to disseminating research findings on moral reasoning style—have reported such a finding.

But in my large, nationally representative sample, scores on the Cognitive Reflection Test were not meaningfully correlated with political outlooks.

Actually, there was a very small positive correlation between cognitive reflection and identification with the Republican Party.  But it was too tiny to be of any consequence for anything as consequentially large as the conflict over climate change.

Moreover, there was essentially zero correlation between cognitive reflection and a more reliable, composite measure of ideology and political party membership.
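For the curious, this is roughly what constructing a composite of that kind involves—standardize the items, average them, and correlate the resulting scale with CRT. A generic sketch with random data, not the paper’s actual scaling procedure.

```python
# Generic composite-scale construction -- random data, hypothetical items.
import numpy as np

rng = np.random.default_rng(3)
n = 1500

# Hypothetical raw items: 5-point ideology, 7-point party identification,
# and 0-3 CRT scores.
ideology = rng.integers(1, 6, size=n).astype(float)
party_id = rng.integers(1, 8, size=n).astype(float)
crt = rng.integers(0, 4, size=n).astype(float)

z = lambda v: (v - v.mean()) / v.std()   # standardize each item
conserv = (z(ideology) + z(party_id)) / 2  # composite right-left score

r = np.corrcoef(conserv, crt)[0, 1]
print(f"r(composite conservatism, CRT) = {r:.3f}")  # ~0 with random data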

Because I think the only valid way to test for motivated reasoning is to do experimental tests that feature that phenomenon, I don’t really care that much about correlations between cognitive style measures and ideology.

But if I were someone who did think that such correlations were important, I’d likely find it pretty interesting that conservativism doesn’t correlate with Cognitive Reflection Test scores.  Because this test is now widely regarded as a better measure of the disposition to engage in critical reasoning than are the variety of self-report measures on which the “asymmetry thesis” literature rests—and, as I said, has been featured prominently in recent studies of the cognitive reasoning style associated with religiosity—the lack of any correlation between it and conservative political outlooks raises some significant questions about exactly what the correlations reported in that literature were truly measuring.

For this reason, I anticipate that “asymmetry thesis” supporters will focus their attention on this particular finding in the study.  Yet it’s actually not the finding that is most damaging to the “asymmetry thesis”; the experimental finding of symmetry in motivated reasoning is!  Indeed, I obviously don’t think the Cognitive Reflection Test—or any other measure of effortful, conscious information processing for that matter—is a valid test of open-mindedness (which isn't to say there might not be one; I'd love to find it!).  But it has been amusing—a kind of illustration of the experiment result itself—to see “asymmetry thesis” proponents, in various responses to the working paper version of the study, attack the Cognitive Reflection Test as “invalid” as a measure of the sort of “closed mindedness” that their position rests on!

One final note:

The study characterizes differences in individuals’ predispositions with a measure of their right-left political leanings rather than their cultural worldviews. I’ve explained before that “liberal-conservative ideology” and “cultural worldviews” can be viewed as alternative observable “indicators” of the same latent motivating disposition.  I think cultural worldviews are better, but I used political outlooks here in order to maximize engagement with those researchers who study motivated reasoning in political psychology, including those who are interested in the “asymmetry thesis,” the probing of which was, as indicated, a secondary but still important objective of the study. I have also analyzed the study data using cultural worldviews as the predisposition measure and reported the results in a separate blog post.

Sunday
Jul282013

Weekend update 2: Money talks, bullshit on scientific consensus (including lack thereof) walks

The comment thread following yesterday's "update" on the persistent, and persistently unenlightening, debate over the most recent "97% consensus" study has only renewed my conviction that anyone genuinely interested in helping confused and curious members of the public to assess the significance of the best available evidence on climate change would not be bothering with surveys of scientists but would instead be creating a market index in securities the value of which depends on global warming actually occurring.

I've explained previously how such an index would operate as a beacon of collective wisdom, beaming a signal of considered judgment through a filter of economic self-interest that removes the distorting influence of cultural cognition & like forms of bias.

I just instructed my broker to place an order for $153,252 worth of stocks in firms engaged in arctic shipping. I wonder how many of the people arguing against the validity of the Cook et al. study are shorting those same securities?

Saturday
Jul272013

Weekend update: The distracting, counterproductive "97% consensus" debate grinds on

I don’t want to go back there but since tens of millions of people get all their news exclusively from this blog (oh, btw, there was a royal baby, everyone, in case any of you care) I felt that I ought to note that controversy continues to attend the Cook et al. study finding that “97%” of climate scientists agree that human activity is contributing to climate change.

Studies making materially identical findings have been appearing at regular intervals for the better part of a decade. Every time, they are widely heralded; indeed, the media have been saturated with claims that there is “scientific consensus” on climate change since at least 2006, when Al Gore made that message the centerpiece of a $300-million effort to build public support for policies to reduce carbon emissions in the U.S.

But it is demonstrably the case (I'm talking real-world evidence here) that the regular issuance of these studies, and the steady drum beat of “climate skeptics are ignoring scientific consensus!” that accompany them, have had no—zero, zilch—net effect on professions of public “belief” in human-caused climate change in the U.S.

On the contrary, there’s good reason to believe that the self-righteous and contemptuous tone with which the “scientific consensus” point is typically advanced (“assault on reason,” “the debate is over” etc.) deepens polarization.  That's because "scientific consensus," when used as a rhetorical bludgeon, predictably excites reciprocally contemptuous and recriminatory responses by those who are being beaten about the head and neck with it.

Such a mode of discourse doesn't help the public to figure out what scientists believe. But it makes it as clear as day to them that climate change is an "us-vs.-them" cultural conflict, in which those who stray from the position that dominates in their group will be stigmatized as traitors within their communities.  

This is not a condition conducive to enlightened self-government.

Nevertheless, the authors of the most recent study announced (in a press release issued by the lead author’s university) that “when people understand that scientists agree on global warming, they’re more likely to support policies that take action on it,” a conclusion from which the authors inferred that “making the results of our paper widely-known is an important step toward closing the consensus gap and increasing public support for meaningful climate action.”

Unsurprisingly, the study has in the months since its publication supplied a focal target for climate skeptics, who have challenged the methods the authors employ.

It’s silly to imagine that ordinary members of the public can be made familiar with results of particular studies like this.  

But it’s very predictable that they will get wind of continuing controversy over “what scientists believe” so long as advocates keep engaging in impassioned, bitter, acrimonious debates about the validity of studies like this one.

That’s too bad because, again, the best evidence on why the public remains divided on climate change is the surfeit of cues that the issue is one that culturally divides people.  Those cues motivate members of the public to reject any evidence of “scientific consensus” that suggests it is contrary to the position that predominates in their group. Under these circumstances, one can keep telling people that there is scientific consensus on issues of undeniable practical significance, and a substantial proportion of them just won’t believe what one is saying.

The debate over the latest “97%” paper multiplies the stock of cues that climate change is an issue that defines people as members of opposing cultural groups. It thus deepens the wellsprings of motivation that they have to engage evidence in a way that reinforces what they already believe. The recklessness  that the authors displayed in fanning the flames of unreason that fuels this dynamic is what motivated me to express dismay over the new study.

But look: Matters like these are admittedly complex and open to reasonable disagreement. I could be wrong, and I welcome evidence & reasoned argument that would give me reason to revise my views. In the best spirit of scholarly conversation, the lead author of the latest "97%" study, John Cook, penned a very perceptive, engaging, and gracious response--and I urge people to take a look at it & decide for themselves if my reaction was well-founded.

So what’s the new development?

Mike Hulme, a climate scientist who is famous for his own conjectures about public conflict over climate change, has apparently added his voice to the chorus of critics.

I say apparently because the comments attributed to Hulme appear in a short on-line comment on a blog post that described an interview of the UK Secretary of State for Energy and Climate Change. I assume Hulme must be the actual author of the comment because no one seems to be challenging that and he hasn’t disavowed it. 

Anyway, in the comment, Hulme (assuming it’s him!) acidly states:

Needless to say, the comment—because it comes from a figure of significant stature among proponents of aggressive policy engagement with the risks posed by climate change—has lifted the frenzy surrounding the latest “97%” study to new heights (most noticeably in dueling twitter posts, a form of exchange more suited for playground-style taunting than serious discussion).

What to say?

First, what a sad spectacle.  Honestly, it’s hard for me to conceive of an issue that could be further removed from the important questions here—ones involving what the best empirical evidence reveals about climate change and about the pathologies that make public debate impervious to the same—than whether the latest “97%” study is “sound.”

Second, I think Hulme’s frustration, while probably well-founded, is not as well articulated as it should be.  What exactly does he mean, e.g., when he says “public understanding of the climate issue has moved on”?  The statement admits of myriad interpretations, many of which would be clearly false (such as that polarization in the U.S., e.g., has abated). 

Of course, it's not reasonable to expect perfect clarity or cogency in a 5-sentence blog comment. Hulme has written a very thoughtful essay in which he presents an admirably clear and engaging case against trying to buy public consensus in the currency of appeals to the authority of "scientific consensus." His argument is founded on the manifestly true point that science's way of knowing consists neither in nose counting nor appeals to authority--and to proceed as if that weren't so demeans science and makes the source of the argument look like a fool.

My position is slightly different from his, I think.

I'd say it makes perfect sense for the public to try to give weight to what they perceive to be the dominant view on decision-relevant science. Indeed, it's a form of charming but silly romanticism to think that ordinary members of the public should "take no one's word for it" (nullius in verba) but rather try to figure out for themselves who is right when there are (as is inevitably so) debates over decision-relevant science.

Members of the public are not experts on scientific matters. Rather they are experts in figuring out who the experts are, and in discerning what the practical importance of expert opinion is for the decisions they have to make as individuals and citizens.  

Ordinary citizens are amazingly good at this.  Their use of this ability, moreover, is not a substitute for rational thought; it is an exercise of rational thought of the most impressive sort.

But in a science communication environment polluted with toxic partisan meanings, the faculties they use to discern what most scientists believe are impaired.

The problem with the suggestion of the authors of the latest "97%" study that the key is to "mak[e] the results of [their] paper widely-known" is that it diverts serious, well-intentioned people from efforts to clear the air of the toxic meanings that impede the processes that usually result in public convergence on the best available (and of course always revisable!) scientific conclusions about how people can protect themselves from serious risks.

Indeed, as I indicated, the particular manner in which the "scientific consensus" trope is used by partisan advocates tends only to deepen the toxic fog of cultural conflict that makes it impossible for ordinary citizens to figure out what the best scientific evidence is. 

Meanwhile, time is “running out.”  On what? Maybe on the opportunity to engage in constructive policies on climate change.

But more immediately, time is running out on the opportunity to formulate a set of genuinely evidence-based strategies for promoting constructive engagement with the IPCC’s 5th Assessment, which will be issued in installments beginning this fall. It will offer an authoritative statement of the best current evidence on climate change.

Much of what it has to say, moreover, will consist in important revisions and reformulations of conclusions contained in the 4th Assessment.

That’s inevitable; it is in the nature of science for all conclusions to be provisional, and subject to revision with new evidence.

In the case of climate change, moreover, revised assessments and forecasts can be expected to occur with a high degree of frequency because the science involved consists in iterative modeling of complex, dynamic systems—a strategy for advancing knowledge that (as I’ve discussed before) self-consciously contemplates calibration through a process of prediction & error-correction carried out over time.

My perspective is limited, of course. But from what I see, it is becoming clearer and clearer that those who have dedicated themselves to promoting public engagement with the best available scientific evidence on climate change are not dealing with the admittedly sensitive and challenging task of explaining why it is normal, in this sort of process, to encounter discrepancies between forecasting models and subsequent observations and to adjust the models based on them.  And why such adjustment in the context of climate change is cause for concluding neither that “the science was flawed” nor that “there is in fact nothing for anyone to be concerned about.”

Part of the evidence, to me, that they aren’t preparing to do this is how much time they are wasting instead debating irrelevant things like whether “97%” of scientists believe a particular thing.

p.s. Please don’t waste your & readers’ time by posting comments saying (a) that I am arguing there isn’t scientific consensus on issues of practical significance on climate change (I believe there is); (b) that I think it is “unimportant” for the public to know that (it’s critical that it be able to discern this); or (c) that I am offering up no “alternative” to continuing to rely on a strategy that I say doesn’t work (not true; but if it were--then what? I should nod approvingly if you propose that we all resort to prayer, too?).  Not only are none of these things either stated or implied in what I’ve written. They are mistakes that I’ve corrected multiple times (e.g., here, here, here . . .).

Friday
Jul262013

Dual process reasoning, Liberalism, & disgust

Interesting discussion ongoing in connection with Yoel Inbar's guest post Is Disgust a Uniquely "Conservative" Moral Emotion? I think the contributions made to it so far are more interesting than anything I have to say today, and I am loath to preempt additional contributions to that discussion. So today is an official "more discussion" day.

But just to give a sense of the nature of the matters being discussed, among the interesting questions that came up (in an exchange w/ Inbar initiated by Jon Baron)  is the relationship between the "disgust is conservative" thesis (DIC) and dual-process reasoning theories (DRT) in moral psychology.  Consider two possibilities:

A. The two could be combined. E.g., one could take the view (1) that moral reasoning is reliable & valid only when it is guided either exclusively by conscious reflection or by intuitive sensibilities including emotions the content of which would be validated by reflection; (2) that disgust is unreliable because it is either unreflective or, on reflection, not susceptible to validation by a normatively defensible moral theory; and (3) that disgust is characteristically "conservative" either b/c conservatism is associated with a cognitive style hostile to cognitive reflection or b/c disgust involves moral appraisals that on reflection are "conservative"--or, more interestingly, illiberal in the sense of being antagonistic to key premises of Liberalism understood in the political philosophical sense.

B. Alternatively, one could separate DIC from DRT.  The validity of moral reasoning, on this account, doesn't depend on its involving or being validated by reflection. Indeed, one might believe that emotions and other "automatic," "intuitive," "unconscious," "perceptive" etc. forms of cognition play some indispensable role in moral reasoning--a role that can't be reproduced by conscious reflection, etc. On this view, then, diverse moral styles would be distinguished not by the degree of reflection they involve, necessarily, but by the nature of the appraisals that are embodied in the emotions that those who subscribe to them use to size up goods and states of affairs.  "Disgust" would be "conservative," this account would say, insofar as disgust reliably guides appraisals to the ones that fit the "conservative" moral style. But "liberals" would then be understood to be relying on some alternative emotion or set of emotions calibrated to generating "liberal" perceptions and related affective stances toward those same goods and states of affairs.

Baron, as I understood him, was taking issue with Inbar on the assumption that Inbar subscribed to something like position A.  Inbar replied that he was somewhere closer to B -- or at least that he thought "liberals" as well as "conservatives" were relying on emotion to the same extent in their reasoning; he expressed uncertainty as to whether emotion is simply a heuristic substitution for reflection in moral reasoning or a unique and indispensable ingredient of it.

I had tried to identify scholars who clearly are committed to either A or B.  I proposed Martha Nussbaum for B.  For A, I suggested maybe John Jost, although in fact he has not (as far as I know) written about disgust. I suggested that I saw Haidt as sometimes A & sometimes B, although Inbar offered that he viewed Haidt as pretty clearly in camp B.

As it turns out, I happened to read an excellent article yesterday that is pure, unadulterated A:


We review evidence that disgust, in the context of bodily moral violations, differs from other emotions of moral condemnation, particularly anger, in three different senses of the word unreasoning. First, bodily moral disgust is weakly associated with situational appraisals, such as whether a behavior is harmful or justified. Instead, it tends to be based on associations with a category of object or act; certain objects are just disgusting. Second, bodily moral disgust is relatively insensitive to context, both in thoughts and behaviors, and therefore disgust is less likely to change from varying contexts. Third, bodily moral disgust is less likely to be justified with external reasons; instead, persons often use their feelings of disgust as a tautological justification. These unreasoning traits can make disgust a problematic sociomoral emotion for a liberal society because it ignores factors that are important to judgments of fairness, such as intentionality, harm, and justifiability.

Very much worth reading! And further evidence, as Inbar emphasized in his excellent post, that debate in this area remains vibrant and ongoing.

There were other interesting issues under debate too, including regular commentator Larry's surmise that disgust is a kind of feigned strategic posturing on the part of "liberals."

I propose that additional comments -- I hope there will be some! -- be added to the existing trail originating in Yoel's post.

Wednesday
Jul242013

"Integrated & reciprocal": Dual process reasoning and science communication part 2

This is the second in what was to be a two-part series on dual process reasoning and science communication.  Now I’ve decided it must be three!

In the first, I described a conception of dual process reasoning that I don’t find compelling. In this one, I’ll describe another that I find more useful, at least for trying to make sense of and dispel the science communication problem. What I am planning to do in the 3rd is something you’ll find out if you make it to the end of this post.

A brief recap (skip down to the red type below if you have a vivid recollection of part 1):

Dual process theories (DPT) have been around a long time and come in a variety of flavors. All the various conceptions, though, posit a basic distinction between information processing that is largely unconscious, automatic, and more or less instantaneous, on the one hand, and information processing that is conscious, effortful, and deliberate, on the other. The theories differ, essentially, over how these two relate to one another.

In the first post I criticized one conception of DPT, which I designated the “orthodox” view to denote its current prominence in popular commentary and synthetic academic work relating to risk perception and science communication.

The orthodox conception, which reflects the popularity and popularization of Kahneman’s appropriately influential work, sees the “fast,” unconscious, automatic type of processing—which it refers to as “System 1”—as the default mode of processing.

System 1 is tremendously useful, to be sure. Try to work out the optimal path of evasion by resort to a methodical algorithm and you’ll be consumed by the saber-tooth tiger long before you complete your computations (etc).

But System 1 is also prone to error, particularly when used for assessing risks that differ from the ones (like being eaten by saber-tooth tigers) that were omnipresent at the moment of our evolutionary development during which our cognitive faculties assumed their current form.

Our prospects for giving proper effect to information about myriad modern risks—including less vivid and conspicuous but nevertheless highly consequential ones, like climate change; or more dramatic and sensational but actuarially less significant ones like those arising from terrorism or from technologies like nuclear power and genetically modified foods the benefits of which might be insufficiently vivid to get System 1’s attention—depends on our capacity, time, and inclination to resort to the more effortful, deliberate, “slow” kind of reasoning, which the orthodox account labels “System 2.”

This is the DPT conception I don’t like.

I don’t like it because it doesn’t make sense.

The orthodox position’s picture of “reliable” System 2 “monitoring” and “correcting” “error-prone” System 1 commits what I called the “System 2 ex nihilo fallacy”—the idea that System 2 creates itself “out of nothing” in some miraculous act of spontaneous generation.

Nothing makes its way onto the screen of consciousness that wasn’t instants earlier floating happily along in the busy stream of unconscious impressions.  Moreover, what yanked it from that stream and projected it had to be some unconscious mental operation too, else we face a problem of infinite regress: if it was “consciously” extracted from the stream of unconsciousness, something unconscious had to tell consciousness to perform that extraction.

I accept that the sort of conscious reflection on and re-assessment of intuition associated with System 2 truly & usefully occurs.  But those things can happen only if something in System 1 itself—or at least something in the nature of a rapid, automatic, unconscious mental operation—occurs first to get System 2's attention.

So the Orthodox DPT conception is defective. What’s better?

I will call the conception of DPT that I find more compelling “IRM,” which stands for the “integrated, reciprocal model."

The orthodox conception sees “System 1” and “System 2” as discrete and hierarchical.  That is, the two are separate, and System 2 is “higher” in the sense of more reliably connected to sound information processing.

“Discrete and hierarchical” is clearly how Kahneman describes the relationship between the two modes of information processing in his Nobel lecture.

For him, System 1 and 2 are "sequential": System 1 operations automatically happen first; System 2 ones occur next, but only sometimes. So the two are necessarily separate. 

Moreover, what System 2 does when it occurs is check to see if System 1 has gotten it right. If it hasn’t, it “corrects” System 1’s mistake. So System 2 “knows better,” and thus sits atop the hierarchy of reasoning processes within an ordering that ranks their contribution to rational thought.

IRM sees things differently. It says that “rational thought” occurs as a result of System 1 and System 2 working together, each supplying a necessary contribution to reasoning. That’s the integrated part.

Moreover, IRM posits that the ability of each to make its necessary contribution is dependent on the other’s contribution. 

As the “System 2 ex nihilo” fallacy helps us to see, conscious reflection can make its distinctive contribution only if summoned into action by unconscious, automatic System 1 processes, which single out particular unconscious judgments as fit for the sort of interrogation that System 2 is able uniquely to perform.

But System 1 must be selective: there are far too many unconscious operations going on for all to be monitored, much less forced onto the screen of conscious thought, which would be overwhelmed by such indiscriminate summoning! But in being selective, it has to pick out the "right" impressions for attention & not ignore the ones unreflective reliance on which would defeat an agent's ends.

How does System 1 learn to perform this selection function reliably? From System 2, of course.

The ability to perform the conscious reasoning that consists in making valid inferences from observation, and the experience of doing so regularly, are what calibrate unconscious processes and train them to select some impressions for the attention of System 2, which is then summoned to attend to them.

When it is summoned, moreover, System 2 does exactly what the orthodox view imagines: it checks and corrects, and on the basis of mental operations that are indeed more likely to get the “right” answer than those associated with System 1.  That event of correction will itself conduce to the calibration and training of System 1.

That’s the reciprocal part of IRM: System 2 acts on the basis of signals from System 1, the capacity of which to signal reliably is trained by System 2.

I do not by any means claim to have invented IRM!  I am synthesizing it from the work of many brilliant decision scientists.

The one who has made the biggest contribution to my view that IRM, and not the Orthodox conception of DPT, is correct is the brilliant social psychologist Howard Margolis.

Margolis presented an IRM account, as I’ve defined it, in his masterful trilogy (see the references below) on the role that “pattern recognition” makes to reasoning. 

Pattern recognition is a mental operation in which a phenomenon apprehended via some mode of sensory perception is classified on the basis of a rapid, unconscious process that assimilates the phenomenon to a large inventory of “prototypes” (“dog”; “table”; “Hi, Jim!”; “losing chess position”; “holy shit—those are nuclear missile launchers in this aerial U2 reconnaissance photo! Call President Kennedy right away!” etc).

For Margolis, every form of reasoning involves pattern recognition.  Even when we think we are performing conscious, deductive or algorithmic mental operations, we are really just manipulating phenomena in a manner that enables us to see the pattern in the manner requisite to an accurate and reliable form of unconscious prototypical classification. Indeed, Margolis ruthlessly shreds theories that identify critical thinking with conscious, algorithmic or logical assessment by showing that they reflect the incoherence I've described as the "System 2 ex nihilo fallacy."

Nevertheless, how well we perform pattern recognition, for Margolis, will reflect the contribution of conscious, algorithmic types of reasoning.  The use of such reasoning (particularly in collaboration with experienced others, who can vouch through the use of their trained pattern-recognition sensibilities that we are arriving at the “right” result when we reason this way) stocks the inventory of prototypes and calibrates the unconscious mental processes that are used to survey and match them to the phenomena we are trying to understand.
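If it helps to see the shape of that reciprocal loop, here is a deliberately cartoonish sketch—my own toy, not anything Margolis wrote, since his account is verbal rather than computational—of fast prototype matching (“System 1”) being recalibrated by slow, effortful checking (“System 2”).

```python
# A cartoon of the IRM loop: fast pattern matching against stored prototypes,
# with slow, effortful checks that recalibrate the prototype inventory.
# Purely illustrative; all labels and numbers are invented.
import numpy as np

rng = np.random.default_rng(2)

# Initial (miscalibrated) prototype inventory.
prototypes = {"safe": np.array([0.0, 0.0]), "risky": np.array([3.0, 3.0])}

def system1(x):
    """Fast, automatic: match x to the nearest stored prototype."""
    return min(prototypes, key=lambda k: np.linalg.norm(x - prototypes[k]))

def system2(x, true_label):
    """Slow, effortful: check the snap judgment; if wrong, nudge the
    relevant prototype toward the misclassified case (calibration)."""
    guess = system1(x)
    if guess != true_label:
        prototypes[true_label] += 0.2 * (x - prototypes[true_label])
    return guess

# Experience: cases drawn near the *true* category centers, so System 2
# corrections gradually train System 1's inventory.
for _ in range(200):
    label = rng.choice(["safe", "risky"])
    center = {"safe": [1.0, 1.0], "risky": [2.0, 4.0]}[label]
    x = rng.normal(center, 0.5)
    system2(x, label)

print(prototypes)  # drifted toward the true centers: System 1 trained by System 2
```

The design choice to put the learning step inside the “System 2” check is the whole point: the fast matcher never improves except through episodes of slow correction.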

As I have explained in a previous post (one comparing science communication and “legal neutrality communication”), this position is integral to Margolis’s account of conflicts between expert and lay judgments of risk. Experts, through a process that involves the conscious articulation and sharing of reasons, acquire a set of specialized prototypes, and an ability reliably to survey them, suited to their distinctive task. 

The public necessarily uses a different set of prototypes—and sees different things—when it views the same phenomena.  There are bridging forms of pattern recognition that enable nonexperts to recognize who the “experts” are—in which case, the public will assent to the experts’ views (their “pictures,” really).  But sometimes the bridges collapse; and there is discord.

Margolis’s account is largely (and brilliantly) synthetic—an interpretive extrapolation from a wide range of sources in psychology and related disciplines.  I don’t buy it in its entirety, and in particular would take issue with him on certain points about the sources of public conflict on risk perception.

But the IRM structure of his account seems right to me.  It is certainly more coherent—because it avoids the ex nihilo fallacy—than the Orthodox view.  But it is also in better keeping with the evidence. 

That evidence, for me, consists not only in the materials surveyed by Margolis.  It includes, too, the work of contemporary decision scientists.

The work of some of those decision scientists—and in particular that of Ellen Peters—will be featured in Part 3.

I will also take up there what is in fact the most important thing, and likely what I should have started with: why any of this matters.

Any “dual process theory” of reasoning will necessarily be a simplification of how reasoning “really” works.

But so will any alternative theory of reasoning or any theory whatsoever that has any prospect of being useful.

Better than simplifications, we should say such theories are, like all theories in science, models of phenomena of interest.

The success of theories as models doesn’t depend on how well they “correspond to reality.”  Indeed, the idea that that is how to assess them reflects a fundamental confusion: the whole point of “modeling” is to make tractable and comprehensible phenomena that otherwise would be too complex and/or too remote from straightforward ways of seeing to be made sense of otherwise.

The criteria for judging the success of competing models of that sort are pragmatic: How good is this model relative to that one in allowing us to explain, predict, and formulate satisfying prescriptions for improving our situation?

In Part 3, then, I will also be clear about the practical criteria that make IRM conception so much more satisfying than the Orthodox conception of dual process reasoning.

Those criteria, of course, are ones that reflect my interest (and yours; it is inconceivable you have gotten this far otherwise) in advancing the scientific study of science communication—& thus perfecting the Constitution of the Liberal Republic of Science.

References

Margolis, H. Patterns, Thinking, and Cognition: A Theory of Judgment (University of Chicago Press, 1987).

Margolis, H. Paradigms and Barriers: How Habits of Mind Govern Scientific Beliefs (University of Chicago Press, 1993).

Margolis, H. Dealing with Risk: Why the Public and the Experts Disagree on Environmental Issues (University of Chicago Press, 1996).

Tuesday
Jul232013

Is Disgust a Uniquely "Conservative" Moral Emotion?

As the 14 billion regular readers of this blog know, I went through a period where I was obsessed with disgusting things. Not incest or coprophagia, or any of that mundane stuff, but rather things like the "Crickett," the miniaturized but fully functional .22 rifle that is marketed under the logo "My first rifle!," and that is intended to be purchased by parents for preadolescent children (they come in a variety of styles featuring child-attractive motifs, like pink-colored laminated stocks meant to appeal to young girls) in order to introduce them to the wonders of a cultural style in which guns are symbols of shared commitments and also instruments or tools that enable various sorts of role-specific behavior that transmit and propagate commitment to that style....

People who harbor an opposing style say they are disgusted by the Crickett--and I see (feel) where they are coming from.  That place, moreover, is very remote from "conservative" political ideology or a "conservative" moral style, which Jonathan Haidt and others have identified in extremely important and appropriately influential work as uniquely (or at least disproportionately) associated with the use of "disgust" as a moral sensibility. Rather, they seem like the people who subscribe to the "liberal" moral style that, in the work of Haidt and others, makes no or at least less use of disgust as a form of moral appraisal and instead relies on perceptions of harm.

The reaction to the Crickett--that it and the way of life in which it figures are disgusting (a reaction widely expressed in the aftermath of the heavily covered tragic accidental shooting of a two-year old Kentucky girl by her Crickett-toting five-year old brother)--seemed like evidence to me for a different position, one I associate with Mary Douglas and William Miller, who view disgust as a universal moral sensibility that adherents to diverse cultural systems across place and time make use of to focus their perception of the objects and behavior characteristic of opposing styles; and to motivate their denunciation of them, in terms that are strikingly illiberal in the sense of being disconnected from harm, which is imputed to behavior that offends the cultural norms of those experiencing this reaction...

Readers also know that one of my favorite strategies for advancing my own knowledge and that of others is to recklessly offer my own conjectures on matters such as this as a way of luring/provoking those who know more to respond & correct the myriad mistakes they see in my ruminations!  Well, I've succeeded once again!

Below is an amazingly thoughtful & penetrating response from Yoel Inbar. Inbar is a social psychologist whose work on disgust, which is broadly in alignment with the account I attributed to Haidt, is of tremendous quality and importance and central to ongoing scholarly discussion of the role of disgust in informing moral and related sensibilities.  He takes issue with me, of course! I am much smarter as a result of reading and thinking about his essay & offer it to my loyal readers so that they can enjoy the same benefit!

Is Disgust a Uniquely "Conservative" Moral Emotion?

Yoel Inbar

Among politically liberal academics, the emotion of disgust has an unsavory reputation. The philosopher Martha Nussbaum has argued that disgust is wielded by privileged social groups to marginalize and dehumanize those of lower status, and indeed research has found that the disgust-prone are more negative towards immigrants, foreigners, and "social deviants." Furthermore, disgust seems to have a relationship with political conservatism: self-described political conservatives are more easily disgusted, and states where people are on average more disgust sensitive were (all else equal) more likely to go for McCain over Obama in the 2008 U.S. presidential election. A tempting conclusion for liberals might be that disgust is an irrational, immoral, and politically suspect emotion, at least when it is applied to morality.

Yet the view that disgust as a moral emotion is only important to political conservatives has a problem: on its face, it seems obviously wrong. As Dan Kahan pointed out on this blog, political liberals often use the word "disgust" when talking about things they find immoral: liberals say they are disgusted by multi-million-dollar Wall Street bonuses, gun manufacturers who make weapons for 10-year-olds, racism, and lots of other things. Doesn't this mean that liberals are just as likely as conservatives to base their moral judgments on disgust? Perhaps (liberal) researchers are simply more likely to label moral positions that they disagree with as disgust-based (and therefore, by implication, irrational) while giving positions they agree with a free pass.

Although political bias in social psychology is a real problem, this objection misses a crucial difference between liberals and conservatives, namely what they find morally objectionable. There are some behaviors that are at least in theory harmless, but (for lack of a better word) gross. For example, consider a man who, every Saturday, buys a whole chicken at the supermarket, masturbates into it, cooks it, and eats it for dinner (this wonderful and by now famous story was invented by Jon Haidt). Almost everyone finds this disgusting. However, most liberals will concede that despite being disgusting, having sex with a chicken and consuming it is not morally wrong, because no one is harmed (after all, the chicken is already dead). Many conservatives (although by no means all) will say that despite being harmless, this behavior is wrong--because it is disgusting. In fact, conservatives are more likely than liberals to say that many different kinds of disgusting-but-harmless behaviors are morally wrong. Unusual habits regarding food, hygiene, and (especially) sex are often seen by conservatives as immoral regardless of whether they directly harm anyone. And the emotion that people feel when contemplating these kinds of behaviors (which Haidt and his colleagues have called purity violations) is disgust. Certainly Western liberals may also feel disgusted when considering these behaviors, but they are often reluctant to call them immoral unless they can point to a victim--to someone who is directly harmed.

Of course, many people who morally object to (for example) certain kinds of sex between consenting adults claim that their objection is motivated by the putative harm caused by the behavior, not by the observer's queasy feelings. In such a case, how are we to know whether beliefs about harm caused the moral conviction, or whether they are merely post-hoc rationalizations of a (disgust-based) moral intuition? This is a difficult question, but there are several good reasons to think the latter answer is right: 1) When Jon Haidt and his collaborator, Matthew Hersh, asked liberals and conservatives to defend their views about the moral permissibility of anal sex between two men, conservatives but not liberals were likely to defend their beliefs even when they admitted they could not give (harm-based) justifications for them (a phenomenon Haidt has called moral dumbfounding); 2) in the same study, judgments of moral permissibility were statistically predicted by subjects' self-reported emotional reactions to imagining the acts in question, and not by their judgments of their harmfulness; 3) when people are asked directly about how much different considerations are relevant to deciding whether something is right or wrong, conservatives rate "whether someone violated standards of purity and decency" and "whether or not someone did something disgusting" as more morally relevant than do liberals.

What, then, of liberals who say they're disgusted by gun manufacturers or Goldman Sachs? Well, it turns out that "disgust" is a tricky term, at least in English--many laypeople use "disgusted" in a metaphorical sense, to mean "angry." As David Pizarro and I recently argued, with one or two exceptions there's very little evidence that people are physically disgusted by immoral behavior that doesn't involve food, cleanliness, or sex. In fact, recent research by Roberto Gutierrez, Roger Giner-Sorolla, and Milica Vasiljevic suggests that people use the word "disgust" to mean physically disgusted when judging unusual sexual or dietary practices, but use the same word to mean something much closer to "angry" when judging instances of deceit or exploitation.  Of course, this is an area that's actively being researched at the moment, and this may change, but the balance of evidence so far suggests that when people use "disgust" to refer to their reactions to unfairness, exploitation, or violations of someone's rights, they are doing so metaphorically, not literally.

This is not to say that disgust qua disgust plays no role in liberals' moral judgments. For example, consider another story invented by Jon Haidt: Mark and Julie are siblings who are vacationing together in the south of France. One night, they decide that it would be fun and interesting if they tried making love. Julie is on birth control, but just to be safe Mark also uses a condom. They both enjoy the experience, but they decide not to do it again and to keep it a special secret between the two of them. Was this morally wrong? Here, liberals and conservatives seem equally likely to say "yes"--and equally unable to back up those judgments with harm-based justifications. When Jon Haidt and Matthew Hersh asked their undergraduate subjects about the moral permissibility of incest, they found that liberals were just as likely as conservatives to reject it, and just as likely to become morally dumbfounded when attempting to defend their judgments. For both liberals and conservatives, visceral disgust sometimes leads to moral revulsion, but this seems to be more common for conservatives. This is likely to be for two reasons: 1) conservatives are more readily disgusted in general; and 2) conservatives seem to be more comfortable pointing to feelings of disgust as a justification for moral beliefs (for example, conservative bioethicist Leon Kass's well-known argument for the "wisdom of repugnance").

Does this mean that liberals are better moral decision-makers than conservatives? After all, if conservatives base more of their moral judgments on disgust, an unreasoned emotion, and liberals base more of their moral judgments on whether someone was harmed or treated unfairly, doesn't this mean that liberals are more careful, thoughtful, and reasoned in their moral judgments? The answer is unambiguously no. There is no evidence that liberals are any less likely to base their moral judgments on (unreasoned) intuitions than conservatives, although liberals and conservatives do often rely on different moral intuitions. But what moral intuitions underlie the moral judgments of political liberals, and why these intuitions can be just as fallible as those of conservatives, are questions big enough to leave for a separate post.
