
Recent blog entries
Sunday
Jul 1, 2012

A not so "tasty" helping of pollution for the science communication environment -- at the local grocery store

Compliments of a colleague, who snapped this photo in a New Haven food market.

Keith Kloor has been writing perceptively on the anti-GMO campaign recently (here & here, e.g.), as has David Tribe amidst his regular enlightening posts on all matters GMO & GMO-related.

 

Sunday
Jul 1, 2012

The cultural certification of truth in the Liberal Republic of Science (or part 2 of why cultural cognition is not a bias)  

This is post no. 2 on the question “Is cultural cognition a bias,” to which the answer is, “nope—it’s not even a heuristic; it’s an integral component of human rationality.”

Cultural cognition refers to the tendency of people to conform their perceptions of risk and other policy-consequential facts to those that predominate in groups central to their identities. It’s a dynamic that generates intense conflict on issues like climate change, the HPV vaccine, and gun control.

Those conflicts, I agree, aren’t good for our collective well-being. I believe it’s possible and desirable to design science communication strategies that help to counteract the contribution that cultural cognition makes to such disputes.

I’m sure I have, for expositional convenience, characterized cultural cognition as a “bias” in that context. But the truth is more complicated, and it’s important to see that—important, for one thing, because a view that treats cultural cognition as simply a bias is unlikely to appreciate what sorts of communication strategies are likely to offset the conditions that pit cultural cognition against enlightened self-government.

In part 1, I bashed the notion—captured in the Royal Society motto nullius in verba, “take no one’s word for it”—that scientific knowledge is inimical to, or even possible without, assent to authoritative certification of what’s known.

No one is in a position to corroborate through meaningful personal engagement with evidence more than a tiny fraction of the propositions about how the world works that are collectively known to be true. Or even a tiny fraction of the elements of collective knowledge that are absolutely essential for one to accept, whether one is a scientist trying to add increments to the repository of scientific insight, or an ordinary person just trying to live.

What’s distinctive of scientific knowledge is not that it dispenses with the need to “take it on the word of” those who know what they are talking about, but that it identifies as worthy of such deference only those who are relating knowledge acquired by the empirical methods distinctive of science.

But for collective knowledge (scientific and otherwise) to advance under these circumstances, it is necessary that people—of all varieties—be capable of reliably identifying who really does know what he or she is talking about.

People—of all varieties—are remarkably good at doing that. Put 100 people in a room and tell them to solve, say, a calculus problem, and likely one will genuinely be able to solve it and four will mistakenly believe they can. Let the people out 15 mins later, however, and it’s pretty likely that all 100 will know the answer. Not because the one who knew will have taught the other 99 how to do calculus, but because that’s the amount of time it will take the other 99 to figure out that she (and none of the other four) was the one who actually knew what she was talking about.

But obviously, this ability to recognize who knows what they are talking about is imperfect. Like any other faculty, too, it will work better or worse depending on whether it is being exercised in conditions that are congenial or uncongenial to its reliable functioning.

One condition that affects the quality of this ability is cultural affinity. People are likely to be better at “reading” people—at figuring out who really knows what about what—when they are interacting with others with whom they share values and related social understandings. They are, sadly, more likely to experience conflict with those whose values and understandings differ from theirs, a condition that will interfere with transmission of knowledge.

As I pointed out in the last post, cultural affinity was part of what enabled the 17th and early 18th Century intellectuals who founded the Royal Society to overturn the authority of the prevailing, nonempirical ways of knowing and to establish in their stead science’s way. Their shared values and understandings underwrote both their willingness to repose their trust in one another and their disposition (for the most part!) not to abuse that trust. They were thus able to pool, and thus efficiently build on and extend, the knowledge they derived through their common use of scientific modes of inquiry.

I don’t by any means think that people can’t learn from people who aren’t like them. Indeed, I’m convinced they can learn much more when they are able to reproduce within diverse groups the understandings and conventions that they routinely use inside more homogenous ones to discern who knows what about what. But evidence suggests that the processes useful to accomplish this widening of the bonds of authoritative certification of truth are time-consuming and effortful; people sensibly take the time and make the effort in various settings (in innovative workplaces, e.g., and in professions, which use training to endow otherwise diverse individuals with shared habits of mind). But we should anticipate that the default source of "who knows what about what" will for most people most of the time be communities whose members share their basic outlooks.

The dynamics of cultural cognition are most convincingly explained, I believe, as specific manifestations of the general contribution that cultural affinity makes to the reliable, everyday exercise of the ability of individuals to discern what is collectively known.  The scales we use to measure cultural worldviews likely overlap with a large range of more particular, local ties that systematically connect individuals to others with whom they are most comfortable and most adept at exercising their “who knows what they are talking about” capacities.

Normally, too, the preference of people to use this capacity within particular cultural affinity groups works just fine.

People in liberal democratic societies are culturally diverse; and so people of different values will understandably tend to acquire access to collective knowledge within a large number of discrete networks or systems of certification. But for the most part, those discrete cultural certification systems can be expected to converge on the best available information known to science. This has to be so; for no cultural group that consistently misled its members on information of such vital importance to their well-being could be expected to last very long!

The work we have done to show how cultural cognition can polarize people on risks and other policy-relevant facts involves pathological cases. Disputes over matters like climate change, nuclear power, the HPV vaccine, and the like are pathological both in the sense of being bad for people—they make it less likely that popularly accountable institutions will adopt policies informed by the best available information—and in the sense of being rare: the number of issues that admit of scientific investigation and that generate persistent divisions across the diverse networks of cultural certification of truth is tiny in relation to the number that reflect the convergence of those same networks.

An important aim of the science of science communication is to understand this pathology. CCP studies suggest that such pathologies arise in cases in which facts that admit of scientific investigation become entangled in antagonistic cultural meanings—a condition that creates pressures (incentives, really) for people selectively to seek out and credit information conditional on its supporting rather than undermining the position that predominates in their own group.

It is possible, I believe, to use scientific methods to identify when such entanglements are likely to occur, to structure procedures for averting such conditions, and to formulate strategies for treating the pathology of culturally antagonistic meanings when preventive medicine fails. Integrating such knowledge with the practice of science and science-informed policymaking, in my opinion, is vital to the well-being of liberal democratic societies.

But for the reasons that I’ve tried to suggest in the last two posts, this understanding of what the science of science communication can and should be used to do does not reflect the premise that cultural cognition is a bias. The discernment of “who knows what about what” that it enables is essential to the ability of our species to generate scientific knowledge and for individuals to participate in what is known to science.

Indeed, as I said at the outset, it is not correct even to describe cultural cognition as a heuristic. A heuristic is a mental “shortcut”—an alternative to the use of a more effortful, and more intricate mental operation that might well exceed the time and capacity of most people to exercise in most circumstances.

But there is no substitute for relying on the authority of those who know what they are talking about as a means of building and transmitting collective knowledge. Cultural cognition is no shortcut; it is an integral component in the machinery of human rationality.

Unsurprisingly, the faculties that we use in exercising this feature of our rationality can be compromised by influences that undermine its reliability. One of those influences is the binding of antagonistic cultural meanings to risk and other policy-relevant facts. But it makes about as much sense to treat the disorienting impact of antagonistic meanings as evidence that cultural cognition is a bias as it does to describe the toxicity of lead paint as evidence that human intelligence is a “bias.”

We need to use science to protect the science communication environment from toxins that disable us from using faculties integral to our rationality. An essential step in the advance of this science is to overcome simplistic pictures of what our rationality consists in. 

Part 1

Saturday
Jun 30, 2012

What I have to say about Chief Justice Roberts, and how I feel, the day after the day after the health care decision

Gave my talk at the D.C. Circuit Conference.  Slides here.

The Chief Justice didn’t arrive until the break between my session and his. Hey—the guy deserves to sleep in on the first day after the end of a tough Term.

I wouldn't have said exactly this had he been there, but I will say now that I feel a sense of admiration for, and gratitude toward, him.  I also feel impelled to say that in reflecting on this feeling I find myself experiencing a certain measure of anxiety--about myself.

The gratitude/admiration is not for Roberts’s supplying the decisive vote in the Affordable Care Act case, although in fact I was very pleased by the outcome.

It is for the contribution his example makes to sustaining a vital and contested understanding of the legal profession and of law generally.

Roberts in his confirmation hearing famously likened being a judge to being “an umpire.”

Judges saying what the law is must routinely employ forms of intellectual agency that umpires needn’t (shouldn’t) use in “calling balls and strikes.” But it’s not wrong to see judges as obliged in the same way umpires are to be neutral. Not at all.

There are comic-book conceptions of neutrality that are appropriately dismissed for characterizing as simple a form of practical reason that often demands acknowledging moral complexity.

There are sophisticated critiques of neutrality that are also appropriately dismissed for assuming the type of impartiality citizens expect of judges deciding cases is theoretically intricate rather than elemental and ordinary.

But to say that judicial neutrality is both meaningful and possible is not to say that it can be taken for granted. For one thing, it involves craft; legal training consists in large part of equipping people with the habits of mind and dispositions necessary for them to make reliable use of the tools that our legal regime (its doctrines and procedures) furnishes for assuring that the competing interests of citizens are reconciled in a manner that is meaningfully neutral with respect to their diverse conceptions of the best way to live.

 Yet even when that craft is performed in an expert way, judicial neutrality is immensely precarious. This is so because meaningfully and acceptably neutral decisions do not certify their own neutrality, any more than valid science certifies its own validity, in the eyes of the public.

Communicating neutrality is a different thing altogether from deciding cases neutrally, and the legal system is at this moment in much more need of insight into how to achieve the former than the latter. Members of the profession—including judges, lawyers, and legal scholars—should collaborate to create that insight by scientific means. That was what I was planning to say to Chief Justice Roberts—and was what I said to the (friendly and spirited) audience of judges and lawyers who got up so early to listen to me at their retreat.

But however ample the stock of knowledge for communicating neutrality is, it will be of no use without real and concrete examples. Comprehension is possible only with instances of excellence, which not only supply the focus for common discussion but also the models--the prototypes--that guide professionalized perception.

Chief Justice Roberts gave us a model on Thursday.

I don’t mean to say that was what he was trying to do—indeed, it would demean his craft skill to say that he meant to do anything other than decide. But the situation created the conditions for him to generate a distinctively instructive and inspiring example of neutral judging, one that will itself now supply a potent resource for a legal culture that perpetuates itself through acculturation of its members.

One of those elements was the surprise occasioned by the difference between what we know of Chief Justice Roberts’s jurisprudential orientation and the outcome he reached.  That’s something that should make it obvious to us that he must have surprised himself in the course of reasoning about the case. If it's not possible for someone to reason to a conclusion that jarringly surprises him- or herself, then such a person doesn’t really know how to be neutral.

Another element was the predictable sense of dismay that his decision generated in others who share many of Chief Justice Roberts’s commitments, moral and political as well as professional. What makes this so extraordinarily meaningful, moreover, has nothing at all to do with the exercise of “restraint” understood as some sort of willful resistance to temptation.

It has to do with habits of mind. Our cultural commitments simultaneously supply us with materials necessary to make sense of the world and expose us to strong forms of pressure to understand it in ways that can be partial, and sometimes even false in light of other aims and roles that define us.

It is part of the mission of legal training to supply habits of mind and dispositions of character that enable a decisionmaker to find insight elsewhere when judging, and to see when the way of making sense of the world that is cultural is inimical to the way of making sense of it that liberalism demands of a state official in reconciling the interests of people of diverse cultural identities. The way in which Chief Justice Roberts used these habits of mind and relied on these dispositions also makes his decision exemplary.

A final condition that makes Chief Justice Roberts’s decision such a rich instance of neutral judging is the position President Obama, when he was a Senator, took on Roberts’s confirmation. Obama, of course, voted against Roberts on grounds that were, candidly, political in nature: “I want to take Judge Roberts at his word that he doesn’t like bullies and he sees the law and the Court as a means of evening the playing field between the strong and the weak,” Obama said in his speech opposing Roberts’s confirmation, “[b]ut given the gravity of the position to which he will undoubtedly ascend and the gravity of the decisions in which he will undoubtedly participate during his tenure on the Court, I ultimately have to give more weight to his deeds and the overarching political philosophy that he appears to have shared with those in power than to the assuring words that he provided me in our meeting.”

I don’t think it’s obvious that Obama was mistaken to take the position that he did. Among the forms of intellectual agency that a judge must use and that a baseball umpire never has to are ones that partake of “political philosophy.” Roberts, I’m sure, knows this. But I’m pretty confident that Obama at the time knew, too, that it’s questionable whether Roberts’s political philosophy—even if Obama measured it correctly—was a proper basis to oppose him. There can be no defensible assessment of Obama’s position one way or the other that doesn’t reflect appreciation of the complexity of the question.

That episode, though, makes it all the more clear that Chief Justice Roberts was not affected by something that could easily have left him with a feeling of permanent resentment.  Not affected, that is, by something he might legitimately have felt (might still feel) as a person but that is not pertinent to him as a neutral judge deciding a case.

I admire the Chief Justice for displaying so vividly and excellently something that reflects the best conception of the profession I share with him. I am grateful to him for supplying us with a resource that I and others can use to try to help others acquire the professional craft sense that deriving and applying neutral principles of constitutional law demand.

And I’m happy that he did something that in itself furnishes the assurance that ordinary citizens deserve that the law is being applied in a manner that is meaningfully neutral with respect to their diverse ends and interests. They need tangible examples of that, too, because it is inevitable that judges who are expertly and honestly enforcing neutrality will nevertheless reach decisions that sometimes profoundly disappoint them.

It’s in connection with this last point that I am moved to critical self-reflection.

As I said, I admire Chief Justice Roberts and am grateful to him for reasons independent of my views of the merits of Affordable Care Act case. I honestly mean this.

But I am aware of the awkwardness of being moved to remark a virtuous performance of neutral judging on an occasion in which it was decisive to securing a result I support. Or at least, I am awkwardly and painfully aware that I can’t readily think of a comparable instance of virtuous judging that contributed to an outcome that in fact profoundly disappointed me. Surely, the reason can’t be that there has never been an occasion for me to take note of such a performance—and to remark and learn from it.

I have a sense that there are other members of my profession and of my cultural/moral outlook generally who share this complex of reactions toward Chief Justice Roberts’s judging.

I propose that we recognize the sense of anxiety about ourselves that accompanies our collegial identification with him as an integral element of the professional dispositions that his decision exemplifies.

It will, I think, improve our perception to harbor such anxiety. And it will make us less likely to overlook--or even unjustly denounce--the next judge whose neutrality results in a decision with which we disagree.

Wednesday
Jun 27, 2012

What should I say to Chief Justice Roberts the day after the health care decision?

So it turns out that I'm giving a talk at the annual "Judicial Conference" (a kind of summer retreat) of the U.S. Court of Appeals for the D.C. Circuit on Friday morning. The US Supreme Court -- unless something pretty weird happens -- will have issued its ruling on the constitutionality of the Affordable Care Act the day before (i.e., tomorrow, Thursday).  Speaking right after me (at least so it says on the schedule) ... Chief Justice Roberts.

I had been planning to give my standard talk on the Employee Retirement Income Security Act (ERISA), of course.  But it occurs to me maybe I should address some other topic?

How about the political neutrality of the Supreme Court?

I could start with this proposition: “The U.S. Supreme Court is a politically neutral decisionmaker.”

I don't know how the judges in the room will react -- will they laugh, e.g.? -- but I know that if I were talking to a representative sample of U.S. adults, the vast majority would disagree with me. In a poll from a couple weeks ago, only 13% of the respondents said the Justices decide cases "based on legal analysis," whereas 76% indicated that they believe the Justices "let personal or political views influence their decisions."

Granted, this was before the Court's 5-3 decision on the Arizona "show me your papers" law a couple days ago; maybe that restored the public's confidence?

But assuming not, I think I'll tell the judges, including Chief Justice Roberts, that I'm very confident that the public has no grounds for believing this.  

It's not that I know that the Justices are behaving like the "neutral umpires" that Chief Justice Roberts, in his confirmation hearing, pledged to be.

But I do have pretty good reason to think that even if the Court is deciding cases in a "politically neutral" fashion, most people wouldn't think it is -- because of cultural cognition.

In fact, if I were to give my "standard talk" on Friday, I'd discuss the contribution that cultural cognition makes to our society's "science communication problem."  

People can't determine through their own observations whether, say, the earth's temperature is or isn't increasing, or whether deep geologic isolation of nuclear wastes is safe or not. Rather they must rely on social cues to determine what facts have been authoritatively established by science.

In an environment in which positions on those facts become associated with opposing cultural groups, cultural cognition will impel diverse groups of citizens to construe those cues in opposing patterns. The result will be intense cultural conflict over the validity of evidence generated by experts engaged in good-faith application of valid scientific methods.

The Supreme Court (and the judiciary as a whole), I believe, have a comparable "neutrality communication" problem. Just as citizens can't resolve on their own complex empirical issues relating to environmental risks, so they can't determine on their own technical legal issues relating to the constitutionality of legislation like the Affordable Care Act and the Arizona "show me your papers" law. To figure out whether the Court is deciding these questions correctly, they must rely on social cues--their interpretations of which will be distorted by cultural cognition in the same manner as their interpretations of social cues relating to "scientific evidence" on risks like climate change and nuclear power.

The existence of widespread conflict over the neutrality of the Court is thus no better evidence that the Justices are politically biased, or their methods invalid, than widespread conflict over risk is evidence that scientists are biased or their methods invalid.

Or to put it another way, neutral decisions of constitutional law (ones made via the good-faith, expert application of professional norms appropriately suited for enforcement of individual liberties in a democratic society) do not publicly certify their own neutrality -- any more than valid scientific evidence publicly certifies its own validity.

Scientists now get that doing valid science and communicating it are two separate things -- and that the latter itself admits of and demands scientific understanding. The National Academy of Sciences' recent "Science of Science Communication" colloquium attests to that.

So I guess I'll ask Chief Justice Roberts, and his colleagues on the D.C. Circuit (who are really tremendous judges -- the judiciary's equivalent of MIT physicists), this: isn't it time for the legal profession to get that doing neutral constitutional law and communicating it are two separate things, too, and that the latter is something that also could be done better with the guidance of scientific understanding of how citizens in a diverse society know what they know?

Tuesday
Jun 26, 2012

Nullius in verba? Surely you are joking, Mr. Hooke! (or Why cultural cognition is not a bias, part 1)

Okay, this is actually the first of two posts on the question, “Is cultural cognition a bias?,” to which the answer is “well, no, actually it’s not. It’s an essential component of human rationality, without which we’d all be idiots.”

But forget that for now, and consider this:


Nullius in verba means “take no one’s word for it.”

It’s the motto of the Royal Society, a truly remarkable institution, whose members contributed more than anyone ever to the formation of the distinctive, and distinctively penetrating, mode of ascertaining knowledge that is the signature of science.

The Society’s motto—“take no one’s word for it!”; i.e., figure out what is true empirically, not on the basis of authority—is charming, even inspiring, but also utterly absurd.

“DON’T tell me about Newton and his Principia,” you say, “I’m going to do my own experiments to determine the Law of Gravitation.”

“Shut up already about Einstein! I’ll point my own telescope at the sun during the next solar eclipse, place my own atomic clocks inside of airplanes, and create my own GPS system to ‘see for myself’ what sense there is in this relativity business!”

“Fsssssss—I don’t want to hear anything about some Heisenberg’s uncertainty principle. Let me see if it is possible to determine the precise position and precise momentum of a particle simultaneously.”

After 500 years of this, you’ll be up to this week’s Nature, which will at that point be only 500 years out of date.

But, of course, if you “refuse to take anyone’s word for it,” it’s not just your knowledge of scientific discovery that will suffer. Indeed, you’ll likely be dead long before you figure out that the earth goes around the sun rather than vice versa.

If you think you know that antibiotics kill bacteria, say, or that smoking causes lung cancer because you have confirmed these things for yourself, then take my word for it, you don’t really get how science works. Or better still, take Popper’s word for it; many of his most entertaining essays were devoted to punching holes in popular sensory empiricism—the attitude that one has warrant for crediting only what one “sees” with one’s own eyes.

The amount of information it is useful for any individual to accept as true is gazillions of times larger than the amount she can herself establish as true by valid and reliable methods (even if she cheats and takes the Royal Society’s word for it that science’s methods for ascertaining what’s true are the only valid and reliable ones).

This point is true, moreover, not just for “ordinary members of the public.” It goes for scientists, too.

In 2011, three physicists won the Nobel Prize “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae.” But the only reason they knew what they (with the help of dozens of others who helped collect and analyze their data) were “observing” in their experiments even counted as evidence of the Universe expanding was that they accepted as true the scientific discoveries of countless previous scientists whose experiments they could never hope to replicate—indeed, whose understanding of why their experiments signified anything at all these three didn’t have time to acquire and thus simply took as given.

Scientists, like everyone else, are able to know what is known to science only by taking others’ words for it.  There’s no way around this. It is a consequence of our being individuals, each with his or her own separate brain.

What’s important, if one wants to know more than a pitiful amount, is not to avoid taking anyone’s word for it. It’s to be sure to “take it on the word” of  only those people who truly know what they are talking about.

Once this point is settled, we can see what made the early members of the Royal Society, along with various of their contemporaries on the Continent, so truly remarkable. They were not epistemic alchemists (although some of them, including Newton, were alchemists) who figured out some magical way for human beings to participate in collective knowledge without the mediation of trust and authority.

Rather, their achievement was establishing that the way of knowing one should deem authoritative and worthy of trust is the empirical one distinctive of science—a way of knowing at odds with those characteristic of its many rivals, including divine revelation, philosophical rationalism, and one or another species of faux empiricism.

Instead of refusing to take anyone's word for it, the early members of the Royal Society retrained their faculties for recognizing "who knows what they are talking about" to discern those of their number whose insights had been corroborated by science’s signature way of knowing.

Indeed, as Steven Shapin has brilliantly chronicled, a critical resource in this retraining was the early scientists’ shared cultural identity.  Their comfortable envelopment in a set of common conventions helped them to recognize among their own number those of them who genuinely knew what they were talking about and who could be trusted—because of their good character—not to abuse the confidence reposed in them (usually; reliable instruments still have measurement error).

There’s no remotely plausible account of human rationality—of our ability to accumulate genuine knowledge about how the world works—that doesn’t treat as central individuals’ amazing capacity to reliably identify and put themselves in intimate contact with others who can transmit to them what is known collectively as a result of science.

Now we are ready to return to why I say cultural cognition is not a bias but actually an indispensable ingredient of our intelligence.

Part 2

Saturday
Jun 23, 2012

Happy 100th birthday, Turing!

& thank you for thinking such cool things & sharing them!


Thursday
Jun 21, 2012

Politically nonpartisan folks are culturally polarized on climate change

I wrote a series of posts a while back (here, here, & here) on why our research group uses “cultural worldviews” rather than political orientation measures—like liberal-conservative ideology or political party affiliation—to test hypotheses about science communication and motivated reasoning. So I guess this post is a kind of postscript.

Drawing on a framework associated with the work of Mary Douglas and Aaron Wildavsky, we characterize ordinary people’s cultural worldviews—their preferences, really, about how society should be organized—along two cross-cutting dimensions: “hierarchy-egalitarianism” and “individualism-communitarianism.”  We then examine how having one or another of the sets of values these two dimensions comprise shapes people’s perceptions of risk or other policy-consequential facts.
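For concreteness, here is a minimal sketch of how two-dimensional scale scoring of this general kind works. The item names, coding, and data are invented for illustration; they are not CCP's actual worldview battery:

```python
import pandas as pd

# Hypothetical worldview items on 1-6 agree/disagree scales; names invented.
df = pd.DataFrame({
    "heq1": [1, 6, 3], "heq2": [2, 5, 4],  # hierarchy-egalitarianism items
    "ic1":  [5, 2, 3], "ic2":  [6, 1, 4],  # individualism-communitarianism items
})

# Each worldview score is just the mean of its items; higher values here
# mean "more hierarchical" and "more individualist," respectively.
df["hierarchy"] = df[["heq1", "heq2"]].mean(axis=1)
df["individualism"] = df[["ic1", "ic2"]].mean(axis=1)
print(df[["hierarchy", "individualism"]])
```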

Because they are unfamiliar with this framework (or more likely worry that their readers will be), commentators describing our work sometimes just substitute “liberal versus conservative” or  “Democrat versus Republican” for the opposing orientations that we feature in our studies.

This can obscure insight when the conflicting perceptions at issue can’t be fully captured by a one-dimensional measure. That was so, for example, in our recently published paper on perceptions of violence in political protests, which uncovered very distinct patterns of conflict between “hierarchical individualists” and “egalitarian communitarians,” on the one hand, and between “hierarchical communitarians” and “egalitarian individualists,” on the other.

The cost is smaller, I guess, when “liberal Democrat” and “conservative Republican” are substituted for “egalitarian communitarian” and “hierarchical individualist” in conflicts that do have a recognizable left-right profile. Climate change is like that.

But what’s still lost in this particular translation is how divided even politically moderate people are on climate change and other environmental issues.

In the figure below, I’ve graphed cultural worldview scores in relation to political orientation scores for members of a nationally representative sample. What these scatterplots show is that “hierarchy” and “individualism” are positively correlated with both “conservative” and “Republican,” but only modestly.

The “average” Hierarchical Individualist (that is, a person whose scores are in the top 50% on both the “hierarchy-egalitarian” and “individualism-communitarianism” scales) has political orientation scores equivalent to an independent who “leans Republican,” and who characterizes him- or herself as only “slightly conservative.”

Likewise, the “average” Egalitarian Communitarian (a person whose scores fall in the bottom 50% on both worldview scales) is an independent who “leans Democrat” and who characterizes him- or herself as only “slightly liberal.”
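A minimal sketch of that top-50%/bottom-50% operationalization, using simulated scores in place of real survey data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hierarchy": rng.normal(size=1500),      # hierarchy-egalitarianism score
    "individualism": rng.normal(size=1500),  # individualism-communitarianism score
})

# "Hierarchical individualists": top half on both scales;
# "egalitarian communitarians": bottom half on both.
hi = (df.hierarchy > df.hierarchy.median()) & (df.individualism > df.individualism.median())
ec = (df.hierarchy <= df.hierarchy.median()) & (df.individualism <= df.individualism.median())

df["group"] = "mixed"
df.loc[hi, "group"] = "hierarchical individualist"
df.loc[ec, "group"] = "egalitarian communitarian"
print(df.group.value_counts())
```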

Say we had no way to measure their cultural outlooks and all we knew about two people was that they were independents who “lean” in opposing directions and who characterize their respective ideological leanings as only “slight.” We’d certainly expect them to disagree on climate change, but not very strongly.

Yet in fact, the average Egalitarian Communitarian and average Hierarchical Individualist are extremely divided on climate change risks.

Indeed, they are more polarized than we’d expect two people to be if all we knew was that they rated themselves without qualification as being a “liberal Democrat” and a “conservative Republican,” respectively. (These points are illustrated with my crazy, insane infographic, below, which is based on the regression models to the right! These data are presented in greater detail in the Supplementary Information for our recently published Nature Climate Change article.)

This is just an elaboration—an amplification—of the theme with which I ended part 3 of the earlier series. There I defended what I called the “measurement” over the “metaphysical” conception of dispositional constructs.

We know, just from looking around and paying even modest attention to what we see, that people of “different sorts” disagree about climate change risks. But how to characterize the sorts, and how to measure the impact of being more or less of one than the other?

We could do it with liberal-conservative ideology and “Republican-Democrat” party affiliation. But those are relatively blunt, undiscerning measures of the dispositions in question.

Hierarchy-egalitarianism and individualism-communitarianism are much more discerning. In statistical terms they explain more variance; they have a higher R².
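Here is a toy simulation of that point. It assumes, purely for illustration, that ideology and worldview are two measures of the same latent disposition, with ideology the noisier of the two; the noisier measure explains less of the variance in risk perception:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
latent = rng.normal(size=n)                        # unobserved motivating disposition
worldview = latent + rng.normal(scale=0.5, size=n)  # more discerning measure
ideology = latent + rng.normal(scale=1.5, size=n)   # blunter measure
risk = -0.5 * latent + rng.normal(size=n)

for name, x in [("ideology", ideology), ("worldview", worldview)]:
    fit = sm.OLS(risk, sm.add_constant(x)).fit()
    print(name, "R^2 =", round(fit.rsquared, 3))   # worldview yields the higher R^2
```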

As a result, using the worldview measures allows one to locate the members of the population who are divided on climate change with much greater precision.

To observe as much polarization with political orientation measures as one sees with the worldview measures, one must ratchet the political orientation measures way up—toward their extreme values.

But that picture—of intense division only at the partisan extremes—is a gross distortion.

In fact, people who belong to American society's nonpartisan, largely apolitical middle are in the thick of the cultural fray. Tucked into the large mass of people who are watching America’s Funniest Pet Videos are folks every bit as polarized over climate change as the much smaller number of partisan zealots who are tuning into Maddow or Hannity.

One just has to know where to find them—or with what instrument to measure their motivating dispositions.

It's silly to argue about what's "really" causing polarization--"cultural worldviews” or “political ideology.” This metaphysical way of thinking implausibly imagines the two are distinct entities inside the psyche. Instead, they should be understood as simply alternative ways to measure some unobservable (latent) disposition that varies systematically across groups of people and that interacts with their perceptions of risk and related facts.

The only thing worth discussing is how good each is at measuring that thing. They actually are both reasonably good. But I’d say that the worldview measures are generally better than liberal-conservative ideology or party self-identification if the goal is to explain, predict, and formulate prescriptions.

The analysis here illustrates that. Using political orientation measures has the potential to conceal the extent to which even nonpartisan, nonpolitical, completely ordinary folk are polarized on climate change.

And if one can’t see and explain that, how likely is one to be able to come up with (and test the effectiveness of) solutions to this sad problem for our democracy?

Saturday
Jun 16, 2012

The "partisan abuse" hypothesis

A reader of our Nature Climate Change study asks:

I was wondering if the anti-correlation of scientific literacy with climate change understanding is muted or reversed as one moves into the middle of the Hierarchy-Egalitarian/Individualism-Communitarianism Axes? Did you consider dividing the group into quartiles for example rather than in halves? 

My response:

Interesting question.

To start, as you know, the negative correlation (itself very small) between science literacy (or science comprehension, as one might refer to the composite science literacy & numeracy scale) & climate change risk perception doesn't take account of the interaction of science comprehension with cultural worldviews. Once the interaction is measured, it becomes clear that the effect of increased science comprehension isn't uniformly negative; it's *positive* as individuals become more egalitarian & communitarian, & negative only as individuals become more hierarchical & individualist.

For this reason, I'd say that it is misleading to talk of any sort of "main effect" of science literacy one way or the other. By analogy, imagine a drug was found to decrease the lifespan of men by 9 yrs & increase that of women by 3 yrs. If someone came along & said, "the main effect of this drug is to *decrease* the average person's lifespan by 3 yrs; what an awful terrible drug, it should be banned!" I think we would be inclined to say, "no, the drug is good for women, bad for men; it's silly to talk about its effect on the 'average' person because people are either men or women." Similarly here: people vary in their worldviews, & the effect of science comprehension on their climate change views depends on the direction in which their worldviews tend.
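The arithmetic of the analogy, just to make it concrete:

```python
# Two equal-sized subgroups with opposite effects: -9 years for men, +3 for women.
men, women = -9.0, 3.0
main_effect = 0.5 * men + 0.5 * women
print(main_effect)  # -3.0 -- an "average" effect that is true of no actual person
```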

But that's not really important.

I understand your question to be motivated by the idea that the interaction between science comprehension & culture might itself be concentrated among people who have particularly strong worldviews. Perhaps the effect is uniformly positive for everyone except some small set of extremists (extreme hierarchical individualists, it would have to be). In other words, maybe only hard core partisans are using -- abusing, really -- their science comprehension to fit the evidence to their predispositions. That seems plausible to me, and definitely worth considering.

You are right that there is nothing in the analyses we reported that gets at this "partisan abuse" hypothesis. As you likely saw, the cultural worldview variables are continuous, and in our Figures we plotted regression estimates that reflected the influence of the culture/science comprehension interaction across the entire data set. That way of proceeding imposes on the data a model that *assumes* the interaction of science comprehension is uniform across both worldview variables -- "hierarchy-egalitarianism" & "individualism-communitarianism." We'd necessarily miss any evidence of the "partisan abuse" hypothesis w/ that model.

But we also did try to fit a polynomial regression model to the data. The idea behind that was to see if in fact the interaction between science comprehension & cultural worldviews seemed to vary w/ intensity of the cultural worldviews-- as the partisan abuse hypothesis implies. The polynomial regression didn't fit the data any better than the linear model, so we had no evidence, in that sense, that the interaction we observed was not uniform across the cultural dimensions.
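A minimal sketch of that kind of model comparison, on simulated data in which the interaction really is uniform (all coefficients invented; the real analysis used the study's own measures):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1500
wv = rng.normal(size=n)     # cultural worldview (continuous)
sci = rng.normal(size=n)    # science comprehension
risk = -0.3 * wv - 0.25 * wv * sci + rng.normal(size=n)  # uniform (linear) interaction

# Linear-interaction model vs. one adding polynomial terms that would let the
# interaction strengthen at the extremes of the worldview continuum.
X_lin = sm.add_constant(np.column_stack([wv, sci, wv * sci]))
X_poly = sm.add_constant(np.column_stack([wv, sci, wv * sci, wv**2, wv**2 * sci]))

lin = sm.OLS(risk, X_lin).fit()
poly = sm.OLS(risk, X_poly).fit()

# If the data were generated by a "partisan abuse" process, the polynomial
# terms would improve the fit; here they shouldn't.
print("linear AIC:", round(lin.aic, 1), " polynomial AIC:", round(poly.aic, 1))
```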

One could also try to probe the "partisan abuse" hypothesis by slicing the sample up into segments, as you suggest, and seeing if the effect of science comprehension differs for groups of people who are more or less extreme. But because such effects will always be lumpy in real data, there is a risk that any differences one observes among different segments along the continuum when one splits a continuous measure up into bits will be spurious. See Maxwell, S. E., & Delaney, H. D. (1993). Bivariate Median Splits and Spurious Statistical Significance. Psychological Bulletin, 113, 181-190 (this was one of the statistical errors in the scandalously idiotic "beautiful people have more daughters" paper).

Accordingly, it is better to treat continuous measures as continuous in the statistical tests -- and to include in the tests the right sorts of variables for genuine nonlinear effects, if one suspects the effects might vary across the relevant continuum. That's what we did when we tried a polynomial regression model out.

Still, let's slice things up anyway. Really, let's just *look* at the raw data -- something one always should do before trying to fit a model to them! -- to see if anything that looks as interesting as the "partisan abuse" dynamic is going on.

I've attached a Figure that enables that. It fits smoothed "lowess" regression lines to the risk perception/worldview relationship after splitting the sample at the median into "high" & "low" science comprehension groups. The lines, in effect, show what happens when one regresses risk perception on the worldview "locally"--on little segments of the sample along the cultural worldview continuum--for both types (high & low science comprehension) of subjects.
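For readers who want to reproduce the exercise, a sketch using statsmodels' lowess smoother on simulated data (the data-generating numbers are invented):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
n = 1500
wv = rng.normal(size=n)                  # worldview continuum
sci = rng.integers(0, 23, size=n)        # 0-22 composite science-comprehension score
high = sci > np.median(sci)              # median split: "high" vs. "low" comprehension
risk = -0.3 * wv - 0.25 * wv * high + rng.normal(size=n)

# Fit a local (lowess) regression of risk perception on worldview within each half.
for label, mask in [("low sci comp", ~high), ("high sci comp", high)]:
    smoothed = lowess(risk[mask], wv[mask], frac=0.6)  # sorted (x, fitted y) pairs
    print(label, "first smoothed points:", smoothed[:2])
```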


What we're looking for is a pattern that suggests the interaction of science comprehension w/ culture isn't really linear; that in fact, science literacy predicts more concern for everyone until you get to some partisan tipping point for subjects who are culturally predisposed to be skeptical by their intense hierarchy or individualism. I plotted a dashed line that reflects that for comparison.

I don't see it; do you? Both lines slope downward (cultural effect), the green one at a steeper grade (interaction), in roughly a linear way. The difference from perfectly linear is just the lumpy or noisy distribution of data you might expect if the "best" model were linear.

Am open to alternative interpretations or tests!

Oh, since we are on the subject of looking at raw data to be sure one isn't testing a model that one can see really isn't right, here's another picture of the raw data from our study.  It's a scatterplot of "hierarchical individualists" and "egalitarian communitarians" (those subjects who score either in the top 50% of both worldview scales or the bottom 50% on both, respectively) that relates their (unstandardized) science-comprehension score to their perception of climate change risk (on the 0-10 industrial strength measure).

I've superimposed a linear regression line for each. Just eyeballing it, it seems like the interaction of science comprehension & climate change risk perception is indeed more-or-less linear & about the same in its slope for both.
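A sketch of that eyeball check on simulated data: fit a straight line within each group and compare the slopes. The slopes here are invented; they merely mimic two groups trending in opposite directions with similar magnitudes:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
sci = rng.uniform(0, 22, size=n)  # unstandardized science-comprehension score

# Invented effects: risk falls with comprehension for hierarchical
# individualists and rises for egalitarian communitarians.
hi_risk = 5 - 0.12 * sci + rng.normal(scale=1.5, size=n)
ec_risk = 5 + 0.12 * sci + rng.normal(scale=1.5, size=n)

for label, y in [("hierarchical individualists", hi_risk),
                 ("egalitarian communitarians", ec_risk)]:
    slope, intercept = np.polyfit(sci, y, 1)
    print(label, "slope:", round(slope, 3))
```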

Thursday
Jun 7, 2012

How to teach *accepting* science's way of knowing, and how to provoke resistance...

Two days ago, thousands of kids were helped by their science teachers to catch sight of Venus passing as a little black dot across the face of the sun. They were enthralled & put in awe of our capacity to figure out that this would happen exactly when it did (their teachers told them about brilliant Kepler and his calculations; & if it was cloudy where those kids were, as it was where I happened to be, the teachers likely consoled them, "hey -- same thing happened to poor Kepler!").

We should expect about 46% of them to grow up learning to answer "yes" if Gallup calls and asks them whether they think "God created the world on such & such a date."

But if they have retained a sense of curiosity about how the world works that continues to be satisfied -- in a way that continues to exhilarate them! -- when they get to participate in knowing what is known as a result of science, should we care?  I don't think so.

But if they learn, too, that in fact they shouldn't turn to science to give them that feeling -- or if they just become people who can no longer feel it -- because they live in a society in which they are held in contempt by the 54% who have learned to say "of course not! I believe in evolution!" -- even though the latter group of citizens would in fact score no better, and would more than likely fail, a quiz on natural selection, random mutation, and genetic variation -- that would be very, very sad.

Wednesday
Jun 6, 2012

What Can We Make of the New Pew Poll?

A new Pew Poll, highlighted by TPM, purports to find that party identification is increasingly useful for predicting respondents' cultural values, even as the polarizing effects of race, income, religiosity, and gender have been static over the last 25 years.  Indeed, while in 1987 party identification predicted about an average amount of attitude polarization, it now dominates. To put it in terms that a data analyst would appreciate: partisan identity now explains more of the variance in attitudes than any other factor, and possibly more than most of the rest combined.

What does this mean? The big picture story is partisan realignment along value dimensions, itself coincident with, or resulting from, a number of factors.  (The causal story is complex -- you could say that this is all about the death of the Democratic party in the South and its ripples, but that, it seems to me, is a bootstrapped explanation.)  But if you drill down, the data are fascinating -- and Gallup helpfully provides some great analysis tools.

From what I can tell, on important cultural measures of interest to the CCP team, the public at large hasn't changed in material ways since 1987.  That immediately should cause us to ask some questions about the cognitive illiberalism thesis, which, briefly, posits that motivated cognition poses an increasingly important problem for our ability to reason together liberally.  Look at the scores for questions that should matter to CCP scales, like:

-government regulation of business does more harm than good;

-women's traditional roles;

-too far in pushing equal rights;

-corporate profits too high.

I don't notice secular trends.  Do you?  By contrast, check out the public's views on redistribution: they've cratered!  (Probably coincident with the passing of the greatest generation.)

These flat lines are weird, because I think that we would have predicted increasing differences in the population over time, as individuals became better able to control the flow of information that they received; to create virtual communities (and identities) by choice; to segregate into phyles without ever leaving the home.

Here's the $1,000,000 challenge: if we'd wound the tape back to 1987, wouldn't we have predicted increasing polarization over time on the questions that formed the bases of our scales?  We certainly have said in public that our scales aren't meant to measure some fixed, biological orientation: they are culturally and temporally contingent.  I certainly don't see how we would have predicted what actually happened, which was a wash overall for cultural polarization, and instead a reorientation of Americans into more cohesive political parties.  Two thoughts follow:

1.  Though it's often thought of as bad for politics (and our ability to get along), it's not obvious to me that partisanship is the same kind of evil that Dan so persuasively flagged in The Cognitively Illiberal State. To argue that very narrowly footed political parties are bad for civic discourse would require us to say that Britain and France and Germany and other Western European countries are marked by lower levels of civic engagement, happiness, and cohesiveness than we are, which is a tough claim to make, to say the least!  But maybe that's not right -- perhaps partisan reorientation and cohesion work to reinforce identity formation in a pernicious way.

2.  Regardless of the correctness of the analysis above, I think the Project should think and write more about its predictive story.  For instance: to the extent that we are finding intense cultural valence on global warming, was that divergence inevitable, or did it result from some factor extrinsic to our research (like strategic behavior)?  Why hasn't the GM food movement produced the same public emotion as global warming?  Why was the question of corporate manager salary considered a values question in the 1930s, but isn't today? Would we have predicted these results?

Wednesday
Jun 6, 2012

The evolution debate isn't about science literacy either

A few days ago Gallup released a poll showing that 46% of Americans "hold creationist views."

The almost universal reaction -- among folks that I have contact with; I am very aware that that sample is biased, in a selection sense -- was "what is wrong with our science education system?!"

Well, lots of things, but the contested state of evolution is actually not a consequence of any such deficiencies -- or at least not of deficiencies in "science education" understood as the propagation of comprehension of what is known by science.

In this sense, the evolution controversy is very much like the climate change one, which, we concluded in our Nature Climate Change study, also is not a consequence of low science comprehension.

Those who study public understanding of science have a better way to investigate the impact of science comprehension here than simply to correlate science literacy & "acceptance" of evolution.  They examine whether those who "accept" have a better grasp of the basic science of evolution than those who "reject."

They don't. There is simply no correlation between "accepting" evolution and understanding concepts like natural selection, random mutation, and genetic variation -- the core of the "modern synthesis" position on evolution.

That is, those who "reject" are as likely to understand those concepts as those who "accept" evolution. In fact, those who accept aren't very likely to understand them in absolute terms. They "accept" what they don't really understand.

This isn't really cause for alarm. Individuals can't possibly be expected to be able to understand and give a cogent account of all the things known by science. Yet they accept zillions of such things that are indeed essential to their living a good life, or even just living (antibiotics kill bacteria; drinking raw milk can make you very very very sick; a GPS system can reliably tell you where you are & how to get someplace else ... ).   

But the critical point here is that scientific comprehension isn't what causes those who accept evolution to accept it or those who reject it to reject it.

What does is their willingness to assent to science's understanding of what's known here as the authoritative account of what's known. Those who "accept" evolution are accepting that. Those who resist aren't.

Moreover, those who resist it on evolution aren't resisting across the board. They accept plenty of things -- orders of magnitude more things -- as known because science says so than they reject.

Evolution is a special kind of issue. The position you take on it is an expression of who you are in a world in which there are many diverse sorts of people and in which there is a sad tendency of one sort to ridicule and hold in contempt those of another.

So here is an interesting moral question, I think. Is the goal of "science education" to impart knowledge only, or should it aim to propagate acceptance, too?

I think it is morally appropriate, in a liberal democratic society, for the state  to promote the greatest degree of basic science knowledge (what Jon Miller calls "civic science literacy") as possible. Citizens must possess that sort of knowledge in order for them to participate meaningfully in public life and for democracy to have any prospect of using the great amount of scientific knowledge at its disposal to make its members healthy, safe, and prosperous.

But I really am not sure that the goal of science education, at least when it is provided by the state, is to make those who know what is known to science also accept it -- that is, assent to science as authoritative to say what is known.

In fact, I have a strong intuition that that sort of goal is profoundly incompatible with the basic premises of political liberalism, which obliges the state to respect the power of individuals to form their own view of the meaning of life.

I do indeed believe that people should accept the authority of science to certify what is known on issues--all issues--that admit of scientific inquiry. However, my sense is that this is a goal to be promoted by discussion and deliberation among free citizens reasoning with one another and not a position that should be propagated as a moral or political orthodoxy by institutions of the state.

Still, I don't mean to insist on this point. I find it difficult. I would actually be grateful to hear what thoughtful people have to say on it.

I'll be satisfied for now so long as we see and get clear on the point that knowing what is known by science is different from accepting it.

People who make mistakes about what science literacy does & doesn't cause are unlikely to be effective in conveying what is in fact known by science.

And they are also likely to fail to think seriously about the complicated moral questions that state propagation of acceptance distinctively poses.

References

Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Miller, J.D., Scott, E.C. & Okamoto, S. Science communication: public acceptance of evolution. Science 313, 765-766 (2006).

Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006).

 

Friday
Jun 1, 2012

More on the statistical illiteracy/inanity of trying to figure out "yeah, but whose is bigger"

So I said in the last post that the "57% vs. 56%" factoid can't be read to indicate that one side has better comprehension of science.

It was (quite reasonably) pointed out to me in a comment that our Nature Climate Change study reported statistically significant correlations between science literacy and numeracy (and the composite science comprehension scale that aggregated them), on the one hand, and climate-change risk perception, on the other. What bearing does that have on the issue of who has greater science comprehension-- the "yeah, but whose is bigger?" question--in the climate change debate?

To start, there's no tension between the statistical computations here. If one looks at the correlation between the continuous science comprehension (science literacy plus numeracy) scale and climate change risk perception, it's negative (r = -0.09) and significant (t-statistic = -3.35). But the difference in the mean scores of the most concerned and least concerned halves of the sample is not significant. That can certainly happen when one splits the sample and treats two continuous measures as categorical ones (the opposite can happen, too).
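A quick simulation shows the mechanism (the effect size and sample size are invented): a weak continuous correlation can clear the significance threshold while the same data, median-split into halves, yield a weaker group comparison that can land on the other side of it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 1500
sci = rng.normal(size=n)                 # science comprehension
risk = -0.09 * sci + rng.normal(size=n)  # weak negative relationship

r, p = stats.pearsonr(sci, risk)
print("continuous: r =", round(r, 3), "p =", round(p, 4))

# Split into "most concerned" and "least concerned" halves and compare mean
# science comprehension; dichotomizing discards information, so the p-value
# is typically larger and, near the threshold, can cross to non-significance.
concerned = risk > np.median(risk)
t, p2 = stats.ttest_ind(sci[concerned], sci[~concerned])
print("median split: t =", round(t, 2), "p =", round(p2, 4))
```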

But that's particularly likely to happen when the correlation between the continuous variables is tiny. That's so here.

People who are numerate are likely to suspect that r = -0.09 (r = -0.05 for science literacy by itself!) is small (which is how the paper characterizes this effect)--way too small to be responsible for the intensity of the climate change debate in our society.

But it's actually pretty bad craft to expect anyone to figure out whether an effect size is meaningful from bare correlation coefficients. Readers should be shown the effect in some way that conveys its practical importance.

The question under investigation in our study was what explains climate change conflict--differences in science comprehension or differences in cultural outlooks? One shouldn't really have to know statistics to see the answer in a figure like this:

 

I won't say anything more about the difference between statistical significance and practical significance because there's an excellent post that addresses it in the context of science comprehension and climate change at the Blackboard, in appreciation of & gratitude for which I am posting this:

 

But that leaves room to discuss another, and to my mind more interesting, point about statistical illiteracy that is reflected in obsessing over that effect: it's meaningless in real-world terms.

The principal finding was that science comprehension interacts with cultural predispositions: individuals who are predisposed by their group values to skepticism become more skeptical, and those predisposed to concern become more concerned, as science comprehension increases.

So it is in fact misleading to characterize greater science comprehension as having any “main effect” toward either skepticism or acceptance. It has one or the other depending on other characteristics.

The only thing the "main effect" really conveys in these circumstances is the frequency of the two types (or maybe the intensity of the effect in one or the other, if it varies meaningfully) in one's sample. Moreover, that's true even if one's "sample" is in fact a census: if the correlation when one looks at the population as a whole is negative, then there are simply more people out there predisposed (and/or more strongly predisposed) to fit the evidence to a "skeptical" conclusion; and if the correlation is positive, then the number predisposed (or predisposed more intensely) to see evidence as justifying concern is greater. End of story.
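A toy simulation makes the point vivid (all numbers hypothetical; the 55/45 mix and the ±0.5 slopes are invented for illustration):

```python
# Toy simulation (hypothetical numbers, not our data): opposing group-specific
# slopes average into a pooled "main effect" that merely reflects the group mix.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
skeptic = rng.random(n) < 0.55            # assume 55% predisposed to skepticism
sci = rng.normal(size=n)                  # standardized science comprehension
slope = np.where(skeptic, -0.5, +0.5)     # concern falls with sci in one group, rises in the other
concern = slope * sci + rng.normal(size=n)

pooled = np.polyfit(sci, concern, 1)[0]   # slope from a regression ignoring group
print(f"pooled 'main effect' estimate: {pooled:+.3f}")
print(f"weighted average of slopes:    {0.55 * -0.5 + 0.45 * +0.5:+.3f}")
# Change the 55/45 mix and the "main effect" changes sign -- without any change
# in how science comprehension actually operates within either group.
```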

I talked to a researcher recently who tried to convince me that one should read a small positive correlation between science literacy and some other issue that had an interaction with ideology as meaning that when one "controls" for ideology, science literacy increases concern. I kept trying to tell him to think about what he was actually modeling and how silly it is to describe the sample "mean" as the "effect controlling for" something that interacts with a characteristic that varies systematically in people in the real world.

I felt like I was arguing with the guy from Spinal Tap who kept saying, "mine goes to 11."

Wednesday
May302012

Who has a better comprehension of science--"skeptics" or "nonskeptics"?

Neither, as far as I can tell.

This wasn’t a question we tried to answer directly or reported data on in our Nature Climate Change paper.

But I have been asked a few times now about a Fox News report on our study that states that those who are less concerned about climate change scored "57%" and those who are more concerned "56%" on our measure of science comprehension.

I am guessing the reporter derived the conclusion from this graphic, which is one I produced and circulated to people, including the reporter, in response to questions about a working paper that reported data from the study ultimately published in NCC.

It shows the mean or average number of correct responses on the combined science literacy/numeracy scale (a measure of "science comprehension," essentially) for study subjects whose responses put them in the top 50% & bottom 50% of the sample on "climate change risk perceptions," respectively.

The bottom 50% got, on average, 12.6 out of 22 correct. The top 50% got 12.3.

The "56%" & "57%" figures are not in the Figure--or in anything else related to our study. But they are the numbers one gets when one divides 12.3 & 12.6 by 22, respectively.  

As the Figure shows, this difference is not statistically significant. Not even close. Indeed, I put the graphic together so that I could answer the stock "who knows more" query -- I call it the "yeah, but whose is bigger" question -- by saying "no one, see!"

If there are people out there (apparently there are; I'm getting lots of email...) who think this is meaningful evidence that one side knows more than the other about science, they really are missing the point. In fact, they are making the kind of mistake that helps explain how it is that the "smarter" half of the population gets a score of 57% on a measure like this.

The gap between those who know more science and those who know less doesn't explain conflict over climate change science in our society.

But it's beyond question that the low average state of science literacy is a condition that detracts from our capacity for enlightened self-government.

Monday
May282012

"How confident should we be ..."

A thoughtful journalist asks in relation to our Nature Climate Change study:

It would be really helpful to get your reflection on the research.   In particular, I'm interested in the polarising effect you were able to identify. From the figure (Fig.2) this appears to be quite subtle, albeit in the opposite direction to that which was predicted by the SCT thesis.   It would be great if you could identify to what extent/how confident we can be to say that increasing numeracy and literacy polarises risk perception about climate change, and what can explain this polarisation.

This was such a thoughtful way of putting the question, I felt impelled -- only in part by OCD; one who asks so good a question deserves more than an imprecise, casual response -- to give a reasonably precise & detailed answer:

1. All study results are provisional. That's in the nature of science. Valid studies give you more evidence than you otherwise would have had to believe something. They never "settle" the issue; one continues to revise one's assessment of what to treat as true and how likely it is not to be as more valid studies, more valid evidence, accumulates. Forever & ever (Popper 1962).

So it is never sensible (it is a misunderstanding of the nature of empirical proof) to say, "this study proves this" or "this study doesn't necessarily prove that," etc. Instead it is very sensible to ask, as you have, "how confident should we be" in a particular conclusion given the evidence presented in a particular study. (A toy calculation that makes this sort of updating concrete appears after point 7 below.)

2. As you know, our study investigated two hypotheses: the science comprehension thesis (SCT), which attributes public conflict over climate change to deficits in science comprehension; and the cultural cognition thesis (CCT), which asserts that conflict over climate change is a consequence of the unconscious tendency of individuals to fit their beliefs about risk to positions that dominate in their group, and which in its strongest form would say that this tendency will be reinforced or magnified by greater science comprehension, which can be used to promote such fitting.

3. The study furnishes relatively strong evidence that SCT is incorrect. SCT would predict that cultural polarization abates as science comprehension increases. Even if we had found that the impact of science comprehension on cultural polarization was nil, the study would supply the basis for a high degree of confidence that public conflict over climate change is not a consequence of low science comprehension.

4. The study is consistent with CCT and furnishes modest evidence that CCT in its strongest form is correct. That position would predict that cultural polarization will be greater among individuals with the greatest science comprehension. The results fit that hypothesis -- on both climate change & nuclear power risks; the latter helps to furnish more reason to think that the effect is a genuine one for climate change.

But I'd say only modest evidence, mainly because of the design of the study. It's observational -- correlational -- only. Observed correlations that fit a hypothesis supply supporting evidence in proportion to the degree to which they rule out other explanations. Maybe something else is going on that causes both increased science comprehension & increased polarization in certain people. The only way to tell is through (well designed) experiments. We are conducting some now.

5. You note the effect size of the interaction is modest. Maybe; it's hard to know how to characterize such things in the abstract (and realize, too, that polarization is so great even for low-comprehending respondents that it would be hard for it to grow much for high-comprehending respondents!).

The size of the interaction effect we observed is probably about what you would expect for an observational study, and if the source of the effect is CCT, it should be easy to produce much more dramatic effects through properly designed experiments (Cohen, Cohen, West & Aiken 2003, pp. 297-98). So rather than try to extract more information from the effect size about how confident or not to be in the strong CCT position, it makes sense to do experiments. Again, that's what we are now doing.

6. By itself, then, the study furnishes only modest reason to be confident in CCT (in its strongest form) relative to other possibilities (one has to be able to identify such possibilities, of course, in order to have any reason to doubt CCT; I can think of possibilities, certainly). I myself am more than modestly confident -- but only because this study is not the only thing I count as evidence that (strong) CCT is correct.

7. An aside: Nothing in our study suggests that making people more science literate or numerate causes polarization. If CCT is correct, there is something about climate change (and certain other issues) that makes people try to maximize the fit between their beliefs and positions that predominate within their groups, which themselves are impelled into opposing stances on certain facts. That thing is the cause in the practical, normative sense. We should find it and get rid of it.
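To make the updating logic in point 1 -- and the "more than modestly confident" judgment in point 6 -- concrete, here is the promised toy calculation (the likelihood ratios are entirely hypothetical; they illustrate the logic, not the actual strength of any study):

```python
# Toy Bayesian updating (hypothetical numbers): a valid study is a likelihood
# ratio, and confidence is what accumulates as such studies multiply in.
def update(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds in favor of a hypothesis after seeing new evidence."""
    return prior_odds * likelihood_ratio

odds = 1.0                  # start indifferent between strong CCT and alternatives
for lr in [3.0, 2.0, 1.5]:  # hypothetical likelihood ratios from successive studies
    odds = update(odds, lr)
    print(f"posterior odds {odds:4.1f}:1 -> probability {odds / (1 + odds):.2f}")
# No single study "settles" it; each valid one just moves the needle.
```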

references:

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3rd ed.). Mahwah, N.J.: L. Erlbaum Associates.

Popper, K. R. (1962). Conjectures and refutations; the growth of scientific knowledge. New York: Basic Books.

 

Sunday
May272012

Climate change polarization "fast and slow"

Our study on the effects of science literacy and numeracy on climate change risk perceptions is now out in Nature Climate Change. We find that individuals who display high comprehension of science (i.e., those who score higher in science literacy and numeracy) are in fact more culturally polarized than those who display low science comprehension.

I've commented before on how these data relate to the popular surmise that seeming public ambivalence toward evidence on climate change reflects the predominance of what Kahneman (in his outstanding book Thinking, Fast and Slow, among other places) calls "system 1" reasoning (emotional, unconscious, error-prone) on the part of members of the public.

Our findings don’t fit that popular hypothesis. On the contrary, they show that individuals disposed to use system 2—conscious, reflective, deductive—reasoning (a disposition measured by the numeracy scale) are even more culturally divided than those disposed to use system 1.

The interesting thing is that Kahneman himself recognized just last week that system 2 as well as system 1 might be implicated in climate change conflict.

In his Sackler Lecture (strongly recommended viewing) at the National Academy of Sciences' Science of Science Communication Colloquium (say that three times fast), Kahneman explicitly commented on the connection between his theory of dual-process reasoning and cultural cognition.

He recognized that one would expect, consistent with system 1, that ordinary members of the public would fit their perceptions of climate change risk to emotional resonances, which themselves might vary systematically across persons with diverse values.

At the same time, however, Kahneman argued against assuming system 2 would sort this disagreement out. Often “system 2 is just the spokesperson for system 1,” he said. In other words, people are likely to recruit their systematic, “slow” reasoning skills when necessary to reach the conclusion they prefer and not rely only on “fast” heuristic ones.

The point of the study, in fact, was to pit two plausible alternative hypotheses about cultural cognition and dual-process reasoning against one another.

One attributes the influence of cultural values on risk perception to system 1, viewing cultural cognition as essentially a heuristic substitute for the ability to comprehend complicated scientific evidence.  Our findings (including the absence of any overall connection between science literacy and climate change concern) undermine that view.

The other hypothesis views cultural cognition as a species of motivated reasoning that is as likely to shape system 2 as system 1. Our finding of increased polarization among the most science-comprehending members of our sample lends support to this position.

In the paper, we suggest that the alliance between cultural cognition and system 2 is actually perfectly rational at an individual level. Ordinary members of the public have a much bigger stake in forming views that match those of their peers on controversial issues than they do in getting the science right on climate change: making a mistake on the latter has zero impact on the risks they face (nothing they do as individual voters or consumers matters enough to make a difference), but screwing up the former can result in their being shunned by people whose emotional and material support they covet.

So everyone tries to fit the evidence to positions that predominate in his or her group. And those who know a lot of science and are good at technical reasoning do an even better job.

The result is a tragedy -- of the risk-perception commons -- and it occurs whether people reason "fast" or "slow."
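A toy expected-payoff calculation (every number invented for illustration) shows how lopsided the individual's stakes are:

```python
# Toy expected-payoff sketch of the risk-perception commons (all numbers invented)
p_pivotal = 1e-8        # chance one person's belief changes the collective outcome
climate_stakes = 1e6    # value (arbitrary units) of society getting climate policy right
group_stakes = 10.0     # value of retaining peers' emotional and material support

conform = group_stakes + p_pivotal * 0        # fit views to the group: keep esteem
dissent = 0.0 + p_pivotal * climate_stakes    # follow the evidence: lose esteem, ~no effect on outcome
print(f"expected payoff, conform to group:    {conform:.4f}")
print(f"expected payoff, follow the evidence: {dissent:.4f}")
# Individually rational for everyone -- and collectively a tragedy once such
# beliefs aggregate into democratic policymaking.
```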

Still, once we have determined through systematic thought and actual evidence that system 1 alone is not to blame, we can then turn to identifying (again, through empirical testing; creative guessing is good only for hypotheses) what sorts of communication strategies might enable culturally diverse citizens to use their reasoning in a manner that benefits them all.

citation:

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

 

 

Thursday
May242012

I see "They Saw a Protest"

"They Saw a Protest": Cognitive Illiberalism and the Speech-Conduct Distinction just came out in the Stanford Law Review.

The article--which was a team effort involving me, David "Shining a Light" Hoffman, Danieli "I'll Have Another" Evans, Donald "Shotgun" Braman & Jeff "Bear Claw" Rachlinski--features an experiment that tests the impact of cultural cognition on perceptions of facts relevant to the line between "speech" and "conduct" under the First Amendment.

Experiment subjects were assigned to play the role of jurors in a case in which protesters are suing the police for breaking up the protestors' demonstration. The police, subjects were told, claim the protestors were threatening onlookers and blocking their access to a building. The protestors say they were just engaged in impassioned advocacy.

The parties agree that the key piece of evidence is a video of the protest. The subjects are instructed to watch the video and then report what they saw and determine whether it counts as "threatening," "intimidating" or "blocking" under a specified law.

The experimental manipulation involved the supposed nature of the protest. Half the subjects were told that the protestors are demonstrating against abortion rights in front of an abortion clinic. The other half were told that the protestors are objecting to the military's then-existing "Don't ask, don't tell" policy outside a college campus recruitment center.

Consistent with our hypotheses, we found that what subjects saw depended on whether the position the protestors were represented to be taking was congenial or hostile to the subjects' own cultural outlooks. Thus, egalitarian individualists disagree with hierarchical communitarians who are in the same experimental condition (either "abortion clinic" or "military recruitment center") but disagree with other egalitarian individualists who are in the opposing experimental condition.

The disagreement, moreover, is over facts -- like whether the protestors "screamed in the face" of pedestrians and blocked them from entering the clinic/recruitment center.

This is a problem for the First Amendment, which tries to impose an obligation of state neutrality by confining regulation of putative expression to harms that can be defined independently of any negative reaction people might have toward the speaker's ideas. People have a hard time applying this rule, we find, because they are unconsciously motivated to see these sorts of "noncommunicative harms" -- like threats, intimidation, blocking -- when behavior conveys an idea that offends their values.

The study was patterned on a classic 1950s study in social psychology entitled "They Saw a Game." In it, researchers found that students from two Ivy League colleges were more likely to see the penalty calls of a referee as correct or incorrect depending on whether the rule violation was being attributed to their college's football team or its opponent. This was probably the first experimental demonstration of "motivated reasoning."

The most fun part of doing the study was making the movie. We tried really hard but couldn't find any stock footage of demonstrations that could plausibly be described as either an abortion protest or a military recruitment center protest. People who engage in one tend to look very different from the other.

Fortunately (for us), members of the infamous Westboro Baptist Church came to town (Cambridge, Massachusetts, in the winter of 2009). When they show up to preach hate against gays and lesbians, so do massive numbers of counterdemonstrators.

We managed to cull quite a number of useable scenes from 90 minutes of footage, and were able to confirm in a pretest (of judges and lawyers!) that viewers would believe whichever of the stories we told them about what the demonstration was about, and where it occurred.

Then in an even greater stroke of luck, the U.S. Supreme Court granted review in a case in which the parents of a soldier, at whose funeral the Church members had demonstrated, were awarded $5 million in damages. The Court overturned the verdict on the ground that the parents' emotional distress was a noncognizable "communicative harm" under the First Amendment.

We were able to kick out a timely study result showing that if a state now passes a law prohibiting groups like the Westboro Church from "intimidating" funeral attendees, the jury's factual determinations will likely be unconsciously guided by the very sorts of things the Court said were not proper bases for damages in the Westboro case. Oh well!

Actually, our point is that it's not enough (maybe not even of any use) to have a doctrine that seems great as a matter of political philosophy if that doctrine imposes psychologically unrealistic demands on decisionmakers.

Constitutional law needs a dose of psychological realism. 

Monday
May212012

NAS says: Listen to the science of science communication

National Academy of Science President Ralph Cicerone (foreground) & Nobelist Daniel Kahneman during the Q&A that followed Kahneman's (outstanding) lecture.

This picture really captures it, I think.

The NAS's Science of Science Communication Sackler Colloquium is modeling what the practice of science & science-informed policymaking needs to do: start listening to the science of science communication, the foundational insights of which reflect the work of Kahneman (and Amos Tversky, Paul Slovic & Baruch Fischhoff, among others) on risk perception.

I feel very optimistic today!

 

Sunday
May202012

Protecting the science communication environment: sneak preview

 

Am embarking soon (was supposed to already; small travel misadventure) for NAS Science of Science Communication colloquium. Attached are slides that I'm sending my co-panelists & commentators (I think they'd like a text but I don't speak from one, or use notes, when doing a talk).

Probably will have to shrink it -- so maybe this is "director's cut" as well as "sneak peek."

 

But if you have time on your hands, tune in (my talk is Tues. @3:15; agenda for event here).

Thursday
May172012

The science of protecting the science communication environment

Am giving a talk on Tuesday at the NAS's Sackler Colloquium on the Science of Science Communication. Was asked to submit an "executive summary" for the benefit of commenters. This is it:

The Science of Science Communication and Protecting the Science Communication Environment

Promoting public comprehension of science is only one aim of the science of science communication and is likely not the most important one for the well-being of a democratic society. Ordinary citizens form quadrillions of correct beliefs on matters that turn on complicated scientific principles they cannot even identify much less understand. The reason they fail to converge on beliefs consistent with scientific evidence on certain other consequential matters—from climate change to genetically modified foods to compulsory adolescent HPV vaccination—is not the failure of scientists or science communicators to speak clearly or the inability of ordinary citizens to understand what they are saying. Rather, the source of such conflict is the proliferation of antagonistic cultural meanings. When they become attached to particular facts that admit of scientific investigation, these meanings are a kind of pollution of the science communication environment that disables the faculties ordinary citizens use to reliably absorb collective knowledge from their everyday interactions. The quality of the science communication environment is thus just as critical for enlightened self-government as the quality of the natural environment is for the physical health and well-being of a society’s members. Understanding how this science communication environment works, fashioning procedures to prevent it from becoming contaminated with antagonistic meanings, and formulating effective interventions to detoxify it when protective strategies fail—those are the most critical functions science communication can perform in a democratic society.

In my remarks, I will elaborate on this conception of the science of science communication. I will likely illustrate my remarks with reference to findings on formation of HPV-vaccine risk perceptions, culturally biased assimilation of evidence of scientific consensus, the polarizing impact of science literacy and numeracy on climate change risk perceptions, and experimental forecasting of emerging-technology risk perceptions.  I’ll also describe the necessity of public provisioning to assure the quality of the science communication environment, which like the quality of the physical environment is a collective good that is unlikely to be secured by spontaneous private ordering.

If any of the other panelists would like to form a more vivid impression of my remarks, they might consider taking a look at:

1. Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010); and

2. Kahan, D.M., Wittlin, M., Peters, E., Slovic, P., Ouellette L.L., Braman, D., Mandel, G. The Tragedy of the Risk-Perception Commons: Culture Conflict, Rationality Conflict, and Climate Change. CCP Working Paper No. 89 (June 24, 2011).

Wednesday
May162012

Is Cultural Cognition Culture-Specific? 

Is cultural cognition culturally specific?  

I just read a great piece over on the PLoS Blog about the cultural specificity of many purportedly universal psychological biases / mechanisms.  As an example, the blog uses the famous Müller-Lyer Illusion.  You probably know of it.  In the image below, many people see the line on the right as longer than the one on the left.  

For almost a hundred years, social psychologists thought this a universal illusion.  It turns out, though, that this illusion is actually acute only in those who live in modern urban environments -- environments where straight lines, flat sides, and sharp corners are common.  When, in 1966, Marshall H. Segall conducted a study across cultural groups, he found tremendous variation (as illustrated in the graph below). 

For folks who are interested in the phenomenon of cultural cognition, this raises an interesting question: Is cultural cognition itself culture-bound? The answer, I think, is either "probably yes" or "probably no" depending on what is meant by "culture-bound".

The "probably yes" answer obtains if one were to try to use the same value measures across highly distinct cultural groups.  There is no reason to believe that San foragers or the Fang are divided over the questions that comprise the cultural value measures we use to distinguish US subjects from one another.  It wouldn't make sense (at least without more evidence) for us to presume our measures are universal.  

But that isn't really what the PLoS Blog post is about. It asks whether the underlying phenomenon itself is generalizable. One could broaden the way that such illusions are characterized in order to account for visual training and local adaptations. Do people view depth-cues that are relevant to their conceptual contexts? The newly recast "local cues for depth perception" bias could still plausibly be universal.

The phenomenon of cultural cognition, I would argue, is closer to the latter than the former.  It is one in which people develop factual beliefs that support or are consistent with their preferred social orderings (typically with the life-ways and values of their in-groups given high status).  If viewed this way, the answer is "probably no" because the theory derives from observations by anthropologists across many different cultural groups.  (I can't say "definitively no" or even "almost certainly no" since we haven't done extensive work across these non-Western cultural groups ourselves.)  More recently, a more general form of this has been studied as "motivated cognition" by social psychologists.  For cultural cognition as a general concept to be culture-bound, the phenomenon of motivated cognition itself would have to be culture-bound.  And, because the idea of motivated cognition is something that we use to describe differences in belief-formation across cultures, it would be very hard to construe it as culture-bound as well.  

But then again, it may be that my sample is too limited -- indeed, motivated cognition would suggest that I would be particularly motivated not to notice contrary evidence! Perhaps it just seems obvious to me that everyone sees the world as shorter or longer as befits their preferred social order when, in fact, there are some groups who do not.

But one thing we can be fairly certain of: these groups would have to be very distinct from the main groups involved in various forms of culture wars in the United States.  As Dan has pointed out in numerous posts at this point, there is very strong evidence that whatever cultural groups might be immune to cultural cognition, they are not the cultural groups who are involved in popular political debates in this country.  Your cultural adversary may fall foul of cultural cognition, but the fact that you have cultural adversary suggests that you are just as likely to yourself.