
Recent blog entries
Saturday
May 24, 2014

Weekend update: You'd have to be science illiterate to think "belief in evolution" measures science literacy

It's been soooo long -- at least 3 weeks! -- since I last did a post on the relationship between "belief in evolution" & "science literacy."

That's just not right!  Plus I have some cool new data on this issue.

But let's start with a reprise of the basics -- because one can never overstate how aggressively ignored they are by those who flip out & let loose with a toxic stream of ignorance & cultural zealotry every time a polling organization announces the "startling" news that nearly 50% of the US public continues (as it has for decades) to say "no" when asked whether they believe in evolution (in addition, if one asks how many of the "believers" subscribe to a "naturalistic" or Darwinian view as opposed to a "theistic" variant, the proportion plummets all the more -- for "Democrats" as well as "Republicans," blah blah blah).

First, there is zero correlation between saying one "believes" in evolution & understanding the rudiments of modern evolutionary science.

Those who say they do "believe" are no more likely to be able to give a high-school-exam-passing account of natural selection, genetic variance, and random mutation -- the basic elements of the modern synthesis -- than those who say they "don't" believe.

In fact, neither is very likely to be able to, which means that those who "believe" in evolution are professing their assent to something they don't understand.

That's really nothing to be embarrassed about: if one wants to live a decent life -- or just live, really -- one has to accept much more as known by science than one can comprehend to any meaningful degree.

What is embarrassing, though, is for those who don't understand something to claim that their "belief" in it demonstrates that they have a greater comprehension of science than someone who says he or she "doesn't" believe it.

Second, "disbelief" in evolution poses absolutely no barrier to comprehension of basic evolutionary science.

Fantastic empirical research shows that it is very very possible for a dedicated science educator to teach the modern synthesis to a secondary school student who says he or she "doesn't believe" in evolution.  

The way to do it is to do the same thing that one should do for the secondary school student who says he or she does believe in evolution & who, in all likelihood, doesn't understand it: by focusing on correcting various naive misconceptions that have little to do with belief in the supernatural, etc., & everything to do with the ingrained attraction of people to functionalist sorts of accounts of how natural beings adapt to their environments.

The thing is, though, even after acquiring knowledge of the modern synthesis -- likely the most awe-inspiring & elegant, not to mention astonishingly useful, collection of insights that human reason has ever pried loose from nature -- the bright kid who before said "no" when asked if he or she "believes" in evolution is not any more likely to say that he or she now "believes" it.

Indeed, confusing "comprehension" with profession of "belief" is a very good way to assure that those kids who are disposed to say they "don't believe" won't learn these momentous insights.

As Lawson & Worsnop observed in the conclusion of their classic study (the one that presented such amazingly cool evidence on how to teach evolution in a way that excited kids of all cultural outlooks to want to learn it), 

[E]very teacher who has addressed the issue of special creation and evolution in the classroom already knows that highly religious students are not likely to change their belief in special creation as a consequence of relative brief lessons on evolution. Our suggestion is that it is best not to try to do so, not directly at least. Rather, our experience and results suggest to us that a more prudent plan would be to utilize instruction time, much as we did, to explore the alternatives, their predicted consequences, and the evidence in a hypothetico-deductive way in an effort to provoke argumentation and the use of reflective thought. Thus, the primary aims of the lesson should not be to convince students of one belief or another, but, instead, to help students (a) gain a better understanding of how scientists compare alternative hypotheses, their predicated consequences, and the evidence to arrive at belief and (b) acquire skill in the use of this important reasoning pattern-a pattern that appears to be necessary for independent learning and critical thought.

There are actually some who say in response, "Not good enough; it is essential not merely to impart knowledge but also to extract a profession of belief too!"

When someone says that, he or she helps us to see that there are actually illiberal sectarians on both sides of the "evolution in education" controversy in this society.

Third -- and here we are getting to the point where the new data come in! -- profession of "belief" in evolution is simply not a valid measure of science comprehension.

This is very much related to what I have already recounted but is in fact a separate point.

Because imparting basic comprehension of science  in citizens is so critical to enlightened democracy, it is essential that we develop valid measures of it, so that we can assess and improve the profession of teaching science to people.

What should be measured, in my view, is a quality of ordinary science intelligence -- not some inventory of facts ("earth goes 'round the sun, not other way 'round -- check!") but rather an ability to distinguish valid from invalid claims to scientific insight and a disposition to use, in one's own decisions, science's signature style of inference from observation.

The National Science Foundation has been engaged in the project of trying to formulate and promote such a measure for quite some time. A few years ago it came to the conclusion that the item "human beings, as we know them today, developed from earlier species of animals," shouldn't be included when computing "science literacy."

The reason was simple: the answer people give to this question doesn't measure their comprehension of science. People who score at or near the top on the remaining portions of the test aren't any more likely to get this item "correct" than those who do poorly on the remaining portions.

What the NSF's evolution item does measure, researchers have concluded, is test takers' cultural identities, and in particular the centrality of religion in their lives.

Predictably, the NSF was forced to back off this position by a crescendo of objections from those who either couldn't get or didn't care about the distinction between measuring science comprehension and administering a cultural orthodoxy test. The NSF regularly notes the controversy but prudently distances itself from taking any position on its significance.

But those of us who don't have to worry about whether taking a stance will affect our research budgets, who genuinely care about science, and who recognize the challenge of propagating widespread comprehension and simple enjoyment of science in a culturally pluralistic society (which is, ironically, the type of political regime most conducive to the advance of scientific discovery!) shouldn't equivocate.

We should insist that science comprehension be measured scientifically and point out the mistakes -- myriads of them -- being made by those who continue to insist that professions of "belief" in evolution are any sort of indicator of that.

I've reported some evidence before in this blog that reinforces the conclusion that "belief" in evolution is a measure of who people are and not what they know.

Well, here's some more.

Following up on a super interesting tidbit from the 2014 NSF Science Indicators, I included alternate versions of the conventional NSF Indicator "evolution" item in a science comprehension battery that I administered to a large (N = 2000) nationally representative sample earlier this month.

One was the conventional "true-false" statement, "Human beings, as we know them today, developed from earlier species of animals.”

The second simply added to this sentence the introductory clause, "According to the theory of evolution, ..."

The NSF had reported on a General Social Survey (GSS) module from a few years ago that found that the latter version elicits a much higher percentage of "true" responses.

Well, sure enough.   

As the Figure at the top of the post shows, the proportion who selected "true" jumped from 55% on the NSF item to 81% on the GSS one!

Wow!  Who would have thought it would be so easy to improve the "science literacy" of benighted Americans (who, leaving aside the "evolution" and related "big bang" origin-of-the-universe items, already tend to score better on the NSF battery than members of other industrialized nations).

Seriously: as a measure of what test takers know about science, there's absolutely no less content in the GSS version than in the NSF one.  Indeed, anyone asked to explain why "true" is the correct response to the NSF version who failed to connect the answer to "evidence consistent with the theory of evolution ..." would be revealed to have no idea what he or she is talking about.

The only thing the NSF item does that the GSS item doesn't is entangle the "knowledge" component of the "evolution" item (as paltry as it is) in the identity-expressive significance of "positions" on evolution.  
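For the statistically curious, here is a minimal sketch of how one might confirm that a gap of that size couldn't plausibly be sampling noise. The 50/50 split between the two wordings is an assumption for illustration only; the exact condition sizes aren't reported here.

```python
# Rough check that the NSF-vs-GSS wording gap far exceeds sampling error.
# Counts are illustrative, assuming ~1000 respondents saw each wording.
from statsmodels.stats.proportion import proportion_confint, proportions_ztest

n_nsf, n_gss = 1000, 1000                # assumed condition sizes
true_nsf = round(0.55 * n_nsf)           # 55% "true" on the standard NSF wording
true_gss = round(0.81 * n_gss)           # 81% "true" on the "According to ..." wording

z, p = proportions_ztest([true_gss, true_nsf], [n_gss, n_nsf])
print(f"difference = {true_gss / n_gss - true_nsf / n_nsf:.2f}, z = {z:.1f}, p = {p:.2g}")
print("95% CI, NSF wording:", proportion_confint(true_nsf, n_nsf, method="wilson"))
print("95% CI, GSS wording:", proportion_confint(true_gss, n_gss, method="wilson"))
```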

Want some more evidence? Here you go:


This figure shows the relationship between the probability of a "true" response to the respective versions of the question conditional on "religiosity" & "science comprehension." (The figure graphically reports the results of a regression model. If you want to see the raw data, click on the inset to the left ("Click me--I will make you more science literate, I swear!").)

The former was measured by aggregating into a scale responses to items on self-reported frequency of church attendance, frequency of prayer, and importance of God (α = 0.87).

The latter was formed by combining the NSF's science indicator battery (excluding the "evolution" item, to avoid circularity) with a set of Numeracy and Cognitive Reflection Test items. The NSF indicators, a collection of "true-false" items, can be seen as assessing knowledge of elementary facts; the additional items assess the sorts of reasoning skills -- including, in particular, the disposition and ability to make valid inferences from quantitative and other forms of information -- that a person needs in order reliably to acquire scientific knowledge.

The items cohere nicely, forming a highly reliable unidimensional scale (α = 0.84), which I scored with an item response theory model. 
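For concreteness, here is a minimal sketch of that sort of scale construction. The item names and data file are hypothetical, and the standardized sum score at the end is only a rough stand-in for the item response theory scoring actually used.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondent-by-item matrix of responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical column names standing in for the actual survey items.
religiosity_items = ["church_attendance", "prayer_frequency", "importance_of_god"]
sci_comp_items = ["nsf_q1", "nsf_q2", "nsf_q3", "numeracy_1", "numeracy_2", "crt_1"]

df = pd.read_csv("survey.csv")  # hypothetical data file

print(f"alpha, religiosity: {cronbach_alpha(df[religiosity_items]):.2f}")
print(f"alpha, science comprehension: {cronbach_alpha(df[sci_comp_items]):.2f}")

# Standardized (z-scored) sum scores; a 2PL IRT model, as used for the actual
# scale, would weight items by difficulty and discrimination instead.
for name, items in [("religiosity", religiosity_items), ("sci_comp", sci_comp_items)]:
    raw = df[items].sum(axis=1)
    df[name] = (raw - raw.mean()) / raw.std()
```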

Indeed, the main reason for collecting data on the GSS and NSF variants of the evolution item was to see what the frequency of "true" responses to them would reveal about the item's relative connection to religious identity and science comprehension.

These data answer that question.

The panel on the left confirms that the NSF item does indeed measure religious identity, not scientific knowledge.

Or maybe one can see it as indicating science comprehension for relatively secular folks, since in them one sees what one would expect if that were the case--namely, that the probability of answering "true" goes up as people become progressively more comprehending of science.

But the probability of answering "true" doesn't go up -- if anything it goes down -- as individuals who are above average in religiosity become more science comprehending.  That's manifestly inconsistent with any inference that the answer to the question indicates the science comprehension of people with a more religious identity. (In case you were wondering -- and it's perfectly reasonable to -- there was a fairly minor negative correlation -- r = -0.17, p < 0.01 -- between religiosity and science comprehension.)

Now behold the panel on the right!

Here we do see exactly what one would expect of an item that indicates (i.e., correlates with, because it's presumably caused by) science comprehension -- an increasing probability of answering "true" -- for both non-religious and religious individuals!

By adding the introductory clause, "According to the theory of evolution," the GSS question disentangles ("unconfounds" in psychology-speak) the "science knowledge" component and the "identity expressive" components of the item.
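Here is a minimal sketch of the kind of model behind the figure, assuming a data frame like the hypothetical one sketched above, with a wording indicator and a 0/1 response variable; all names are illustrative.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

# Probability of answering "true" as a function of religiosity, science
# comprehension, and their interaction, fit separately to each wording of
# the item. Variable and file names are hypothetical.
df = pd.read_csv("evolution_item_experiment.csv")

for version in ["nsf", "gss"]:
    subset = df[df["version"] == version]
    fit = smf.logit("answered_true ~ religiosity * sci_comp", data=subset).fit(disp=False)
    print(version, fit.params.round(2).to_dict())

# The modest negative correlation between religiosity and science
# comprehension mentioned above (r = -0.17) is a one-line check:
r, p = pearsonr(df["religiosity"], df["sci_comp"])
print(f"r = {r:.2f}, p = {p:.3g}")
```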

Gee, Americans aren't that dumb after all!

Or maybe they are; this is too easy a question if one wants to figure out whether Americans or anyone else really knows anything about science: some 80% of the respondents answer it correctly -- a figure that rapidly approaches 100% among those of even middling science comprehension.

So ditch this question & substitute for it one more probative of genuine science comprehension -- like whether the test taker actually gets natural selection, random mutation, and genetic variance, which are of course the fundamental mechanisms of evolution and which kids with a religious identity can be taught just as readily as anyone else.

Or actually, how about this.

Instruct the test taker to reflect on the graph above and then respond to the item, 

"'Belief in evolution' is a valid measure of a person's science literacy," true or false?

Thursday
May 22, 2014

What is to be done? Let's start with why ... a fragment

From still another thing I'm working on that is distracting me from my main job--writing blog entries:

You asked me to describe what I want to do.  I think I’m more likely to convey that if I start with an account of why.

Two things concern me.  The first is the failure of professions that exist to enlarge, disseminate, and exploit the insights of valid empirical inquiry to use those methods to improve their own proficiency in enlarging, disseminating, and exploiting scientific knowledge.  Call this the “meta-empiricism spectacle” (MS).

Call the second problem “Popper’s revenge” (PR). Cultural pluralism makes liberal democratic societies uniquely congenial to the advancement of scientific inquiry; at the same time, however, it multiplies the occasions for polarizing forms of status conflict between the cultural groups within which diverse citizens necessarily come to know what’s known.  This dynamic puts at risk citizens’ enjoyment of both the promise of tolerance and the enormity of knowledge that are the hallmarks of liberal democratic societies.

MS and PR interact.  As a result of their failure to apply empirical methods to themselves, the professions that traffic in empirical knowledge—from conservation advocacy groups to government regulatory agencies, from science journalists to public health professionals, from educators to judges—fail to negotiate the forms of illiberal status competition that impede public recognition of what’s known to science.

I want to help address these problems....

But in any case, you now have a sense of why; so here is what I want to do.

I am intent on stimulating and being a party to the creation of as many projects as possible aimed at creating “evidence-based practices” within the professions most responsible for assuring reliable recognition of what science knows by the culturally diverse individuals and groups whose welfare such knowledge can enhance....

Wednesday
May 21, 2014

More on public "trust of scientists": *You* tell *me* what it means!

Okay, so I've done a good number of posts on "trust" in science/scientists. The basic gist of them is that I think  it's pretty ridiculous to think that any significant portion of the US public distrusts the authority of science -- epistemic, cultural, political, etc. -- or that partisan divisions in regard to trust in science/scientists can plausibly explain polarization over particular risks or other policy-relevant facts that admit of scientific inquiry (vice versa is a closer call but even there I'm not persuaded).

So here's some more data on the subject.

It comes from a large (N = 2000) nationally representative survey administered as part of an ongoing collaborative research project by the Annenberg Public Policy Center and CCP (it's a super cool project on reasoning & political polarization; I've been meaning to do a post on it -- & will, "tomorrow"!).

The survey asked respondents to indicate on a 6-point "agree-disagree" Likert measure whether they "think scientists who work" (or in one case, "do research for") in a particular institutional setting "can be trusted to tell the public the truth."

The institutions in question were NASA, the CDC, the National Academy of Sciences, the EPA, "Industry," the military, and "universities."

We had each subject evaluate the trustworthiness of only one such group of scientists.

Often researchers and pollsters ask respondents to assess the trustworthiness of multiple groups of scientists, or of scientists generally in relation to multiple other groups.

One problem with that method is that it introduces a "beauty pageant" element in which respondents rank the institutions.  If that's what they're doing, one might conclude that the public "trusts" a group of scientists or scientists generally more than they actually do simply because they trust the others even less.

So what did we find?

I'll tell you (just hold on, be patient).  

But I won't tell you what I make of the findings. 

Do they support the widespread lament of a creeping "anti-science" sensibility in the U.S.?  

Or the claim that Republicans/conservatives in particular are anti-science or less trusting of science than they were in the past?

Or do they show "the left" is in fact "anti-science" -- as much so as or more than "the right," etc.?

You tell me!

Actually, I'm sure everyone will come to exactly the same conclusion on these questions.  Here as elsewhere, the facts speak for themselves!







Tuesday
May 20, 2014

The "generalizability problem" -- a fragment

From something I'm working on (one of many things distracting me from this blog; I've experienced a curious inversion recently in procrastination diversions....)

One of the major challenges confronting the science of science communication is generalizability.  This problem is obvious when researchers engage in  lab experiments. By quieting the cacophony of uncontrollable real-world influences, such experiments enable the researcher to isolate and manipulate mechanisms of interest, and thus draw confident inferences about their significance, or lack thereof. But how, then, can one know whether the effects observed in these artificially tranquil conditions will hold up in the chaotic real-life environment from which the researcher sought refuge in the lab? 

It would be a mistake, though, to think that this difficulty reflects some fatal defect in laboratory methods.  And not just because such methods do indeed play an indispensable role in the formation of communication strategies that can subsequently be tested outside the lab. For any empirical testing that occurs in the field must also confront the question of generalizability: how is one to know that what worked in one distinctively messy real-world setting will work in another distinctively messy one?

The generalizability problem is central to the motivation for our proposal.  Disturbingly, a large fraction of researchers offering counsel to conservation advocates and policymakers simply ignore this issue altogether. 

But just as bad, a large fraction of the remainder try to address it in the wrong way.  They believe that the goal of empirical research is to identify a fixed set of universally effective “techniques” or “best practices” that can, with the benefit maybe of cartoon-illustrated instruction manuals, be confidently and more-or-less thoughtlessly applied by communicator "consumers." 

But in fact, the only technique of the science of science communication that generalizes—the sole valid “best practice” it has to offer—is its method. Successful lab experiments and field studies alike do enlarge understandings of how the world works. But how the insights they generate can be brought successfully to bear on any new problem will always be a question that those promoting science-informed conservation policymaking will have to answer for themselves.  The only way they can reliably do so, moreover, is by using empirical methods to adapt what the science of science communication knows to the distinctive circumstances at hand.  

Perfecting knowledge of how to use empirical methods in the everyday practice of conservation-science communication—so that the generalizability issue will always be confronted and confronted effectively—is the whole point of the proposed ....


Sunday
May 18, 2014

"Energy future 2030" talk (slides, video)

Thursday
May 15, 2014

Some "pathological" public risk perceptions & a whole bunch of "normal" ones

From slides in a talk I'm about to give at a biotech conference in Syracuse.  Political differences (or lack thereof) in the top slide & "science comprehension" magnification of the same (or lack thereof) in the bottom.

More later -- but if anyone wants to offer their own views in the meantime, feel free!

Tuesday
May 13, 2014

So much for that theory . . . (fracking freaks me out  #2)

Huh.

So having been freaked out to discover how pervasively polarized members of the public appear to be about fracking despite knowing nothing about it, I resolved to do a little experiment.

In the previous data collection, I had measured perceptions of fracking risks using the "industrial strength measure," which solicits a rating of how "serious" a societal risk some activity poses to "human health, safety, or prosperity."

My thought was that maybe what had generated such a strong degree of polarization might be the wording of the item, which asked subjects to supply such a rating for "fracking (extraction of natural gas by hydraulic fracturing)."

I figured maybe this language -- the sort-of-"dirty"-sounding word "fracking" and the references to "extraction" (sounds like a painful and invasive procedure to subject mother Nature to) & "natural gas" ("boo" if you have an egalitarian, "game over, capitalists!" sensibility; "yay" if you have an individualist, "yes we can, forever & ever & ever!" one) -- would be sufficient to alert the ordinary Americans who made up the sample (most of whom likely wouldn't have been able to define fracking without this clue) that this was an "environmental" issue. That would be enough to enable most of them to locate the issue's position on the "cultural theory of risk" map, particularly if they were above average in science comprehension and thus especially skilled at fitting information to their cultural identities.

So I thought I'd try an experiment.  Administer the same measure but vary the description of the putative risk source: in one condition, it would be called simply "fracking"; in another, it would be referred to as "shale oil gas production"; and in a third, the risk source would be identified as it was in the earlier survey -- "fracking (extraction of natural gas by hydraulic fracturing)."

I figured that relative to the third group, those in the first (plain old "fracking") would be less polarized, and those in the second ("shale oil gas production"; sounds harmless!) would be the least agitated of all.

Actually, I was modeling this experiment loosely on Sinaceur, M., Heath, C. & Cole, S., Emotional and deliberative reactions to a public crisis: Mad cow disease in France, Psychol Sci 16, 247-254 (2005), a great study in which the investigators showed that lab subjects formed affect- or emotion-pervaded judgments when evaluating risk information relating to "Mad Cow disease" but formed more analytical, calculative ones when the information referred to either "bovine spongiform encephalopathy (BSE)" or "a variant of Creutzfeldt-Jakob disease (CJD)" instead.

Well, here's what I found:

 


Click on the image for a closer inspection, but basically, the difference in effect associated with the variation in wording, while "in the direction" hypothesized, was way too small for anyone to think it was practically meaningful.

Same thing for the influence of the wording on the interaction between political outlooks (measured with a right-left scale) and science comprehension (measured with a cool composite of substantive knowledge & critical reasoning measures; more on that "tomorrow"): 
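For concreteness, here's a minimal sketch of the kind of comparison involved, assuming hypothetical variable names: the wording condition is interacted with political outlook and science comprehension, and the question is whether those interaction terms amount to anything.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Regress the "industrial strength" risk rating on the wording condition,
# the right-left outlook scale, science comprehension, and their interactions.
# All variable and file names are hypothetical.
df = pd.read_csv("fracking_wording_experiment.csv")

model = smf.ols(
    "risk_rating ~ C(condition, Treatment('fracking_full')) * left_right * sci_comp",
    data=df,
).fit()

# If the wording mattered, the condition terms and the condition-by-outlook
# (and three-way) interactions would be sizable; as reported above, they
# were too small to be practically meaningful.
print(model.summary())
```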

So much for that theory.

But I have another one!  

All this agitation about fracking, I'm convinced, is really a battle between those who do & those who don't recognize the supreme value of local democratic decisionmaking!

 

 

Tuesday
May 6, 2014

What to think about "How You Say It" — an empirical study of aporetic judicial reasoning

D. Evans, atop "Aporia," before this year's Kentucky Derby

A "CCP journal club!" report from D. Evans:

"Aporia" is a mode of reasoning that shows the author comprehends “an issue’s intractable complexity.” 

Too often, judicial opinions addressing complex value questions are anything but aporetic. While the public is deeply divided over the issue, judicial opinions often “effect a posture of unqualified, untroubled confidence” in the outcome. This “[h]yperbolic certitude” might undermine the legitimacy of the opinion with the losing side, making it seem as though the decisionmaker was biased or unwilling to recognize the strength of arguments supporting the losing side’s position.

In addressing how courts can assure citizens of the law's neutrality, my CCP colleagues and I have conjectured that judicial decisions might reduce cultural polarization and garner acceptance from the losing side by abandoning the norm of reasoning as if the answer is obvious, indisputable, and certain.

Instead, if a court were to recognize (a) the difficulty (even intractability) of the problem, and (b) the strength of the losing side’s case, perhaps the losers would be more likely to perceive the opinion as a legitimate one; one that took their concerns and arguments deeply into account. If the losing side sees its concerns and arguments were thoroughly considered in the decision, it might also be more open to accepting the arguments that prevailed in the outcome. I have long thought about testing this hypothesis that aporetic reasoning would reduce cultural polarization over a controversial ruling.

So I was really excited to read Rob Robinson’s empirical study on exactly this point: It’s How You Say It – Ameliorating Cultural Cognition of Judicial Rulings Through Aporetic Reasoning.

Robinson's study follows a few others with promising implications for the aporia hypothesis: Tom Tyler's research, described here, finds that public views about the legitimacy of legal authority are influenced by the procedural justice and by the distributive justice of the outcomes, but less affected by the favorability of the outcome. Dan Simon and Nicholas Scurich, Lay Judgments of Judicial Decision-Making, have found that people tend to agree more with decisions recognizing good reasons support either side of the case than decisions that only recognize the value of one side's position. They also find that an opinion giving no reasons is more persuasive than one including a single, curt reason. (Simon and Scurich's findings rebuffed a preexisting hypothesis called ‘placebic reasoning’ – that people are more likely to credit decisions or actions when backed by reasons, even if those reasons are entirely redundant (i.e., asking to cut in line for a copy machine was less credible than asking to cut in line for a copy machine and providing a redundant reason, “because I have to make copies.”)).

While these studies support the aporia hypothesis, Robinson is the first (to my knowledge) to frame his testing in terms of aporia, specifically.

Robinson conducted an experiment designed to test how members of the public would react to more and less aporetic versions of a judicial decision contrary to their own position on gay marriage.

The study subjects, 619 individuals representing a mix of university students and Amazon’s MTurk workers, were assigned to one of three mirror-image conditions.

In the “control” condition, subjects read a newspaper article describing a judicial decision that examined whether homosexuality should be recognized as an “immutable” (i.e., unchosen, and unalterable) trait. The story reported the court’s conclusion—either “no,” if subjects said they supported gay marriage; or “yes,” if they said they opposed it—and nothing more.

In the “monolithic” condition, the article includes a quote from the court’s opinion in which the court defends its reasoning by remarking that an “objective reading of the evidence leads to no other conclusion.”  The court explains that it is obliged to reject the position supported by the study subject—either that homosexuality is “immutable,” in the version of the article shown to gay-marriage supporters; or that it is not, in the version shown to gay-marriage opponents—on the ground that there is “no clear scientific consensus” in favor of that view.

In the “aporetic” condition,  the news story quotes language from the opinion evincing a more nuanced stance.  The quoted language chides one side or the other—either  “those who believe homosexuality is a choice” for “often ignor[ing] evidence [to the contrary]” or  “those who argue sexual orientation is fixed or unchanging” for “often overstat[ing] their case.”  The court nevertheless justifies a ruling in favor of the scolded side on the ground that a court is powerless to deem matters otherwise in the face of uncertain evidence.

Robinson reports that subjects found the court’s reasoning more persuasive in both the “monolithic” and “aporetic” conditions  than in the control. In other words, the subjects were least disappointed by the decision when they were told the court had given an explanation for rejecting their position. 

In the view of the subjects who oppose gay marriage, the aporetic opinion was even more persuasive than the monolithic one.

But for those who support gay marriage, the persuasiveness of the decision did not differ significantly among those assigned to “aporetic” and “monolithic” conditions, respectively.

Opponents of same-sex marriage rated their disagreement with the three forms of the pro-same-sex-marriage decision on a scale of 1 ("extremely agree") to 6 ("extremely disagree") as follows (means): Control 4.16; Monolithic 4.10; Aporetic 3.53. For opponents of same-sex marriage, then, the monolithic opinion was about .06 less disagreeable than the control, and the aporetic one was about .63 less disagreeable. Supporters of same-sex marriage rated the three forms of the anti-same-sex-marriage decision as follows (means): Control 4.58; Monolithic 4.46; Aporetic 4.36. Among supporters of same-sex marriage, the monolithic opinion was about .12 less disagreeable than the control, and the aporetic one was about .22 less disagreeable.
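A quick arithmetic check of the raw differences implied by those means (the regression-based effect estimates reported below differ somewhat):

```python
# Differences from control, computed from the means reported above.
means = {
    "opponents of same-sex marriage":  {"control": 4.16, "monolithic": 4.10, "aporetic": 3.53},
    "supporters of same-sex marriage": {"control": 4.58, "monolithic": 4.46, "aporetic": 4.36},
}
for group, m in means.items():
    d_mono = m["control"] - m["monolithic"]
    d_apo = m["control"] - m["aporetic"]
    print(f"{group}: monolithic {d_mono:.2f} less disagreeable than control, "
          f"aporetic {d_apo:.2f} less disagreeable")
```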

This is a super valuable study!

I particularly liked the way in which Robinson distilled the aporetic reasoning into a few quotes set within the framework of a newspaper article. There is much that is innovative about his design, and his study makes me eager to design a follow-up study along these lines. In thinking about how to do so, I have been pondering several questions about the design of this study:

  • One puzzling aspect of his findings is that supporters of same-sex marriage were overall more negative about all three forms of the opinion ruling against it, and they found the aporetic version only slightly less disagreeable, while the aporetic opinion significantly reduced the extent to which opponents of same-sex marriage disagreed with a pro-same-sex-marriage decision. (The effect of the aporetic treatment on the anti-same-sex-marriage group's disagreement was -0.592, while the effect of the aporetic treatment on the pro-same-sex-marriage group's disagreement was only -0.150.)

    Why were supporters of same-sex marriage overall more resistant to crediting the contrary opinion, and why was their disagreement less mitigated by aporia? Robinson states this might be caused by the sample of those who favor same-sex marriage being larger (N pro-same-sex marriage=496, N anti-same-sex= 161). (But the larger sample should supply the more significant result if the phenomenon exists, not the less significant one.) He also posits that the difference in reaction may result from "those who favor gay marriage simply having a stronger reaction to empirical claims regarding immutability than those who are opposed." P. 18.

    It could be the case that supporters of same-sex marriage are categorically more rigid in their position, and less willing to credit a contrary ruling regardless of its reasoning.

    But I'd posit another possible explanation. Perhaps the pro-same-sex marriage group's rigid disagreement relates to their views on the relevance of whether homosexuality is immutable, as opposed to an extra-strong belief that same-sex marriage should be allowed. It seems that there may be many egalitarian individuals like me who think that same-sex marriage should be allowed regardless of whether it is immutable. I think any constitutionally protected individual liberty should be an impermissible basis for discrimination, regardless of whether it is immutable. (Indeed I'm offended by the notion that protection is limited to traits that are predetermined rather than chosen pursuant to constitutionally guaranteed autonomy.) I would be much more persuaded to support regulation of same-sex relationships if it were shown that they caused harm to public welfare: the stability of marriage or childrearing.

    Hence, I wonder whether the extra-strong disagreement with the opinion finding homosexuals are not a protected class may represent disdain of the idea that immutability determines the degree of constitutional protection. This is frustration with the legal standard as opposed to ideology-based cognitive rigidity. For this reason, one of my overarching questions about Robinson's study is whether immutability is the best empirical issue for measuring cultural effects in the same sex marriage debate. I would be inclined to focus on welfare-related empirical questions, such as how same-sex marriage impacts childrearing, a question on which strong cultural effects have been observed.

    Furthermore, because these welfare concerns seem to be more often cited in the public debate as a reason for prohibiting same sex marriage, it seems cultural identity may be more strongly tied to one’s beliefs about these questions than one’s belief about immutability. (While certainly part of the debate about the morality of homosexuality, immutability seems to be cited less often as the public reason for prohibiting same sex marriage.) It seems some might oppose same sex marriage for purported public welfare consequences, regardless of whether sexual orientation is immutable. And as I have described above, some proponents of same-sex marriage might be particularly resentful of a decision based on immutability, as they do not believe this should be a relevant factor. This group might also, while cognitively motivated to support a pro-same-sex marriage ruling, be disinclined to support a ruling that homosexuality is immutable.

My other questions pertain to specific elements of the study's design:

  • Asking for views about same-sex marriage: I wonder whether first asking subjects about their stance on same-sex marriage makes them less susceptible to being persuaded by the aporetic reasoning we are testing. Because people don’t want to be inconsistent—either internally or be perceived as such by those conducting the survey—they might resist crediting the ruling after reporting disagreement with its conclusion at the outset of the study, regardless of whether they find the aporetic or monolithic reasoning persuasive. It seems the cultural measures provide enough information to predict a subject’s likely orientation on same-sex marriage, and it is unnecessary to ask subjects about the issue being studied.
  • Assignment to conditions with which subjects are inclined to disagree: I also question the decision to only show subjects opinions with which they are inclined to disagree. It seems to me that a study of this nature should measure the reasoning’s persuasiveness to both those inclined to disagree with it and those inclined to agree with it. It may be that while an aporetic opinion is more persuasive to those inclined to disagree, it is less persuasive to those inclined to agree. It seems this, too, would be a noteworthy finding. The question should be whether opposing cultural groups converge on the persuasiveness of an aporetic opinion more than they do on a monolithic one.
  • Focus on whether the opinion is persuasive rather than correct: I would not focus on asking subjects whether the court’s conclusion is correct or accurately reflects scientific findings, but whether they find the opinion persuasive. Subjects might agree with the court’s conclusion or believe that it accurately states scientific research, but find its reasoning unpersuasive. Or to the contrary, they might disagree with the court’s scientific conclusion, but find the reasoning persuasive.        
  • More detailed reasoning: I might consider including a few more sentences so that the court’s reasoning more clearly pronounces three elements that I associate with aporia: (a) noting that this is a difficult, perhaps intractable, question, on which there may be no correct answer; (b) saying the evidence is unclear, and presenting the strongest points in favor of each side; and (c) giving reasons for crediting one side’s position despite this empirical uncertainty. (I think this last point is the most contentious aspect of aporia – a court must justify its conclusion after admitting that it is uncertain as to the evidence – and it would be particularly interesting to test.) The monolithic condition would do the opposite--e.g., (a) state that the question is simple with a clear right answer; (b) say the evidence is clear or unequivocal; and (c) hold that there's no way one could reach a different result based on the evidence before the court.
  • Singling out one side: The aporetic versions in Robinson's study single out one side (The unprotected class version begins: “Those who believe homosexuality is a choice often ignore evidence [to the contrary]”; and the protected class version begins: “Those who argue sexual orientation is fixed or unchanging often overstate their case.”). In contrast, the monolithic condition does not single out one side in this way, but states: “There is no scientific consensus. . . .” I wonder whether statements that the winning parties “overstate” their case or “ignore” evidence are necessary to the aporetic reasoning. It seems that, for the sake of maintaining the highest degree of similarity between conditions, the aporetic opinion should simply say “The evidence is uncertain as to whether. . . .” Aside from uniformity, my concern is that these words might be read as accusing the prevailing side of being disingenuous. One party overstating its case has nothing to do with the court’s aporetic reasoning, but it could heighten the losing side’s suspicion of the winning side’s claims.
  • Explaining what’s at stake before the aporia manipulation: The prompt in this survey tells subjects that immutability determines the degree of constitutional protection afforded same-sex couples, but it does not explicitly say that the degree of constitutional protection determines whether laws prohibiting same-sex marriage are constitutional. It seems this connection—immutability effectively determines the constitutionality of laws prohibiting same-sex marriage—should be made explicit before the aporetic statement about immutability. It seems that priming readers with the cultural significance of the court’s reasoning about immutability would enhance the tendency to engage in motivated reasoning, and this would increase the effects we’d expect to see.        

In raising these questions, I do not mean to undermine the value of Robinson’s study. To the contrary, I find it very valuable. Not only is it encouraging in that it suggests this question is worth studying further, it also supplies an inspiring baseline for designing another study on this subject.

Friday
May 2, 2014

The fractal nature of the "knowledge deficit" hypothesis: Biases & heuristics, system 1 & 2, and cultural cognition

I often get asked—in correspondence, in Q&A after talks, in chance encounters with strangers while using one or another mode of public transportation—what the connection is between “cultural cognition” and “all that heuristics and biases stuff” or some equivalent characterization of the work, most prominently associated with Nobelist Daniel Kahneman, on the contribution that automatic, largely unconscious mechanisms of cognition make to risk perception.  

This excerpt, from Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P., Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition, Law & Human Behavior 34, 501-516 (2010), furnishes half the answer.

The basic idea is that cultural cognition is not an alternative to the “heuristics and biases” position but a supplement that helps explain how one and the same mechanism—“the availability effect,” “biased assimilation,” “probability neglect” etc.—can generate systematically opposing risk perceptions in identifiable groups of people. 

But as I said, this is only half the answer. At the time that CCP researchers did this study, they were carrying out a research project to examine how cultural cognition interacts with heuristic or “System 1” information processing, which as I indicated features automatic, unconscious mechanisms of cognition. 

In a project that we started thereafter, we’ve been examining the connection between cultural cognition and “System 2” reasoning, which involves conscious, analytic forms of information processing.  In particular, we’ve been empirically testing the popular conjecture that disputes over climate change and other politically contested risks reflect the public’s over-reliance on heuristic reasoning.

Not so. Cultural cognition captures and redirects conscious, analytical reasoning, too.

Tragically, people use their quantitative and critical-reasoning dispositions to fit empirical data and other technically complex forms of evidence to the positions that affirm their identities.  As a result, those who are most disposed to use System 2 reasoning are the most polarized.

If you are wandering the internet preaching that the climate change controversy is a consequence of the public’s over-reliance on “emotion” or “fast, intuitive heuristics,” etc., you are ignoring evidence. It was a very reasonable hypothesis, but you need to update your understanding of what’s going on as new evidence emerges—just as climate scientists do!

Sometimes I think this account—that the climate change controversy is a consequence of “public irrationality”—is a kind of pernicious story-telling virus that is impervious to treatment with evidence. 

Makes me realize, too, the irony that I am implicitly affirming my adherence to the “knowledge deficit” hypothesis by continually trying to overcome a version of it by simply bombarding propagators of the "System 1 vs. system 2" (or "bounded rationality," "experiential reasoning," "public irrationality" etc.) explanation of conflict over climate change with more and more and more and more empirical evidence that their account is way too simple. 

Life is weird. And interesting.

 

Theoretical Background: Heuristics, Culture, and Risk

The study of risk perception addresses a puzzle. How do people—particularly ordinary citizens who lack not only experience with myriad hazards but also the time and expertise necessary to make sense of complex technical data—form positions on the dangers they face and what they should do about them?

Social psychology has made well-known progress toward answering this question. People (not just lay persons, but quite often experts too) rely on heuristic reasoning to deal with risk and uncertainty generally. They thus employ a range of “mental shortcuts”: when gauging the danger of a putatively hazardous activity (the possession, say, of a handgun, or the use of nuclear power generation), they consult a mental inventory of recalled instances of misfortunes involving it, give special weight to perceived authorities, and steer clear of options that could improve their situation but that also involve the potential to make them worse off than they are at present (“better safe, than sorry”) (Kahneman, Slovic, & Tversky, 1982; Slovic, 2000; Margolis, 1996). They also employ faculties and styles of reasoning—most conspicuously affective ones informed by feelings such as hope and dread, admiration and disgust—that make it possible for them to respond rapidly to perceived exigency (Slovic, Finucane, Peters & MacGregor, 2004).

To be sure, heuristics of this sort can lead to mistakes, particularly when they crowd out more considered, systematic forms of reasoning (Sunstein, 2005). But they are adaptive in the main (Slovic et al., 2004).

As much as this account has enlarged our knowledge, it remains incomplete. In particular, a theory that focuses only on heuristic reasoning fails to supply a cogent account of the nature of political conflict over risk (Kahan, Slovic, Braman & Gastil, 2006). Citizens disagree, intensely, over a wide range of personal and societal hazards. If the imprecision of heuristic reasoning accounted for such variance, we might expect such disagreements to be randomly distributed across the population or correlated with personal characteristics (education, income, community type, exposure to news of particular hazards, and the like) that either plausibly related to one or another heuristic or that made the need for heuristic reasoning less necessary altogether. By and large, however, this is not the case. Instead, a large portion of the variance in risk perception coheres with membership in groups integral to personal identity, such as race, gender, political party membership, and religious affiliation (e.g. Slovic, 2000, p. 390; Kahan & Braman, 2006). Whether the planet is overheating; whether nuclear wastes can be safely disposed of; whether genetically modified foods are bad for human health—these are cultural issues in American society every bit as much as whether women should be allowed to have abortions and men should be allowed to marry other men (Kahan, 2007). Indeed, as unmistakably cultural in nature as these latter disputes are, public debate over them often features competing claims about societal risks and benefits, and not merely competing values (e.g. Siegel, 2007; Pollock, 2005).

This is the part of the risk-perception puzzle that the cultural theory of risk is distinctively concerned with (Douglas & Wildavsky, 1982). According to that theory, individuals conform their perceptions of risk to their cultural evaluations of putatively dangerous activities and the policies for regulating them. Thus, persons who subscribe to an “individualist” worldview react dismissively to claims of environmental and technological risks, societal recognition of which would threaten markets and other forms of private ordering. Persons attracted to “egalitarian” and “communitarian” worldviews, in contrast, readily credit claims of environmental risk: they find it congenial to believe that commerce and industry, activities they associate with inequity and selfishness, cause societal harm. Precisely because the assertion that such activities cause harm impugns the authority of social elites, individuals of a “hierarchical” worldview are (in this case, like individualists) risk skeptical (Rayner, 1992).

Researchers have furnished a considerable body of empirical support for these patterns of risk perception (Dake, 1991; Jenkins-Smith, 2001; Ellis & Thompson, 1997; Peters & Slovic, 1996; Peters, Burriston & Mertz, 2004; Kahan, Braman, Gastil, Slovic & Mertz, 2007). Such studies have found that cultural worldviews explain variance more powerfully than myriad other characteristics, including socio-economic status, education, and political ideology, and can interact with and reinforce the effect of related sources of identity such as race and gender.

Although one could see a rivalry between culture theory and the heuristic model (Marris, Langford, O’Riordan 1998; Douglas, 1997), it is unnecessary to view them as mutually exclusive. Indeed, one conception of the cultural theory—which we will call the cultural cognition thesis (Kahan, Braman, Monahan, Callahan & Peters, in press; Kahan, Slovic, Braman & Gastil, 2006)—seeks to integrate them. Culture theorists have had relatively little to say about exactly how culture shapes perceptions of risk.[i] Cultural cognition posits that the connection is supplied by conventional heuristic processes, or at least some subset of them (DiMaggio, 1997). On this account, heuristic mechanisms interact with cultural values: People notice, assign significance to, and recall the instances of misfortune that fit their values; they trust the experts whose cultural outlooks match their own; they define the contingencies that make them worse off, or count as losses, with reference to culturally valued states of affairs; they react affectively toward risk on the basis of emotions that are themselves conditioned by cultural appraisals—and so forth. By supplying this account of the mechanisms through which culture shapes risk perceptions, cultural cognition not only helps to fill a lacuna in the cultural theory of risk. It also helps to complete the heuristic model by showing how one and the same heuristic process (whether availability, credibility, loss aversion, or affect) can generate different perceptions of risk in people with opposing outlooks.

The proposition that moral evaluations of conduct shape the perceived consequences of such conduct is not unique to the cultural cognition thesis. Experimental study, for example, shows that negative affective responses mediate between moral condemnation of “taboo” behaviors and perceptions that those behaviors are harmful (Gutierrez & Giner-Sorolla, 2007). The same conclusion is also supported by a number of correlational studies (Horvath & Giner-Sorolla, 2007; Haidt & Hersh, 2001). The point of contact that the cultural cognition thesis, if demonstrated, would establish between cultural theory and these other works in morally motivated cognition would also lend strength to the psychological foundation of the former’s account of the origins of risk perceptions.

 

 


[i] For functionalist accounts, in which individuals are seen as forming risk perceptions congenial to their ways of life precisely because holding those beliefs about risk cohere with and promote their ways of life, see Douglas (1986) and Thompson, Ellis & Wildavsky (1990).

Monday
Apr 28, 2014

Science and public policy: Who distrusts whom about what?

More or less what I said at a really great NSF-sponsored "trust" workshop at the University of Nebraska this weekend. Slides here.

1.  What public distrust of science?

I want to address the relationship of trust to the science communication problem.

As I use the term, “the science communication problem” refers to the failure of valid, compelling, and widely accessible scientific evidence to dispel persistent cultural conflict over risks or other policy-relevant facts to which that evidence directly speaks. 

The climate change debate is the most spectacular current example, but it is not the only instance of the science communication problem. Historically, public controversy over the safety of nuclear power fit this description. Another contemporary example is the political dispute over the risks and benefits of the HPV vaccine.

Distrust of science is a common explanation for the science communication problem. The authority of science, it is asserted, is in decline, particularly among individuals of a relatively “conservative” political outlook.

This is an empirical claim.  What evidence is there for believing that the public trusts scientists or scientific knowledge less today than it once did? 

The NSF, which is sponsoring this very informative conference, has been compiling evidence on public attitudes toward science for quite some time as part of its annual Science Indicators series.

One measure of how the public regards science is its expressed support for federal funding of scientific research.  In 1985, the public supported federal science funding by a margin of about 80% to 20%. Today the margin is the same—as it was at every point between then and now.

Back in 1981, the proportion of the public who thought that the government was spending too little to support scientific research outnumbered the proportion who thought that the government was spending too much by a margin of 3:2. 

Today around four times as many people say the government is spending too little on scientific research than say it is spending too much.

Yes, there is mounting congressional resistance to funding science in the U.S.--but that's not because of any creeping "anti-science" sensibility in the U.S. public. 

Still aren't sure about that?

Well, how would you feel if your child told you he or she was marrying a scientist? About 70% of the public in 1983 said that would make them happy.  The proportion who said that grew to 80% by 2001, and grew another 5% or so in the last decade.

Are “scientists … helping to solve challenging problems”? Are they “dedicated people who work for the good of humanity”?

About 90% of Americans say yes.

Do you think you can squeeze the 75% of Republicans who say they “don’t believe in human-caused climate change” from the remainder? Better double check your math.
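A back-of-the-envelope version of that math, with the Republican share of the public set to an illustrative assumption (roughly 40%, counting leaners) rather than a figure from the talk:

```python
# Can the Republicans who reject human-caused climate change all be found
# among the ~10% of the public that doesn't view scientists favorably?
republican_share = 0.40              # illustrative assumption, not from the talk
reject_among_republicans = 0.75      # from the talk
favorable_toward_scientists = 0.90   # from the talk (approximate)

skeptics = republican_share * reject_among_republicans    # share of the whole public
remainder = 1 - favorable_toward_scientists

print(f"climate-skeptical Republicans ~ {skeptics:.0%} of the public; "
      f"the 'unfavorable' remainder is only ~ {remainder:.0%}")
```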

In sum, there isn’t any evidence that creeping distrust in science explains the science communication problem, because there’s no evidence either that Americans don’t trust scientists or that fewer of them trust them now than in the past.

Of course, if you like, you can treat the science communication problem itself as proof of such distrust.  Necessarily, you might say, the public distrusts scientists if members of the public are in conflict over matters on which scientists aren’t.

But then the “public distrust in science” explanation becomes analytic rather than empirical.  It becomes, in other words, not an explanation for the science communication problem but a restatement of it.

If we want to identify the source of the science communication problem, simply defining the problem as a form of “public distrust” in science—on top of being a weird thing to do, given the abundant evidence that the American public reveres science and scientists—necessarily fails to tell us what we are interested in figuring out, and confuses a lot of people who want to make things better.

2. The impact of cultural distrust on perceptions of what scientists believe

So rather than define the science communication problem as evincing “public distrust in science,” I’m going to offer an evidence-based assessment of its cause.

A premise of this explanation, in fact, is that the public does trust science.

As reflected in the sorts of attitudinal items in the NSF indicators and other sources, members of the public in the U.S. overwhelmingly recognize the authority of science and agree that individual and collective decisionmaking should be informed by the best available scientific evidence.

But diverse members of the public, I’ll argue, distrust one another when they perceive that the status of the cultural groups they belong to is being adjudicated by the state’s adoption of a policy or law premised on a disputed risk or comparable fact.

When risks and other facts that admit of scientific investigation become the focus of cultural status competition, members of opposing groups will be unconsciously motivated to construe all manner of evidence in a manner that reinforces their commitment to the positions that predominate within their respective groups.

One source of evidence—indeed, the most important one—will be the weight of opinion among expert scientists.

As a result, culturally diverse people, all of whom trust scientists but who distrust one another’s intentions on policy issues that have come to symbolize clashing worldviews, will end up culturally polarized over what scientists believe about the factual presuppositions of each other's position.

That is the science communication problem.

I will present evidence from two (NSF-funded!) studies that support this account.

3.  Cultural cognition of scientific consensus

The first was an experiment on how cultural cognition influences perceptions of scientific consensus on climate change, nuclear waste disposal, and the effect of “concealed carry” laws.

The cultural cognition thesis holds that individuals can be expected to form perceptions of risk and like facts that reflect and reinforce their commitment to identity-defining affinity groups.

For the most part, individuals have a bigger stake in forming identity-congruent beliefs on societal risks than they have in forming best-evidence-congruent ones. If a person makes a mistake about the best evidence on climate change, for example, that won’t affect the risk that that individual or anyone he or she cares about faces: as a solitary individual, that person’s behavior (as consumer, voter, etc.) is too inconsequential to have an impact.

But if that person makes a “mistake” in relation to the view that dominates in his or her affinity group, the consequences could be quite dire indeed.  Given what climate change beliefs now signify about one’s group membership and loyalties, someone who forms a culturally nonconforming view risks estrangement from those on whose good opinion that person’s welfare—material and emotional—depends.

It is perfectly rational, in these circumstances, for individuals to engage information in a manner that reliably connects their beliefs to their cultural identities rather than to the best scientific evidence. Indeed, experimental evidence suggests that the more proficient a person’s critical reasoning capacities, the more successful he or she will be in fitting all manner of evidence to the position that expresses his or her group identity.

What most scientists in a particular field believe is one such form of evidence.  So we hypothesized that culturally diverse individuals would construe evidence of what experts believe in a biased fashion supportive of the position that predominates in their respective groups.

In the experiment, we showed study subjects the pictures and resumes of three highly credentialed scientists and asked whether they were “experts” (as one could reasonably have inferred from their training and academic posts) in the domains of climate change, nuclear power, and gun control.

Half the subjects were shown a book excerpt in which the featured scientist took the “high risk” position on the relevant issue (“scientific consensus that humans are causing climate change”; “deep geologic isolation of nuclear wastes is extremely hazardous”; “permitting citizens to carry concealed guns in public increases crime”), and half a book excerpt in which the same scientist took the “low risk” position (“evidence on climate change inconclusive”; “deep geologic isolation of nuclear wastes poses no serious hazards”; “allowing citizens to carry concealed guns reduces crime”).

If the featured scientist’s view matched the one dominant in a subject’s cultural group, the subject was highly likely to deem that scientist an “expert” whose views a reasonable citizen would take into account. 

But if that same scientist was depicted as taking the position contrary to the one that was dominant in the subject’s group, then the subject was highly likely to perceive that the scientist lacked expertise on the issue in question.

This result was consistent with our hypotheses.

If individuals in the real world selectively credit or discredit evidence on “what experts believe” in this manner, then individuals of diverse cultural outlooks will end up polarized on what scientific consensus is.

And this is exactly the case.  In an observational component of the study, we found that the vast majority of subjects perceived “scientific consensus” to be consistent with the position that was dominant among members of their respective cultural groups.

Judged in relation to National Academy of Sciences “expert consensus” reports, moreover, all of the opposing cultural groups turned out to be equally bad in discerning what the weight of scientific opinion was across these three issues.
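
For readers who want to see the shape of that comparison, here is a minimal sketch in Python using made-up data; the group labels, issue codes, and NAS-position coding are all assumptions for illustration, not the study's actual variables. Each subject's perceived consensus on the three issues is scored against the NAS-report position, and mean accuracy is compared across cultural groups.

```python
import numpy as np
import pandas as pd

# Hypothetical coding of the position NAS reports treat as the consensus view
# on each issue. Labels are illustrative only.
nas_position = {"climate": "high_risk", "nuclear": "low_risk", "guns": "low_risk"}

# Toy survey data: each row is a subject, with a cultural-group label and that
# subject's perception of what "most expert scientists" believe on each issue.
df = pd.DataFrame({
    "group":   ["HI", "HI", "EC", "EC"],
    "climate": ["low_risk", "low_risk", "high_risk", "high_risk"],
    "nuclear": ["low_risk", "high_risk", "high_risk", "high_risk"],
    "guns":    ["low_risk", "low_risk", "high_risk", "high_risk"],
})

# Score each subject: fraction of the three issues on which his or her perception
# of scientific consensus matches the NAS-report position.
issues = list(nas_position)
df["accuracy"] = df[issues].apply(
    lambda row: np.mean([row[i] == nas_position[i] for i in issues]), axis=1
)

# Roughly equal (and low) group means would correspond to the "equally bad at
# discerning the weight of scientific opinion" result described above.
print(df.groupby("group")["accuracy"].mean())
```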

In sum, they all agreed that policy should be informed by the weight of expert scientific opinion. 

But because the policies in question turned on disputed facts symbolically associated with membership in opposing groups, they were motivated by identity-protective cognition to assess evidence of what scientists believe in a biased fashion.

4.  The cultural credibility heuristic

The second study involved perceptions of the risks and benefits of the HPV vaccine.

The CDC’s 2006 recommendation that the vaccine be added to the schedule of immunizations required as a condition of middle school enrollment, although only for girls, provoked intense political controversy across the U.S. in the years immediately thereafter.

In our study, we found that there was very mild cultural polarization on the safety of the HPV vaccine among subjects whose views were solicited in a survey.

The degree of cultural polarization was substantially more pronounced, however, among subjects who were first supplied with balanced information on the vaccine’s potential risks and expected benefits.  Consistent with the cultural cognition thesis, the subjects were selectively crediting and discrediting the information we supplied in patterns that reflected their stake in forming identity-supportive beliefs.

But still another group of subjects assessed the risks and benefits of the HPV vaccine after being furnished the same information from debating “public health experts.” These “experts” were ones whose appearances and backgrounds, a separate pretest had shown, would induce study subjects to attribute competing cultural identities to them.

In this experimental condition, subjects’ assessments of the risks and benefits of the HPV vaccine turned decisively on the degree of affinity between the perceived cultural identities of the experts and the study subjects’ own identities.

If subjects observed the position that they were culturally predisposed to accept being advanced by the “expert” they were likely to perceive as having values akin to theirs, and the position they were predisposed to reject being advanced by the “expert” they were likely to perceive as having values alien to their own, then polarization was amplified all the more.

But where subjects saw the expert they were likely to perceive as sharing their values advancing the position they were predisposed to reject, and the expert they were likely to perceive as holding alien values advancing the position they were predisposed to accept, subjects of diverse cultural identities flipped positions entirely.

The subjects, then, trusted the scientific experts.

Indeed, polarization disappeared when experts whom culturally diverse subjects trusted told them the position they were predisposed to accept was wrong.

But the subjects remained predisposed to construe information in a manner protective of their cultural identities.

As a result, when they were furnished tacit cues that opposing positions on the HPV vaccine risks corresponded to membership in competing cultural groups, they credited the expert whose values they tacitly perceived as closest to their own—a result that intensified polarization when subjects' predispositions were reinforced by those cues.

5.  A prescription

The practical upshot of these studies is straightforward.

To translate public trust in science into convergence on science-informed policy, it is essential to protect decision-relevant science from entanglement in culturally antagonistic meanings.

No risk issue is necessarily constrained to take on such meanings.

There was nothing inevitable, for example, about the HPV vaccine becoming a focus of cultural status conflict.  It could easily, instead, have been assimilated uneventfully into public health practice in the same manner as the HBV vaccine.  Like the HPV vaccine, the HBV vaccine immunizes recipients against a sexually transmitted disease (hepatitis B), was recommended for universal adolescent vaccination by the CDC, and thereafter was added to the school-enrollment schedules of nearly every state.

The HBV vaccine had uptake rates of over 90% during the years in which the safety of the HPV vaccine was a matter of intense, and intensely polarizing, political controversy in the U.S.

The reason HPV ended up becoming suffused with antagonistic cultural meanings had to do with ill-advised decisions, pushed for by the vaccine’s manufacturer and acquiesced in without protest by the FDA, that made it certain that members of the public would learn about the vaccine for the first time not from their pediatricians, as they had with the HBV vaccine, but from news reports on the controversy occasioned by a high-profile, nationwide campaign to secure legislative enactments of a “girls’ only STD shot” as a condition of school enrollment.

The risks associated with introducing the HPV vaccine in this manner were not only foreseeable but foreseen and even empirically studied at the time.

Warnings about this danger were not so much rejected as never considered—because there is no mechanism in place in the regulatory process for assessing how science-informed policymaking interacts with cultural meanings.

The U.S. is a pro-science culture to its core.

But it lacks a commitment to evidence-based methods and procedures for assuring that what is known to science becomes known to those whose decisions, individual and collective, it can profitably inform.

The “declining trust in science” trope is itself a manifestation of our evidence-free science communication culture.

Those who want to solve the science communication problem should resist this & all the other just-so stories that are offered as explanations of it.

They should also steer clear of those drawn to the playground-quality political discourse that features competing tallies of whose “side” is “more anti-science.”

And they should instead combine their energies in the development of a new political science of science communication that reflects an appropriately evidence-based orientation toward the challenge of enabling the members of a pluralistic liberal society to reliably recognize what’s known by science.

 

 

Thursday
Apr242014

Still more evidence of my preternatural ability to change people's minds: my refutation of Krugman's critique of Klein's article convinces Klein that Krugman's critique was right

That's not Harmon Killebrew, is it?! Nah...Huh.

Well, I actually agree 70% w/ what Klein says; once I explain why, I predict Klein will thoughtfully disagree -- and end up more-or-less where I was in my post on Krugman's "symmetry proof."

But I don't have time to go into this now (am busy w/ field experiments aimed at counteracting the motivated reasoning of cultural anti-cat zealots).  Will write something on this "tomorrow." 

In meantime, maybe someone else will explain why I was 100% right (everyone who commented on the Krugman post definitely felt that way).

Wednesday
Apr232014

What you "believe" about climate change doesn't reflect what you know; it expresses *who you are*

More or less the remarks I delivered yesterday at the Earth Day "Climate teach in/out" at Yale University:

I study risk perception and science communication.

I’m going to tell you what I regard as the single most consequential insight you can learn from empirical research in these fields if your goal is to promote constructive public engagement with climate science in American society. 

It's this:

What people “believe” about global warming doesn’t reflect what they know; it expresses who they are.

Accordingly, if you want to promote constructive public engagement with the best available evidence, you have to change the meaning of climate change.

You have to disentangle positions on it from opposing cultural identities, so that people aren't put to a choice between freely appraising the evidence and being loyal to their defining commitments.

I’ll elaborate, but for a second just forget climate change, and consider another culturally polarizing science issue: evolution.

About every two years, a major polling organization like Gallup issues a public opinion survey showing that approximately 50% of Americans “don’t believe in evolution.” 

Pollsters issue these surveys at two-year intervals because apparently that’s how long it takes people to forget that they’ve already been told this dozens of times.  Or in any case, every time such a poll is released, the media and blogosphere is filled with expressions of shock, incomprehension, and dismay.

“What the hell is wrong with our society’s science education system?,” the hand-wringing, hair-pulling commentators ask.

Well, no doubt a lot.

But if you think the proportion of survey respondents who say they “believe in evolution” is an indicator of the quality of the science education that people are receiving in the U.S., you are misinformed.

Do you know what the correlation is between saying “I believe in evolution” and possessing even a basic understanding of “natural selection,” “random mutation,” and “genetic variance”—the core elements of the modern synthesis in evolutionary science?

Zero.

Those who say they “do believe” are no more likely to be able to give a high-school biology-exam-quality account of how evolution works than those who say they “don’t.”

In a controversial decision in 2010, the National Science Foundation in fact proposed removing from its standard science-literacy test the true-false question “human beings developed from an earlier species of animals.”

The reason is that giving the correct answer to that question doesn’t cohere with giving the right answer to the other questions in NSF’s science-literacy inventory.

What that tells you, if you understand test-question validity, is that the evolution item isn’t measuring the same thing as the other science-literacy items.

Answers to those other questions do cohere with one another, which is how one can be confident they are all validly and reliably measuring how much science knowledge that person has acquired.
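
To make that notion of "cohering" concrete, here is a minimal sketch, with simulated responses rather than real NSF data, of the corrected item-total (item-rest) correlation check that psychometricians use to judge whether an item is measuring the same thing as the rest of a scale. An item tapping identity rather than knowledge would show a near-zero item-rest correlation.

```python
import numpy as np

# Toy response matrix: rows are test takers, columns are true/false items scored
# 1 (correct) / 0 (incorrect). Entirely simulated data driven by a common "ability"
# factor, so the items cohere by construction.
rng = np.random.default_rng(0)
ability = rng.normal(size=500)
items = (ability[:, None] + rng.normal(size=(500, 10)) > 0).astype(int)

def item_rest_correlation(responses, item_index):
    """Correlation between one item and the total score on all the *other* items."""
    item = responses[:, item_index]
    rest = responses.sum(axis=1) - item
    return np.corrcoef(item, rest)[0, 1]

for j in range(items.shape[1]):
    print(f"item {j}: r = {item_rest_correlation(items, j):.2f}")
```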

But what the NSF “evolution” item is measuring, researchers have concluded, is test takers’ cultural identities, and in particular the significance of religiosity in their lives.

What’s more, the impact of science literacy on the likelihood that people will say they “believe in evolution” is in fact highly conditional on their identity: as their level of science comprehension increases, individuals with a highly secular identity become more likely to say “they believe” in evolution; but as those with a highly religious identity become more science literate, in contrast, they become even more likely to say they don’t.

What you “believe” about evolution, in sum, does not reflect what you know about science—in general, or in regard to the natural history of human beings.

Rather it expresses who you are.

Okay, well, exactly the same thing is true on climate change.

You’ve all seen the polls, I’m sure, showing the astonishing degree of political polarization on “belief” in “human-caused” global warming.

Well, a Pew poll last spring asked a nationally representative sample, “What gas do most scientists believe causes temperatures in the atmosphere to rise? Is it carbon dioxide, hydrogen, helium, or radon?”

Approximately 60% got the right answer to that question.

And there was zero correlation between getting it right and being a Democrat or Republican.

The percentage of Democrats who say they “believe” in global warming is substantially higher than the percentage who got that question right: it’s over 80%, which means that a good number of Democrats who say they “believe” in global warming don’t understand the most basic of all facts known to climate science.

The percentage of Republicans who say they believe in human-caused global warming, by contrast, is a lot lower than the percentage who answered correctly: only about 25% say they believe human beings have caused global temperatures to rise in recent decades, according to Pew and other researchers. 

That means that a large fraction of the Republicans who tell pollsters they “don’t believe” in human-caused global warming do in fact know the most important thing there is to understand about climate change: that adding carbon to the atmosphere causes the temperature of the earth to increase.
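
That inference is just arithmetic. Here is a back-of-the-envelope version using round numbers in the neighborhood of those reported above (the exact Pew percentages may differ):

```python
# Back-of-the-envelope bound, using round illustrative numbers rather than the
# precise Pew figures.
correct_co2 = 0.60      # share of Republicans who correctly named carbon dioxide
believe     = 0.25      # share of Republicans who say they "believe" in AGW
disbelieve  = 1 - believe

# Even if every "believer" answered the CO2 question correctly, at least this
# share of all Republicans are disbelievers who nonetheless know the CO2 fact:
min_knowledgeable_disbelievers = correct_co2 - believe

# ...which, as a fraction of the disbelievers, is at least:
print(min_knowledgeable_disbelievers / disbelieve)   # ~0.47 with these numbers
```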

Do you know what the correlation is between science literacy and “belief” in human-caused global warming?

You get half credit for saying zero.

That’s the right answer for a nationally representative sample as a whole.

But it’s a mistake to answer the question without dividing the sample up along cultural or comparable lines: as their score on one or another measure of science comprehension goes up, Democrats become more likely, and Republicans less, to say they “believe” in human-caused global warming.
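
In regression terms, that pattern shows up as an interaction between science comprehension and partisan identity rather than as a main effect of science comprehension. Here is a minimal sketch with simulated data; the variable names and effect sizes are invented for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data only: science comprehension pushes "belief" up for Democrats and
# down for Republicans, the pattern described in the text.
rng = np.random.default_rng(1)
n = 2000
democrat = rng.integers(0, 2, n)              # 1 = Democrat, 0 = Republican
sci = rng.normal(size=n)                      # standardized science comprehension
linpred = 0.2 + 2.0 * (democrat - 0.5) * sci + 0.8 * (democrat - 0.5)
believe = rng.random(n) < 1 / (1 + np.exp(-linpred))
df = pd.DataFrame({"believe": believe.astype(int), "democrat": democrat, "sci": sci})

# The interaction term (sci:democrat) is what captures "polarization grows with
# science comprehension"; the main effect of sci alone can be near zero.
model = smf.logit("believe ~ sci * democrat", data=df).fit(disp=0)
print(model.summary())
```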

Like saying “I do/don’t believe in evolution,” saying I “do/don’t believe in climate change” doesn’t convey what you know about science—generally, or in relation to the climate.

It expresses who you are.

Al Gore has described the climate change debate as a “struggle for the soul of America.”

He’s right.

But that’s exactly the problem.  Because in “battles for the soul” of America, the stake that culturally diverse individuals have in forming beliefs consistent with their group identity dominates the stake they have in forming beliefs that fit the best available evidence.

In saying that, moreover, I’m not talking about whatever interest people have in securing comfortable accommodations in the afterlife. I’m focused entirely on the here and now.

Look: What an ordinary individual believes about the “facts” on climate change has no impact on the climate.

What he or she does as a consumer, as a voter, or as a participant in public debate is just too inconsequential to have an impact.

No mistake that individual makes about the science on climate change, then, is going to affect the risk posed by global warming for him or her or for anyone else that person cares about.

But if he or she takes the “wrong” position in relation to his or her cultural group, the result could be devastating for him or her, given what climate change now signifies about one’s membership in and loyalty to opposing cultural groups.

It could drive a wedge—material, emotional, and psychological—between that individual and the people whose support is indispensable to his or her well-being.

In these circumstances, we should expect a rational person to engage information in a manner geared to forming and persisting in the positions that are dominant within his or her cultural group. And the better people are at making sense of complex information, which is to say the more science comprehension they have, the better they’ll do at that. 

That’s what we see in lab experiments.  And it’s why we see polarization on global warming intensifying in step with science literacy in the real world.

But while that’s the rational way for people to engage information as individuals, given what climate change signifies about their cultural identities, it’s a disaster for them collectively.  Because if everyone does this at the same time, members of a culturally diverse democratic society are less likely to converge on scientific evidence that is crucial to the welfare of all of them.

And yet that by itself doesn’t make it any less rational for individuals to attend to information in a manner that reliably connects them to the position that is dominant in their group.

This is a tragedy of the commons problem—a tragedy of the science communications commons.

If we want to overcome it, then we must disentangle competing positions on climate change from opposing cultural identities, so that culturally pluralistic citizens aren’t put in the position of having to choose between knowing what’s known to science and being who they are.

Only that will dissolve the conflict citizens now face between their personal incentive to form identity-consistent beliefs and the collective one they have in recognizing and giving effect to the best available evidence.

Science educators, by the way, have already figured this out about evolution. They’ve shown you can in fact teach the elements of the modern synthesis—random mutation, genetic variance, and natural selection—just as readily to students whose identities cohere with saying they “don’t believe” in evolution as you can to students whose identities cohere with saying they do. You just can’t expect the former to say “I believe in evolution” afterwards.

Indeed, you must take pains not to confuse understanding evolutionary science with the “pledge of cultural allegiance” that “I believe in evolution” has become.

You must remove from the education environment the toxic cultural meanings that make answers to that question badges of membership in and loyalty to one’s cultural group.  The meanings that fuel the pathetic spectacle of hand-wringing and hair-pulling that occurs every time Gallup or another organization issues its “do you believe in evolution” survey results.

All the diverse groups that make up our pluralistic democracy are amply stocked with science knowledge.

They are amply stocked with public spirit too. 

That means you, as a science communicator, can enable these citizens to converge on the best available evidence on climate change.

But to do it, you must banish from the science communication environment the culturally antagonistic meanings with which positions on that issue have become entangled—so that citizens can think and reason for themselves free of the distorting impact of identity-protective cognition.

If you want to know what that sort of science communication environment looks like, I can tell you where you can see it: in Florida, where all 7 members of the Monroe County Board of Commissioners -- 4 Democrats, 3 Republicans -- voted unanimously to join Broward County (predominantly Democratic), Palm Beach County (predominantly Democratic), and Miami-Dade County (predominantly Republican) in approving the Southeast Florida Regional Climate Action Plan, which, I quote from the Palm Beach County Board summary, “includes 110 adaptation and mitigation strategies for addressing sea-level rise and other climate issues within the region.”

I’ll tell you another thing about what you’ll see if you make this trip: the culturally pluralistic and effective form of science communication happening in southeast Florida doesn’t look anything like the culturally assaultive "us-vs-them" YouTube videos and prefabricated internet comments with which Climate Reality and Organizing for Action are flooding national discourse.

And if you want to improve public engagement with climate science in the United States, the fact that advocates as high-profile and as highly funded as these still haven’t figured out the single most important lesson to be learned from the science of science communication should make you very sad.

Sunday
Apr202014

No, I don't think "cultural cognition is a bad thing"; I think a *polluted science communication environment* is & we should be using genuine evidence-based field communication to address the problem

Stenton Benjamin Danielson has a characteristically thoughtful post, 95% of which I agree with, on cultural cognition, "public opinion," and promoting constructive public engagement with climate science.  But of course the 5%-- which has to do with whether I think "cultural cognition" is a "bad thing" that is to be overcome rather than a dynamic to be deployed to promote such engagement -- sticks in my craw!  Maybe this response will get us closer to 100% agreement--if not by moving him a full 5% in my direction, then maybe by  provoking him to elaborate & thereby move me some fraction of the remainder toward his point of view.

 So read what he says.  Then read this:

Part of the problem, I'm sure, is that I'm an imperfect communicator.

Another is the infeasibility of saying everything one believes every time one says anything.

But it is simply not the case that I view

cultural cognition as unreservedly bad -- a sort of disease or pollution in our debate about an issue, something to be prevented or neutralized whenever possible so that we can make rational assessments of the evidence.

On the contrary, I view it as an indispensable element of rational thought, one that contributes in a fundamental way to the capacity of individuals to participate in, and thus extend, collective knowledge. See generally:  

Cultural cognition conduces to persistent states of public controversy over what's known only in a polluted science communication environment: one in which antagonistic cultural meanings become attached to positions on risk and policy-relevant facts, and transform them into badges of membership in opposing cultural groups.  

That's not normal.  It is a pathology that disables rational thought precisely because it disconnects cultural cognition from discernment of the best available evidence.

We can treat this pathology, and better still avoid the occurrence of it, through evidence-based science-communication-environment protection practices.

See generally:  

I also agree, by the way, that "messaging" campaigns aimed at influencing "public opinion" generally are an absurd waste of time, not to mention a waste of the money of those eager to support climate-science communication efforts.  This approach to "science communication" not only reflects a psychologically unrealistic account of how people come to know what's known by science but betrays an elementary-school level of comprehension of basic principles of political economy.

Don't "message" people with "struggle for the soul of America" appeals. 

Show them that engaging climate science is "normal" by enabling them to see that people they recognize as competent and informed are using it to guide their practical decisions.  That is how ordinary people -- very rationally -- recognize how to orient themselves appropriately with the best available evidence on all manner of issues. 

Understanding the contribution that cultural cognition makes to individuals' rational apprehension of what is known is, I believe, indispensable to that strategy for promoting constructive public engagement with climate science.  I'm glad to see that you agree with me on that -- even if you hadn't discerned that I agree with you! 

Those "risk experts" who want to contribute, moreover, should stop telling just-so stories-- give up the facile "take-'biases'-&-'heuristics'-literature-add-water-&-stir" form of "instant decision science"-- and go to the places where real people are trying to figure out how to use climate science to make their lives better.

Go there and genuinely help them by systematically testing their experience-informed hypotheses about how to reproduce in the world the sorts of things that experimental methods using cultural cognition and other theories suggest will improve public engagement with climate science.

We don't need more stylized lab experiments that try to convince us that things that real-world evidence manifestly shows won't work actually will if we just keep doing them (followed when they don't by whinging about "the forces of evil" who--as was perfectly foreseeable--told members of the public whom you were targeting not to believe your "message").

Climate scientists update their models to reflect ten years of data.  Climate advocates should too.  

 

 

Friday
Apr182014

Want to improve climate-science communication (I mean really, seriously)? Stop telling just-so stories & conducting "messaging" experiments on MTurk workers & female NYU undergraduates & use genuine evidence-based methods in field settings instead

From Kahan, D., "Making Climate Science Communication Evidence-based—All the Way Down," in Culture, Politics and Climate Change, eds. M. Boykoff & D. Crow, pp. 203-21. (Routledge Press, 2014):

a. Methods. In my view, both making use of and enlarging our knowledge of climate science communication requires making a transition from lab models to field experiments. The research that I adverted to on strategies for counteracting motivated reasoning consists of simplified and stylized experiments administered face-to-face or on-line to general population samples. The best studies build explicitly on previous research—much of it also consisting in stylized experiments—that has generated information about the nature of the motivating group dispositions and the specific cognitive mechanisms through which they operate. They then formulate and test conjectures about how devices already familiar to decision science—including message framing, in-group information sources, identity-affirmation, and narrative—might be adapted to avoid triggering these mechanisms when communicating with these groups.[1]

But such studies do not in themselves generate useable communication materials. They are only models of how materials that reflect their essential characteristics might work. Experimental models of this type play a critical role in the advancement of science communication knowledge: by silencing the cacophony of real-world influences that operate independently of anyone’s control, they make it possible for researchers to isolate and manipulate mechanisms of interest, and thus draw confident inferences about their significance, or lack thereof. They are thus ideally suited to reducing the class of the merely plausible strategies to ones that communicators can have an empirically justified conviction are likely to have an impact. But one can’t then take the stimulus materials used in such experiments and send them to people in the mail or show them on television and imagine that they will have an effect.

Communicators are relying on a bad model if they expect lab researchers to supply them with a bounty of ready-to-use strategies. The researchers have furnished them something else: a reliable map of where to look for them. Such a map will (it is hoped) spare the communicators from wasting their time searching for nonexistent buried treasure. But the communicators will still have to dig, making and acting on informed judgments about what sorts of real materials they believe might reproduce these effects outside the lab in the real-world contexts in which they are working.

The communicators, moreover, are the only ones who can competently direct this reproduction effort. The science communication researchers who constructed the models can’t just tell them what to do because they don’t know enough about the critical details of the communication environment: who the relevant players are, what their stakes and interests might be, how they talk to each other, and whom they listen to. If researchers nevertheless accept the invitation to give “how to” advice, the best they will be able to manage are banalities—“Know your audience!”; “Grab the audience’s attention!”—along with Goldilocks admonitions such as, “Use vivid images, because people engage information with their emotions. . . but beware of appealing too much to emotion, because people become numb and shut down when they are overwhelmed with alarming images!”

Communicators possess knowledge of all the messy particulars that researchers not only didn’t need to understand but were obliged to abstract away from in constructing their models. Indeed, like all smart and practical people, the communicators are filled with many plausible ideas about how to proceed—more than they have the time and resources to implement, and many of which are not compatible with one another anyway. What experimental models—if constructed appropriately—can tell them is which of their surmises rest on empirically sound presuppositions and which do not. Exposure to the information such modeling yields will activate experience-informed imagination on the communicators’ part, and enable them to make evidence-based judgments about which strategies they believe are most likely to work for their particular problem.

At that point, it is time for the scientist of science communication to step back in—or to join alongside the communicator. The communicator’s informed conjecture is now a hypothesis to be tested. In advising field communicators, science of science communication researchers should treat what the communicators do as experiments. Science communication researchers should work with the communicator to structure their communication strategies in a manner that yields valid observations that can be measured and analyzed.

Indeed, communicators, with or without the advice of science of science communication researchers, should not just go on blind instinct. They shouldn’t just read a few studies, translate them into plausible-sounding plans of action, and then wing it. Their plausible surmises about what will work will be more plausible, more likely to work, than any that the laboratory researchers, indulging their own experience-free imaginations, concoct. But they will still be only plausible surmises. Still be only hypotheses. Without evidence, we will not learn whether policies based on such surmises did or didn’t work. If we don’t learn that, we won’t learn how to do even better.

Genuinely evidence-based science communication must be based on evidence all the way down. Communicators should make themselves aware of the existing empirical information that science communication researchers have generated (and steer clear of the myriad stories that department-store consumers of decision science work tell) about why the public is divided on climate science. They should formulate strategies that seek to reproduce in the world effects that have been shown to help counter the dynamics of motivated reasoning responsible for such division. Then, working with empirical researchers, they should observe and measure. They should collect appropriate forms of pretest or preliminary data to try to corroborate that the basis for expecting a strategy to work is sound and to calibrate and refine its elements to maximize its expected effect. They should also collect and analyze data on the actual impact of their strategies once they’ve been deployed.

Finally, they should make the information that they have generated at every step of this process available to others so that they can learn from it too. Every exercise in evidence-based science communication itself generates knowledge. Every such exercise itself furnishes an instructive model of how that knowledge can be intelligently used. The failure to extract and share the intelligence latent in doing science communication perpetuates the dissipation of collective knowledge that it is the mission of the science of science communication to staunch. 

 


[1] Unrepresentative convenience samples are unlikely to generate valid insights on how to counteract motivated reasoning. Samples of college undergraduates are perfectly valid when there is reason to believe the cognitive dynamics involved operate uniformly across the population. But the mechanisms through which motivated reasoning generates polarization on climate change don’t; they interact with diverse characteristics—worldviews and values, but also gender, race, religiosity, and even regions of residence. It is known, for example, that white males who are highly hierarchical and individualistic in worldviews or conservative in their political ideologies, and who are likely to live in the South and far west, tend to react dismissively to information about climate change (McCright & Dunlap 2013, 2012, 2011; Kahan, Braman, Gastil, Slovic & Mertz 2007). Are they likely to respond to a “framing” strategy in the same way that a sample of predominantly female undergraduates attending a school in New York City does (Feygina, Jost & Goldsmith 2010)? If not, that’s a good reason to avoid using such a sample in a framing study, and not to base practical decisions on any study that did.

Thursday
Apr172014

Vaccine risk perceptions and risk communication: study conclusions & recommendations

From CCP's "Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Empirical Assessment" report: 

II. Summary conclusions

A. Findings

1.   There is deep and widespread public consensus, even among groups strongly divided on other issues such as climate change and evolution, that childhood vaccinations make an essential contribution to public health. A very large supermajority believes that the benefits of childhood vaccinations outweigh their risks and that public health generally would suffer were vaccination rates to fall short of the goals set by public health authorities.


2.   In contrast to other disputed science issues, public opinion on the safety and efficacy of childhood vaccines is not meaningfully affected by differences in either science comprehension or religiosity. Public controversies over science, including those over evolution and climate change, often feature conflict among individuals of varying levels of religiosity, whose differences of opinion intensify in proportion to their level of science comprehension. There is no such division over vaccine risks and benefits.

3.   The public’s perception of the risks and benefits of vaccines bears the signature of a generalized affective evaluation, which is positive in a very high proportion of the population. The high degree of coherence in responses to items relating to the contribution that childhood vaccinations make to public health strongly implies that public assessments of vaccine risks and benefits reflect a unitary latent affective orientation. The distribution of that orientation is strongly skewed in a positive direction—indicating that a substantial majority of the population (in the vicinity of 75%) has a positive attitude toward childhood vaccines.

4.   Among the manifestations of the public’s positive orientation toward childhood vaccines is the perception that vaccine benefits predominate over vaccine risks and a high degree of confidence in the judgment of public health officials and experts. By large supermajorities, the survey participants endorsed the proposition that vaccine benefits outweigh their risks, and rejected claims that deterioration in vaccination coverage would pose no serious public health danger. They also expressed confidence in the judgment of officials who identify which vaccinations should be universally administered, and in the judgment of experts that vaccines are safe.

5.   Perceptions of the relationship between vaccines and specified diseases reflect the same positive affective orientation that informs public perceptions of the contribution that childhood vaccines make to public health generally. Responses to items on the link between vaccines and autism, cancer, diabetes—as well as a fictional disease not asserted by anyone to be connected to childhood vaccinations—displayed the same pattern as the responses to all the other public-health items. Under these circumstances, responses to these items can confidently be viewed only as indicators of the same latent affective attitude reflected in the public’s assessments of the contribution childhood vaccines make to public health generally. Public health officials should resist the mistake of construing responses to survey items such as these as measuring public knowledge about or beliefs on specific issues relating to childhood vaccinations.

6.   The demographic characteristics and political outlooks typically associated with group conflict over risk and related aspects of decision-relevant science are not meaningfully associated with disagreement about childhood-vaccination risks. Members of all such groups believe that vaccine risks are low, vaccine benefits high, and mandatory vaccination policies appropriate. Those who believe otherwise are outliers in every one of these groups.

7.   There is no meaningful association between concern over vaccine risks and the sharp cultural cleavage that characterizes perceptions of either “public safety risks,” a cluster of putative hazards associated with environmental issues and gun control, or “social deviancy risks,” a cluster associated with legalization of marijuana and prostitution and with teaching high school students about birth control. The opposing cultural allegiances that are associated with disputed societal and public health risks do not generate meaningful disagreement over vaccine risks and benefits. At most, such dispositions mildly influence the intensity with which culturally diverse members of the public approve of childhood vaccination.

8.   Existing universal vaccination policies appear to enjoy widespread support, but proposals to restrict existing grounds for exemption divide the public along partisan lines. Despite support for universal vaccination policies and widespread disapproval of parents who refuse to permit vaccination of their children based on concerns about vaccine risks, proposals to restrict or eliminate moral or religious grounds for opting out of vaccination requirements provoke dissensus along largely partisan lines consistent with citizens’ general orientation toward government regulation.

9.   The public generally underestimates vaccination rates and overestimates the rate of exemption. Only 9% of the survey respondents recognized that the vaccination rate among U.S. children aged 19-35 months for recommended childhood vaccinations has been over 90% in recent years. The median estimate was between 70-79%. The median estimate of children receiving no vaccinations was 2-10%; only 9% correctly indicated that less than 1% of children aged 19-35 months receive none of the recommended childhood vaccinations.

10.  Communications that assert the existence of growing concern over vaccination risks and declining vaccination rates magnify misestimations of vaccination rates and of exemptions. Experiment subjects who read communications patterned on real media communications underestimated vaccine coverage by an even larger amount than subjects in the control.

11.  Communications that connect “growing concern” over vaccine risks to disbelief in evolution and climate change generate cultural polarization. Relative to their counterparts in a control condition, experiment subjects exposed to such a communication divided along lines that reflected their predispositions toward currently disputed societal risks.

12.  Factually accurate information on vaccine rates, when issued by the CDC, substantially corrects underestimation of vaccination rates. Exposure to a story patterned on the press statements that the CDC typically issues in connection with annual NIS updates resulted in a significant correction of experiment subjects’ underestimation of national vaccination coverage.

B. Normative and prescriptive conclusions

1.   Risk communicators—including journalists, advocates, and public health professionals—should refrain from conveying the false impression that a substantial proportion of parents or of the public generally doubts vaccine safety. Such information risks creating anxiety rather than dispelling it. Moreover, by aggravating underestimation of vaccination rates, communications of this nature obscure a signal that conveys public confidence in vaccine safety and stimulates reciprocal motivations to contribute to the collective good of herd immunity.

2.   Risk communicators should avoid resort to the factually unsupportable, polemical trope that links vaccine risk concerns to climate-change skepticism and to disbelief in evolution as evidence of growing societal distrust in science. Such rhetoric, in addition to being facile, risks generating an affective or symbolic link between vaccines and issues on which cultural polarization is currently a significant impediment to public science communication.

3.   Risk communicators, including public health officials and professionals, should aggressively disseminate true information on the historically high and continuing rates of childhood vaccination in the U.S. The high levels of vaccination in the U.S. are a science communication resource. That resource should be exploited, not obscured or dissipated.

4.   Because there is a chance that it would make mandatory vaccination policies a matter of partisan contestation, campaigns to promote legislative elimination or contraction of existing grounds for exemptions should be viewed with extreme caution. There is reason to believe—from real-world experience as well as the results of this study—that proposals to restrict nonmedical exemptions from existing mandates would generate partisan division in the public. As evidenced by the controversy over the HPV vaccine, such divisions disrupt the processes by which ordinary citizens recognize and orient themselves with respect to the best-available evidence on public-health and other risks. Accordingly, the potential for creating polarization over childhood vaccination risks is a cost that must be balanced against whatever benefit might be obtained from reforms in law aimed at reducing the already very low percentage of parents that exempt their children from mandatory vaccination.

5.   Vaccine-risk assessments and communication should not be based on creative extrapolations from general theories. Because decision-science mechanisms can be imaginatively manipulated to support a wide variety of explanations and prescriptions, it is a mistake to present theoretical syntheses of work in this field as a guide for action. Instead, conjectures informed by decision-science frameworks should be treated as hypotheses for empirical investigation.

6.   Hypotheses relating to vaccine-risk perceptions and vaccine-risk communication should be tested with valid empirical methods specifically suited to measuring matters of consequence. Opinion polls cannot be expected to generate significant insight into vaccine risk perceptions, either on the part of parents, whose responses are unreliable indicators of behavior, or the general public, in whom demographic and attitudinal measures fail to explain practically meaningful levels of variance. Rather, behavioral measures (including validated attitudinal indicators of behavior) should be used to gauge parental risk concern and fine-grained, local methods used to investigate the characteristics of enclaves of demonstrated vaccine hesitancy.

7.   The public health establishment should take the initiative to develop comprehensive proposals for better integrating the science of science communication into its culture and practices. Procedures should be adopted, within government public health agencies and within the medical profession, for making use of the best available empirical methods for anticipating and averting influences that distort public risk perceptions. The public health establishment should also propagate professional norms geared to curbing ill-informed and ill-considered forms of ad hoc risk perception by the media and by individual members of the public-health establishment. The most effective step to discouraging this form of feral risk communication is to populate the niche it now occupies with an empirically informed and systematically planned alternative.

Wednesday
Apr092014

More on "Krugman's symmetry proof": it's not whether one gets the answer right or wrong but how one reasons that counts 

Okay, I've finally caught my breath after laughing myself into a state of hyperventilation as a result of reading Krugman's latest proof (this is actually a replication of an earlier empirical study on his part) that ideologically motivated reasoning is in fact perfectly symmetric with respect to right-left ideology.

Rather than just guffawing appreciatively, it's worth taking a moment to call attention to just how exquisitely self-refuting his "reasoning" is!

There's the great line, of course, about how his "lived experience" (see? I told you, he's doing empirical work!) confirms that motivated cognition "is not, in fact, symmetric between liberals and conservatives."

But what comes next is an even more subtle -- and thus an even more spectacular! -- illustration of what it looks like when one's reason is deformed by tribalism: 

Yes, liberals are sometimes subject to bouts of wishful thinking. But can anyone point to a liberal equivalent of conservative denial of climate change, or the “unskewing” mania late in the 2012 campaign, or the frantic efforts to deny that Obamacare is in fact covering a lot of previously uninsured Americans?

Uh, no, PK. I mean seriously, no.

The test for motivated cognition is not whether someone gets the "right" answer but how someone assesses evidence.

A person displays ideologically motivated cognition when, instead of weighing evidence based on criteria related to its connection to the truth, he or she credits or dismisses it based on its conformity to his or her ideological predispositions.

Thus, if we want to use public opinion on some issue -- say, climate change -- to assess the symmetry of ideologically motivated reasoning, we can't just say, "hey, liberals are right, so they must be better reasoners."

Rather we must determine whether "liberals" who "believe" in climate change differ from "conservatives" who "don't" in how impartially they weigh evidence supportive of & contrary to their respective positions. 

How might we do that?  

Well, one way would be to conduct an experiment in which we manipulate the ideological motivation people with "liberal" & "conservative" values have to credit or dismiss one and the same piece of valid evidence on climate change.  

If "liberals" (it makes me shudder to participate in the flattening of this term in contemporary political discourse) adjust the weight they give this evidence depending on its ideological congeniality, that would support the inference that they are assessing evidence in a politically motivated fashion.  
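
Here is a minimal sketch, with wholly simulated data, of what that sort of experiment yields. The diagnostic quantity is the within-group gap in how convincing people find one and the same piece of evidence when it is framed as congenial versus uncongenial, not whether a group happens to hold the "right" position; the condition labels and effect sizes below are invented.

```python
import numpy as np
import pandas as pd

# Simulated experiment: the same piece of evidence is presented as supporting
# either the ideologically congenial or the uncongenial conclusion, and subjects
# rate how convincing it is (roughly a 1-7 scale). Data are made up.
rng = np.random.default_rng(4)
n = 1000
df = pd.DataFrame({
    "ideology": rng.choice(["liberal", "conservative"], n),
    "congenial": rng.integers(0, 2, n),   # 1 = evidence framed as congenial
})
df["rating"] = 4 + 1.2 * df["congenial"] + rng.normal(0, 1, n)

# Motivated reasoning shows up as a within-ideology gap between the congenial and
# uncongenial conditions for both groups.
print(df.groupby(["ideology", "congenial"])["rating"].mean().unstack())
```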

If in aggregate, in the real world, they happen to "get the right" answer, then they aren't to be commended for the high quality of their reasoning.  

Rather, they are to be congratulated for being lucky that a position they unreasoningly subscribe to happens to be true.

And vice versa if the "truth" happens (on this issue or any other) to align with the position that "conservatives" unreasoningly affirm regardless of the quality of the evidence they are shown.

That Krugman is too thick to see that one can't infer anything about the quality of partisans' reasoning from the truth or falsity of their beliefs is ... another element of Krugman's proof that ideological reasoning is symmetric across right and left!

For in fact, "the 'other side' is closed-minded" is one of the positions that partisans are unreasoningly committed to. 

One of the beliefs that they don't revise in light of valid evidence but rather use in lieu of truth-related criteria to assess the validity of whatever evidence they see.

This proposition is supported by real, honest-to-god empirical evidence -- of the sort collected precisely because no one's personal "lived experience" is a reliable guide to truth.

That PK is innocent of this evidence is-- another element of his proof that ideological reasoning is symmetric across right and left!

As is his unfamiliarity with studies that use the design I just suggested to test whether "liberals" are forming their positions on climate change and other issues in a manner that is free of the influence of politically motivated reasoning.  Not surprisingly, these studies suggest the answer is no.

But does that mean that all liberals who believe in climate change believe what they do because of ideologically motivated cognition? Or that only someone who is engaged in that particular form of defective reasoning would form that belief?

If you think so, then, despite your likely ideological differences, you & Paul Krugman have something in common: you are both very poor reasoners.

Tuesday
Apr082014

Finally: decisive, knock-down, irrefutable proof of the ideological symmetry of motivated reasoning

Sometimes something so amazingly funny happens that you have to pinch yourself to make sure you aren't really just a cellular automaton in a computer-simulated comedy world.

 

 

 

N = 800 Krugmans. From Kahan, Judgment & Decision Making, 8, 427-34 (2013).

 

Tuesday
Apr082014

Are Ludwicks more common in the UK?!

Well, much like the administrators of the Affordable Care Act, I’ve learned the hard way how difficult it can be to anticipate and manage an excited tidal wave of interest surging through the internet toward one’s web portal.

Yes, “tomorrow” has arrived, but because I’ve been inundated with so many 10^3’s of serious entries for the latest MAPKIA, I’ve been unable to process them all, even with the help of my CCP state-of-the-art “big data” MAPKIA automated processor [cut & paste: http://www.palantir.net/2001/tma1/wav/foolprf.wav]

So taking a page from the President’s playbook, I’m extending the deadline of “tomorrow” to “tomorrow,” which is when I’ll post the “results” of the “Where is Ludwick” MAPKIA. In the meantime, entries will continue to be accepted.

But while we wait, how about some related info relevant to an issue that came up in discussion of the ongoing MAPKIA?

In response to my observation that Ludwicks are “rare”—less than 3% of the U.S. population--@PaulMathews stated that “Ludwicks are not a rare species” in the UK but rather

are quite common. For example, two of our most prominent climate campaigners, Mark Lynas and George Monbiot, are pro-nuclear and pro-GMO.

Well it so happens that I have data that enables an estimation of the population frequency of Ludwicks—that is, individuals who are simultaneously (a) concerned about climate change risk but not much concerned about the risks of (b) nuclear power and (c) GM foods—in England.

Not the UK, certainly, but I think better evidence of what the true frequency is in the UK than reference to a list of commentators (indeed, compiling lists of “how many of x” one can think of is clearly an invalid way to estimate such things, given the obvious sampling bias involved, not to mention the abundant number of even people with very rare combinations of whatever in countries with populations in the tens or hundreds of millions). 

It turns out that Ludwicks are even rarer in England than in the U.S.  Consider:


Again, the figure is a scatterplot of survey respondents (1300 individuals from a nationally representative sample recruited to participate in CCP “cross-cultural cultural cognition” studies—including the one in our forthcoming paper “Geoengineering and Climate Change Polarization”) arrayed in relation to their perceptions of nuclear power and climate change risks.

I’ve defined a Ludwick as an individual whose scores on a 0-10 industrial strength risk perception measure  (ISRPM10) are ≥ 9 for global warming, ≤ 2 for nuclear power, and ≤ 2 for GM foods.

Those numbers are pretty close equivalents for the scores I used to compute U.S. Ludwicks on the 0-7 industrial strength risk perception measure (≥ 6, ≤ 2, & ≤ 2, respectively) in the data set I used for the MAPKIA (I determined equivalence by comparing the z-scores on the respective ISRPM7 and ISRPM10 scales).
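
For anyone curious how that kind of equivalence can be computed, here is a minimal sketch. The score vectors are simulated stand-ins, not the actual CCP samples, and it assumes that matching z-scores is a reasonable way to line up cutoffs across the two scales.

```python
import numpy as np

def equivalent_cutoff(source_scores, source_cutoff, target_scores):
    """Map a cutoff on one risk-perception scale onto another by matching z-scores.

    Purely illustrative: it assumes the two samples' score distributions are
    comparable enough that equal z-scores mark roughly equivalent positions.
    """
    z = (source_cutoff - np.mean(source_scores)) / np.std(source_scores)
    return np.mean(target_scores) + z * np.std(target_scores)

# Made-up score vectors standing in for the U.S. (0-7) and English (0-10) samples.
rng = np.random.default_rng(2)
us_isrpm7 = np.clip(rng.normal(4.5, 1.8, 1500), 0, 7)
eng_isrpm10 = np.clip(rng.normal(6.0, 2.5, 1300), 0, 10)

# With the real survey data, this is the calculation that motivates picking a
# 0-10 cutoff to match the >= 6 global-warming cutoff on the 0-7 scale; here the
# output depends entirely on the simulated distributions.
print(equivalent_cutoff(us_isrpm7, 6, eng_isrpm10))
```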

As I said, less than 3% of the US population holds the Ludwick combination of risk perceptions.

But in England, less than 2% do!

But @PaulMathews shouldn’t feel bad—it’s just not easy to gauge these things by personal observation! I trust my own intuitions, and those of any socially competent and informed observer (@PaulMathews certainly is), but verify with empirical measurement to compensate for the inevitably partial perspective any individual is constrained to have.

There are some other cool things that can be gleaned from this cross-cultural comparison—ones, in fact, that definitely surprised me but might well have informed @PaulMathews’ conjecture.

One is that there’s not nearly as much of an affinity between climate change risk perceptions and nuclear ones in England (r = 0.26, p < 0.01) as there is in the U.S. (r = 0.47, p < 0.01).

The reason that this surprised me is that in our study of “cross-cultural cultural cognition,” we definitely found that climate change risk perceptions in England fit the cultural-polarization profile (“hierarch individualists, skeptical” vs. “egalitarian communitarians, concerned”) that is familiar here.

Another thing: while the population frequency of Ludwicks is lower in England than in the U.S., the probability of being a Ludwick conditional on holding the nonconformist pairing of high concern for climate and low for nuclear risks is higher in England.

In the scatterplot of English respondents, I'm defining the "Monbiot region" as the space occupied by survey respondents whose ISRPM10 scores for global warming and nuclear power were ≥ 9 and ≤ 2, respectively.

The analogous neighborhood in the U.S. is the “Ropeik region” (global warming ISRPM7 ≥ 6 and nuclear power ISRPM7 ≤ 2).

Whereas about 33% of the residents of the U.S. Ropeik region are Ludwicks, over 60% of the residents of England's Monbiot region are.
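
Here's a sketch of how those conditional frequencies could be computed, again using the hypothetical DataFrames and columns from above:

```python
# Same hypothetical DataFrames (us, eng) and columns as above.
def in_region(df, gw_cut, nuke_cut):
    """Respondents high on climate-change risk and low on nuclear risk."""
    return (df["gw_risk"] >= gw_cut) & (df["nuke_risk"] <= nuke_cut)

ropeik = in_region(us, gw_cut=6, nuke_cut=2)    # U.S. "Ropeik region"
monbiot = in_region(eng, gw_cut=9, nuke_cut=2)  # English "Monbiot region"

# Share of each region's residents who are also low on GM-food risk, i.e. full Ludwicks
print(f"Ludwicks among Ropeik-region residents: {(us.loc[ropeik, 'gm_risk'] <= 2).mean():.0%}")
print(f"Ludwicks among Monbiot-region residents: {(eng.loc[monbiot, 'gm_risk'] <= 2).mean():.0%}")
```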

Huh!

What does this signify?

No doubt something interesting, but I’m not sure what!

Do others have views? Especially people who have a better grasp of English cultural meanings and who would be more likely than I to venture sensible interpretations (ones that would, of course, still need to be empirically verified)?

Could this information be of any use in constructing a successful Ludwick profile in the US (or in England for that matter)?

Saturday
Apr052014

Cognitive illiberalism & expressive overdetermination ... a fragment

from Kahan, D.M. The Cognitively Illiberal State. Stan. L. Rev. 60, 115-154 (2007).

Conclusion

The nature of political conflict in our society is deeply paradoxical. Despite our unprecedented knowledge of the workings of the natural and social world, we remain bitterly divided over the dangers we face and the efficacy of policies for abating them. The basis of our disagreement, moreover, is not differences in our material interests (that would make perfect sense) but divergences in our cultural worldviews. By virtue of the moderating effects of liberal market institutions, we no longer organize ourselves into sectarian factions for the purpose of imposing our opposing visions of the good on one another. Yet when we deliberate over how to secure our collective secular ends, we end up split along exactly those lines.

The explanation, I’ve argued, is the phenomenon of cultural cognition. Individual access to collective knowledge depends just as much today as it ever did on cultural cues. As a result, even as we become increasingly committed to confining law to attainment of goods accessible to persons of morally diverse persuasions, we remain prone to cultural polarization over the means of doing so. Indeed, the prospect of agreement on the consequences of law has diminished, not grown, with advancement in collective knowledge, precisely because we enjoy an unprecedented degree of cultural pluralism and hence an unprecedented number of competing cultural certifiers of truth.

If there’s a way to mitigate this condition of cognitive illiberalism, it is by reforming our political discourse. Liberal discourse norms enjoin us to suppress reference to partisan visions of the good when we engage in political advocacy. But this injunction does little to mitigate illiberal forms of status competition: because what we believe reflects who we are (culturally speaking), citizens readily perceive even value-denuded instrumental justifications for law as partisan affirmations of certain worldviews over others.

Rather than implausibly deny our cultural partiality, we should embrace it. The norm of expressive overdetermination would oblige political actors not just to seek affirmation of their worldviews in law, but to cooperate in forming policies that allow persons of opposing worldviews to do so at the same time. Under these circumstances, citizens of diverse cultural orientations are more likely to agree on the facts—and to get them right—because expressive overdetermination erases the status threats that make individuals resist accurate information. But even more importantly, participation in the framing of policies that bear diverse meanings can be expected to excite self-reinforcing, reciprocal motivations that make a culture of political pluralism sustainable.

Ought, it is said, implies can. Contrary to the central injunction of liberalism, we cannot, as a cognitive matter, justify laws on grounds that are genuinely free of our attachments to competing understandings of the good life. But through a more sophisticated understanding of social psychology, it remains possible to construct a form of political discourse that conveys genuine respect for our cultural diversity.

Friday
Apr042014

Let's keep discussing Ludwick!

Nothing to say today that would be as interesting as the points people are making in response to the "MAPKIA!" challenge in "yesterday's" post. Join in the discussion -- & submit your entry! It's a little bit like doing presidential polls 2.5 yrs in advance of the next election, but @Jen is definitely the frontrunner at this stage.
