Tuesday, October 7, 2014

What I believe about teaching "belief in" evolution & climate change

I was corresponding with a friend, someone who has done really great science education research, about the related challenges of teaching evolution & climate science to high school students.

Defending what I've called the "disentanglement principle"-- the obligation of those who are responsible for promoting comprehension of science to create an environment in which free, reasoning people don’t have to choose between knowing what’s known and being who they are-- I stated that I viewed "the whole concept of 'believing' [as] so absurd . . . ."  

He smartly challenged me on this:

I must admit, however, that I do not find the concept of believing to be absurd. I, for example, believe that I have been married to the same woman since I was XX years old. I also believe that I have XX children. I also believe that the best theory to explain modern day species diversity is Darwin's evolution theory. I do not believe the alternative theory called creationism. Lastly, I believe that the Earth is warming due largely to human-caused CO2 emissions. These beliefs are the product of my experience and a careful consideration of the alternatives, their predictions, and a comparison of those predictions and the evidence. This is not a matter of who I am (for example, it matters not whether I am a man or a woman, straight or gay, black or white) as much as it is a matter of my understanding of how one comes to a belief in a rational way, and my willingness to not make up my mind, not to form a belief, until all steps of that rational way have been completed to the extent that no reasonable doubt remains regarding the validity of the alternative explanations that have been advanced.

His response made me realize that I've been doing a poor job in recent attempts to explain why it seems to me that "belief in" evolution & global warming is the wrong focus for imparting and assessing knowledge of those subjects.

I don't think the following reply completely fixes the problem, but here is what I wrote back:

I believe you are right! 

In fact, I generally believe it is very confused and confusing for people to say "X is not a matter of belief; it's a fact ....," something that for some reason seems to strike people as an important point to make in debates about politically controversial matters of science. 

Scientists "believe" things based on evidence, as you say, and presumably view "facts" as merely propositions that happen to be worthy of belief at the moment based on the best available evidence. 

I expressed myself imprecisely, although it might be the case that even when I clarify you'll disagree.  That would be interesting to me & certainly something I'd want to hear and reflect on. 

What I meant to refer to as "absurd" was the position that treats, as an object of science education, students' affirmation of "belief in" a fact that has been transformed by cultural status competition into nothing more than an emblem of affiliation.

That's so in the case of affirmation of "belief in" evolution. To my surprise, actually, I am close to concluding that exactly the same is true at this point of affirmation of "belief in" global warming. 

Those who say they "believe in" climate change are not more likely to know anything about it or about science generally than those who say they don't "believe"-- same as in the case of evolution.  

Saying one "disbelieves" those things, in contrast, is an indicator (not a perfect one, of course) of having a certain cultural identity or style-- one that turns out to be unconnected to a person's capacity to learn anything.  

So those who say that one can gauge anything about the quality of science instruction in the US from the %'s of people who say that they "believe in" evolution or climate change are, in my view, seriously mistaken.

Or so I believe--very strongly--based on my current assessment of the best evidence, which includes [a set of extremely important studies] of the effective teaching of evolution to kids who "don't believe" it. I'd be hard pressed to identify a book or an article, much less a paragraph, that conveyed as much to me about the communication of scientific knowledge as this one:

[E]very teacher who has addressed the issue of special creation and evolution in the classroom already knows that highly religious students are not likely to change their belief in special creation as a consequence of relatively brief lessons on evolution. Our suggestion is that it is best not to try to [change students’ beliefs], not directly at least. Rather, our experience and results suggest to us that a more prudent plan would be to utilize instruction time, much as we did, to explore the alternatives, their predicted consequences, and the evidence in a hypothetico-deductive way in an effort to provoke argumentation and the use of reflective thought. Thus, the primary aims of the lesson should not be to convince students of one belief or another, but, instead, to help students (a) gain a better understanding of how scientists compare alternative hypotheses, their predicted consequences, and the evidence to arrive at belief and (b) acquire skill in the use of this important reasoning pattern—a pattern that appears to be necessary for independent learning and critical thought.

Maybe you now have a better sense of what I meant to call "absurd." But it now occurs to me that "absurd" really doesn't capture the sentiment I meant to express.

It makes me sad to think that some curious student might not get the benefit of knowing what is known to science about the natural history of our (and other) species because his or her teacher made the understandable mistake of tying that benefit to a gesture the only meaning of which for that student in that setting would be a renunciation of his or her identity. 

It makes me angry to think that some curious person might be denied the benefit of knowing what's known by science precisely because an "educator" or "science communicator" who does recognize that affirmation of "belief in" evolution signifies identity & not knowledge nevertheless feels that he or she is entitled to extract this gesture of self-denigration as an appropriate fee for assisting someone else to learn.

Such a stance is itself a form of sectarianism that is both illiberal and inimical to dissemination of scientific knowledge. 

I have seen that there are teachers who know the importance of disentangling the opportunity to learn from the necessity to choose sides in a mean cultural status struggle, but who don't know how to do that yet for climate science education.  They want to figure out how to do it; and they of course know that the way to figure it out is to resort to the very forms of disciplined observation, measurement, and inference that are the signatures of science.

I know they will succeed.  And I hope other science communication professionals will pay attention and learn something from them.

Sunday, October 5, 2014

Weekend update: New paper on why "affirmative consent" (in addition to being old news) does not mean "only 'yes' means yes" 

As I explained in a recent post, the media/blogosphere shit storm over the "affirmative consent" standard Calif just mandated for campus behavioral codes displays massive unfamiliarity with existing law & with tons of evidence on how law & norms interact.  

First, the "affirmative consent" standard isn't a radical "redefinition" of the offense of rape.  It's been around for three decades.

Second, contrary to what the stock characters who are today reprising the roles from the 1990s "sexual correctness" debate are saying, an "affirmative consent" standard certainly doesn't require a verbal "yes" to sexual intercourse. It simply requires communication of consent by acts or words.

Third, for exactly that reason it hasn't changed outcomes in cases in which decision makers--jurors, judges, university disciplinary board members, etc. -- assess date rape cases.

Because members (male & female) of certain cultural subcommunities subscribe to norms in which a woman can "consent" to sex despite saying "no," decisionmakers who interpret facts against the background of those norms will still treat various forms of behavior -- including suggestive dress, consensual sexual behavior short of intercourse, etc.-- as "communicating" that a woman who says "no" really meant yes.

When those individuals apply the "affirmative consent" standard, they reach the same result that they would have reached under the traditional common-law definition -- or indeed that they would have reached if they were furnished no definition of rape at all. 

Today I happened to come across an interesting new paper that presents a review of the literature on these dynamics & adds a relevant analysis of how cultural norms influence the testimony of the parties.

In Honest False Testimony in Allegations of Sexual Offenses, J. Villalobos, Deborah Davis, & Richard Leo explain why the same norms that influence decisionmakers' perceptions of "consent" in date rape cases--including ones in which a woman says no--are likely to shape the perceptions of the parties, whose conflicting "honest" testimony will create doubt on the part of decisionmakers. This dynamic, they conclude, helps explain why "cultural predispositions often outweigh legal definitions of sexual consent when individuals make assessments of whether consent has been granted."

People genuinely interested in this issue might want to read it.  

Those playing the stock characters in the media remake of the 1990s (and earlier) reform debate probably won't-- if they had any interest in what the law actually is and how social norms have constrained enforcement of reform formulations of rape, they'd have already been familiar with much of this literature & would have recognized that their positions are actually divorced from reality.

Again, changing behavior on campuses requires changing norms.  Moreover, rather than being an effective instrument for norm change, legal reforms--including affirmative consent standards-- have in the past been rendered impotent b/c of the impact that norms have in shaping decisionmakers' understanding of what those standards mean.

This is a hard issue.  

Maybe a reform like Estrich's "no means no" standard-- an irrebuttable presumption that uttering "no" constitutes lack of consent-- would actually change results by blocking decisionmakers' reliance on contrary social norms.  There's some experimental evidence that this is so.

Or maybe (as some argue) it would produce a backlash that would further entrench existing norms.

Accordingly, maybe the emphasis should be on trying to promote forms of behavior that, through one or another mechanism of social influence, will change norms on campuses.  

That sort of thinking is likely the motivation for the Obama Administration's new "It's on Us" social marketing  campaign.

But one thing is clear: nothing will change if people ignore evidence -- on what the law is, on social norms, and on what real-world experience shows about how the two interact -- and instead opt to engage this issue through platitudinous claims the only function of which is to signify whose "team" people are on in a culture conflict only tangentially connected to the problem at hand.

 

Thursday, October 2, 2014

What happens to Pat's perceptions of climate change risks as his/her science literacy score goes up?

A curious and thoughtful correspondent asks:

A while ago, I had read your chart with two lines in red and blue, showing the association between scientific literacy and opinion on climate change separately for liberals and conservatives. [A colleague] gave it favorable mention again in her excellent presentation at the * * * seminar today. 

The subsequent conversation reminded me that I had always wanted to see in addition the simple line chart showing the association between scientific literacy and opinion on climate change for all respondents (without breakdown for liberals and conservatives). Have you ever published or shared that? Please share chart, or, if you haven't ever run that one, please share the data?
Much thanks!

Sure!  

The line that plots the relationship for the sample as a whole will be exactly in between the other two lines.  The "right/left" measure is a composite Likert scale formed by summing the (standardized) responses to a 5-point left-right ideology measure & a 7-point party-identification measure. In the figures you are referring to, the relationship between science literacy and climate change risk perception is plotted separately for subjects whose scores are above and below the mean on that scale.

I've added a line plotting the "sample mean" relationship between global warming risk perceptions (measured on the "Industrial Strength Risk Perception Measure") and science comprehension to figures for two data sets: one in which subjects' science comprehension was measured with "Ordinary Science Intelligence 1.0" (used in the CCP Nature Climate Change study) & another in which it was measured with OSI_2.0.

click me for more detail! & for the time of your life, I promise!
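(Here, by the way, is a minimal sketch of how a display like that can be put together -- illustrative only, not the CCP analysis itself. The column names "ideology_5pt", "party_id_7pt", "sci_literacy", and "gw_risk", along with the simple linear fits, are my own placeholders; swap in whatever measures and smoothers you prefer.)

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def zscore(x):
    return (x - x.mean()) / x.std()

def plot_conditional_and_overall(df: pd.DataFrame):
    # Composite right/left scale: sum of standardized ideology & party-id items
    df = df.copy()
    df["right_left"] = zscore(df["ideology_5pt"]) + zscore(df["party_id_7pt"])

    groups = {
        "left of mean": df[df["right_left"] < df["right_left"].mean()],
        "right of mean": df[df["right_left"] >= df["right_left"].mean()],
        "sample mean": df,  # the extra line the correspondent asked for
    }
    xs = np.linspace(df["sci_literacy"].min(), df["sci_literacy"].max(), 100)
    for label, sub in groups.items():
        slope, intercept = np.polyfit(sub["sci_literacy"], sub["gw_risk"], deg=1)
        plt.plot(xs, intercept + slope * xs, label=label)

    plt.xlabel("science comprehension (e.g., OSI)")
    plt.ylabel("global warming risk perception (ISRPM)")
    plt.legend()
    plt.show()
```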

I'm sure you can see the significance (practical, as well as "statistical") of this display for the question you posed, viz., "What's the impact of science literacy in general, for the population as a whole, controlling for partisanship, etc.?"

It's that the question has no meaningful answer.

The main effect is just a simple average of the opposing effects that science comprehension has on climate change risk perceptions (beliefs, etc.) conditional on one's cultural identity (for which right-left political outlooks are only one of many measures).

If the effect is "positive" or "negative," that just tells you something about the distribution of cultural affinities, the relative impact of such affinities on risk perceptions, &/or differences in the correlation between science comprehension and cultural outlooks (which turn out to be trivially small, too) in that particular sample.

Maybe this scatterplot can get this point across visually:

 

In sum, because science comprehension interacts with cultural identity and b/c everyone identifies more or less with one or another cultural group, talking about the "main" effect is not a meaningful thing to do.  All one can say is, "the effect of science comprehension on perceptions of climate change risk depends on who one is." 

Or put it this way: the question, "What's the effect of science comprehension in general, for the population as a whole?" amounts to asking what happens to Pat as he/she becomes more science comprehending.  Pssssst . . . Pat doesn’t exist!
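If it helps, here is a toy simulation of the point (made-up data, not the CCP sample): give two cultural groups equal-but-opposite conditional effects and the pooled "main effect" is nothing but a weighted average of the two slopes -- its size, and even its sign, is set by the group mix rather than by anything about science comprehension itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_main_effect(share_group_a, n=20_000):
    """OLS slope of risk perception on science comprehension for the pooled sample."""
    group_a = rng.random(n) < share_group_a      # cultural identity (A vs. B)
    sci = rng.normal(size=n)                     # science comprehension score
    slope = np.where(group_a, +1.0, -1.0)        # opposing conditional effects
    risk = slope * sci + rng.normal(scale=0.5, size=n)
    return np.polyfit(sci, risk, deg=1)[0]

for share in (0.4, 0.5, 0.6):
    print(f"group A share = {share:.1f} -> pooled slope = {pooled_main_effect(share):+.2f}")
# Prints roughly -0.20, 0.00, +0.20: each group's conditional effect never
# changes, but the pooled "main effect" shrinks to zero and flips sign purely
# as a function of sample composition.
```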

 

Again, I'm sure you get this now that you've seen the data, but it's quite remarkable how many people don't.  How many want to seize on the (trivially small) "main effect" & if it happens to be sloped toward their group's position, say "See! Smart people agree with our group! Ha ha! Nah, nah, boo, boo!" 

They end up looking stupid. 

Not just because anyone who thinks about this can figure out what I've explained about the meaninglessness of "main effect" when the data display this relationship. 

But also because when we see this relationship and when the "main effect" is this small, that effect is likely to shift direction the next time someone collects data, something that could happen for any of myriad non-consequential reasons (the proportion of cultural types in the sample, random variation in the size of the interaction effect, slight modifications in the measure of science literacy). At that point, those who proclaimed themselves the "winners" of the last round of the "whose is bigger" game look like fools (they are, aren't they?).

But like I said, it happens over & over & over & over ....

But how about some more information about Pat? And about his/her cultural worldview & ideology & their effect on his/her beliefs about climate change?  Why not-- we all love Pat!

 

 

Monday, September 29, 2014

Why the science of science communication needs to go back to high school (& college; punctuated with visits to the museum & the science film-making studio)

I got to be the opening act for former Freud expert & current stats legend Andrew Gelman (who focused mainly on stats but, so as not to disappoint the expectations of 85% of the audience, did mention Freud) at the SENCER symposium in DC.

Of course, the audience really loved him b/c he spoke, among other things, about how commonplace yet weird it is that people who teach students about validity, reliability, sample selection & other essentials of empirical measurement never stop to examine whether the methods they are using to impart such knowledge are valid, reliable, informed by unbiased sample of observations etc. 

Degrading and ultimately destroying this “self-measurement paradox” is at the core of SENCER’s mission!

And as often happens when one goes to war with an evil & devious enemy, “mission creep” is setting in—which as far as I'm concerned is a very good thing in SENCER's case.

[Photo caption: We both got sent to Principal's office for spitball fight *we* didn't even start!]

An extension of one of the themes of SENCER’s summer institute, the session I did with Gelman was focused on how the self-measurement paradox affects self-government. Our democracy fails to make use of the best evidence it has on myriad issues—from the vaccination of adolescents for HPV to rising global temperatures—as a result of pervasive inattention to empirical evidence on how ordinary citizens come to know what’s known by science.

Or so I argued (slides here)—and I think Gelman was broadly in agreement, although he worried whether the thought-free, “which button do I push?” culture in the social sciences is rendering them incapable of helping us to gain any insight into these & other matters. . . .

For my part, though, I addressed the question of where SENCER should be focusing its attention if it plans to “scale up” its focus from the science classroom, the museum, and the science programming studio to the democratic political arena.

My answer: the science classroom, science museum, and science programming studio!

The argument wasn’t any variant of the “knowledge deficit” thesis—the idea that the reason we see persistent political conflict on issues like climate change or gun control is that people lack either familiarity with the best evidence on such issues or the capacity to make sense of it.

Rather it was that the sites of formal and informal science education[1] are ideal laboratories for studying how to counteract the dynamics now stifling constructive public engagement with policy-relevant science.

The basis of this claim is the central thesis of The Measurement Problem.  The data reported in that paper support the conclusion that what people believe about whether human activity is really causing global warming doesn't reveal what they know but rather expresses who they are.

In fact, the vast majority of climate change “believers” and climate change “skeptics” lack genuine comprehension of even the most elementary aspects of climate change science.  They actually get (believers and skeptics alike) that adding CO2 to the atmosphere heats the atmosphere—but think that CO2 emissions will kill plant life by stifling photosynthesis.  They all know (again, believers and skeptics) that climate scientists believe that increased global warming will result in coastal flooding—but mistakenly believe that climate scientists also think such warming will increase the incidence of skin cancer....

There is a small segment of highly science-literate citizens who can reliably identify what the prevailing scientific view is on the sources and consequences of climate change.  But they are no less polarized than the rest of society on whether human activity is causing global warming!

What people “believe” about global warming indicates, in a measurement sense, the sort of person they are in the same way that political party identification, religiosity, and cultural worldviews do.  The positions they take are, in fact, a way for them to convey their membership in & loyalty to affinity groups that are integral to their social status and to their simple everyday interactions.

Sadly, “who are you, whose side are you on?" is what popular political discourse on the "climate change question" measures, too.

Al Gore is right that the climate debate is “a struggle for the soul of America”—and that is exactly the problem.  If we could disentangle the question “what do we know” from the question “whose side are you on,” then democratic engagement with the best evidence would be able to proceed.  Of course, at that point what to do would still depend massively on what diverse people care about; but fashioning policy amidst differences of that sort is a perfectly ordinary part of democratic life.

But as I explained in the talk, this sort of reason-preempting entanglement of empirical facts in antagonistic cultural meanings is not new for science educators.  They’ve had to deal with it most conspicuously in trying to teach students about evolution.

What people “believe” about evolution likewise has zero correlation with what people know about the scientific evidence on the natural history of human beings or about any other insight human beings have acquired by use of science’s signature methods of observation, measurement, and inference.  “Belief” and “disbelief,” too, are expressions of identity.

But precisely because that’s what they are—precisely b/c free and reasoning people predictably, understandably use their reason to form and persist in positions that advance their stake in maintaining bonds with others who share their outlooks—the teaching of evolution is fraught.  I’m not talking about the politics of teaching evolution; that’s fraught, too, of course.  I’m talking about the challenge that a high school or college instructor faces in trying to make it possible for students who live in a world where positions on evolution express who they are to actually acquire knowledge and understanding of what it is science knows about the natural history of our species.

To their immense credit, science education researchers have used empirical methods to address this challenge.  What they’ve discovered is that a student’s “disbelief” in evolution in fact poses no barrier whatsoever to his or her learning of how random mutation and genetic variance combine with natural selection to propel adaptive changes in the forms of living creatures, including humans. 

After mastering this material, the students who said they “disbelieved” still say they “disbelieve” in evolution.  That’s because what people say in response to the “do you believe in evolution” question doesn’t measure what they know; it measures who they are. 

Indeed, the key to enabling disbelievers to learn the modern synthesis, this research shows, is to disentangle those two things—to make it plain to students that the point of the instruction isn’t to make them change their “beliefs” but to impart knowledge; isn’t to make them into some other kind of person but to give them evidence along with the power of critical discernment essential to make of it what they will.

In my SENCER talk, I called this the “disentanglement principle”: those who are responsible for promoting comprehension of science have to create an environment in which free, reasoning people don’t have to choose between knowing what’s known and being who they are.

That’s going to be a huge challenge for classroom science teachers as well as for museum directors and documentary filmmakers and other science-communication professionals as they seek to enable the public—all of it, regardless of its members’ diverse identities—to understand what science knows about climate.

And they have shown, particularly in the science education domain, that they know the value of using valid empirical methods to implement the disentanglement principle.

SENCER, because it is already very experienced in facilitating empirical investigation aimed at improving the craft norms of science educators, should definitely be supporting science educators, formal and informal, in meeting the challenge of figuring out how to disentangle “who are you, what side are you on” from “what do we know” in the communication of climate science.

And it should be doing exactly that, I argued, as a means of satisfying SENCER’s own goal of combatting the “self-measurement paradox” in democratic politics!

The entanglement problem that science educators (formal and informal) face is exactly the one that is impeding constructive public engagement with climate change and other culturally polarizing issues that turn on policy-relevant science.  How to disentangle identity and knowledge is exactly what those who study science communication in democratic politics need to investigate by valid empirical means.

Valid empirical study of these dynamics, moreover, demands designs and measures that actually engage them.

Too much of the work being done on public opinion & climate change, in my view, lacks this sort of validity.  Indeed, the mistaken belief that one can “move the needle” on “belief in climate change” by furnishing people with “information,” including the existence of “scientific consensus” on global warming (something polarized citizens already know about), is a consequence of over-reliance on public opinion surveys that presuppose flawed theories about the nature of public conflict in this area.

In a series of recent posts, I discussed the concept of external validity—the correspondence, essentially, between study designs and the sort of real-world conditions that those studies are supposed to be modeling.

Neil Stenhouse very usefully supplemented the series with a discussion of the “translation science” methods featured in public health and other disciplines to bridge the inevitable gap between externally valid lab studies and the real-world settings to which lab insights need to be adapted (indeed, disregard of this issue is another serious deficit in current science of science communication work).

The dynamics that must be understood to implement the “disentanglement principle” in science classrooms, science museums, and science documentary studios are, in my view, the same ones that must be understood to dispel cultural polarization over decision-relevant science in democratic politics.  Accordingly, empirical investigations conducted in those educational settings are the ones most likely to be both externally valid and amenable to adaptation to democratic policymaking via field-based “translation science” studies.

To illustrate this point, I discussed in my talk how the “disentanglement principle” has informed CCP field studies conducted on behalf of the Southeast Florida Climate Compact, whose success, I think, reflects the skill of its members in focusing citizens' attention on the unifying question of “what do we know” & avoiding the divisive question “who are you, whose side are you on?” that dominates the national climate debate.

In sum, science of science communication researchers working on our democracy’s science communication problem need to go back to high school, and to college.  They should also be spending more time in museums and science filmmaking studios, collaborating with the professionals there on empirical investigation of efforts to implement the “disentanglement principle.”

Or at least that's how things now look to me.

What do you think? 


[1] Actually, I think the concept of “informal science education” is kind of goofy; science museums and science tv & internet programming respond to the public’s appetite to apprehend what’s known, not a societal need for extension courses!

Friday, September 26, 2014

Are military investigators culturally predisposed to see "consent" in acquaintance rape cases?

This is the last (unless it isn't) installment in a series of posts on cultural cognition and acquaintance rape. The first excerpted portions of the 2010 CCP study reported in the paper Culture, Cognition, and Consent: Who Sees What and Why in Acquaintance Rape Cases, 158 U. Penn. L. Rev. 729 (2010). The next, drawing on the findings of that study, offered some reflections on the resurgence of interest in the issue of how to define "consent" in the law generally and in university disciplinary codes.

Below are a pair of posts. The first is by Prof. Eric Carpenter, who summarizes his important new study on how cultural predispositions could affect the perceptions of the military personnel involved in investigating and adjudicating rape allegations. The second presents some comments from me aimed at identifying a set of questions--some empirical and methodological, and some normative and political--posed by Carpenter's findings.

Culture, Cognition & Consent in the U.S. Military


The American military is in a well-publicized struggle to address its sexual assault problem.  In 1991, in the wake of the Tailhook scandal, military leaders repeatedly and publicly assured Congress that they would change the culture that previously condoned sexual discrimination and turned a blind eye to sexual assault.

Over the past two decades, new sexual assault scandals have been followed by familiar assurances, and Congress’s patience has finally run out.  As a result, the Uniform Code of Military Justice (UCMJ) is currently undergoing its most significant restructuring since it went into effect in 1951.  The critical issue is who is going to make the decisions in these cases: commanders, as is the status quo, or somebody else, like military lawyers or civilians.

What does any of that have to do with the Cultural Cognition Project, you might ask?  Well, I was serving as a professor at the Army's law school when I read Dan's article, Culture, Cognition, and Consent.

Those of us at the school were working very hard to train military lawyers and commanders on the realities of sexual assault and to dispel rape myths.  At a personal level, I was often frustrated by the resistance many people showed to this training, particularly the military lawyers.  I suspected this was because rape myths are rooted in deeply-held beliefs about how men and women should behave, and I could not reasonably expect to change those beliefs in a one-hour class.

One of Dan's findings, broadly summarized, was that those who held relatively hierarchical worldviews agreed to a lesser extent than those with relatively egalitarian worldviews that the man in a dorm-room rape scenario should be found guilty of rape. 

My reaction to his finding was a mixture of "ah-ha" and "uh-oh."  The military is full of hierarchical people. 

Continue reading

 
Is military cultural cognition the same as public cultural cognition? Should it be?


I’m really glad Eric Carpenter did this study.  I have found myself thinking about it quite a bit in the several weeks that have passed since I read it.  The study, it seems to me, brings into focus a cluster of empirical and normative issues critical for making sense of cultural cognition in law generally.  But because I think it’s simply not clear how to resolve these issues, I'm not certain what inferences—empirical or moral—can be drawn from Eric’s study.



 

Thursday, September 25, 2014

Date-rape debate deja vu: the script is 20 yrs out of date

There's definitely a new strategy being deployed to combat sexual assault on college campuses.

Alongside it, however, is a debate that is neither new nor interesting.

On the contrary, it features a collection of stock characters who appear to have spent the last twenty years at a Rip van Winkle slumber party.

The alarm bell that woke them up was the Obama Administration's two-prong initiative to reduce campus sexual assaults.

The first part aims to pressure universities to more aggressively enforce their own disciplinary rules against sexual assault.

The second seeks to activate campus social norms. The goal of the White House’s “It’s on Us” campaign is to promote a shared sense of responsibility, particularly among male students, to intervene personally when they observe conditions that seem ripe for coercive sexual behavior.

The initiative reflects a sophisticated appreciation of what over a quarter century of evidence has shown about the limits of formal penalties in reducing the incidence of nonconsensual sex.

From the 1980s onward, numerous states enacted reforms eliminating elements of the traditional common law definition of rape that advocates (quite plausibly) thought were excusing men who disregard explicit, unambiguous verbal nonconsent (“No!”) to sex.

These reforms, empirical researchers have concluded, have had no observable impact on the incidence of rape (Clay-Warner & Burt 2005; Schulhofer 1998).

One likely reason is the tendency of people to conform their understanding of legal definitions of familiar crimes—robbery, burglary, etc.—to “prototypes,” or socialized understandings of what those offenses consist in.  Change the legal definition, and people will still find the elements to be satisfied depending on the fit between the facts at hand and their lay prototype (Smith 1991).

A CCP study found exactly this effect for reform definitions of rape (Kahan 2010). 

In a mock jury experiment based on an actual rape prosecution, the likelihood subjects would vote to convict a male college student who had intercourse with a female student who he admitted was continually saying “no” was 58% among the large, nationally representative sample.

That probability did not vary significantly (in statistical or practical terms) regardless of whether the subjects were instructed to apply the traditional common-law definition of rape (“sexual intercourse by force or threat of force without consent”); a “strict liability” alternative that eliminated the “reasonable mistake of fact” defense; or a reform standard, in use in multiple states, that both eliminates the "force or threat" element and the mistake of fact defense and in addition uses an "affirmative consent” standard (“words or overt actions indicating a freely given agreement to have sexual intercourse”).

Indeed, the likelihood that subjects instructed to apply one of these standards would convict didn’t differ meaningfully from the likelihood that subjects furnished no definition of rape at all would.
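For readers who want a feel for the kind of comparison being described, here is a sketch of one simple way to test whether conviction rates differ across instruction conditions. The cell counts below are hypothetical placeholders (each works out to roughly 58% guilty), not the study's actual data.

```python
from scipy.stats import chi2_contingency

# rows = instruction condition, columns = [guilty votes, not-guilty votes]
# (hypothetical counts for illustration only)
observed = [
    [145, 105],   # traditional common-law definition
    [147, 103],   # strict-liability variant
    [144, 106],   # "affirmative consent" reform standard
    [146, 104],   # no definition of rape supplied at all
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A large p-value is what "no significant difference across standards" looks
# like in data of this form.
```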

Interestingly, if one looks at case law, the same effect seems to apply to judges.  When legislators reform one or another aspect of the common-law definition, courts typically reinterpret the remaining elements in a manner that constrains any expansion of the law's reach (Kahan 2010).

One could reasonably draw the conclusion that changing the rules won't work unless one first changes norms (Baker 1999).  I think that's what the Obama Administration believes.

The stock characters, in contrast, believe a lot of weird things wholly unconnected to the evidence on laws, norms, and sexual assault.

In a goofy NY Times Op-ed entitled “ ‘Yes’ Is Better Than ‘No,’ ” e.g., Gloria Steinem and Michael Kimmel incongruously call for replacing the “prevailing standard” of “no means no” with the “affirmative consent” standard that California has recently mandated its state universities use.

To start, "No means no" is not the "prevailing standard." It isn't the law anywhere.

In addition, an "affirmative consent" standard, which is already being used in various jurisdictions, does not require an "explicit 'yes'" in order to support a finding of "consent."

What sorts of words and behavior count as communicating “affirmative, conscious, and voluntary agreement to engage in sexual activity" are for the jury or administrative factfinder to decide.  

If such a decisionmaker believes that women sometimes say "no" when they "really" do intend to consent to sex, then that judge, juror, or college disciplinary board member necessarily accepts the view that verbally protesting women can communicate "yes" by other means, such as dressing provocatively, voluntarily accompanying the alleged assailant to a secluded space, engaging in consensual behavior short of intercourse etc.

Because it doesn't genuinely constrain decisionmakers to treat "no" as "no" to any greater extent than it constrains men to do so, "affirmative consent," evidence shows, hasn't changed the outcomes in such cases.

In fact, the standard California is mandating for university disciplinary proceedings— “affirmative, conscious, and voluntary agreement to engage in sexual activity”—is not meaningfully different from the one that already exists in California penal law (“positive cooperation in act or attitude” conveyed “freely and voluntarily”). If there's a problem with the current standard, this one won't fix it.

The "affirmative consent" standard's failure to block reliance on the social understanding that "no sometimes means yes" is exactly the problem, according to some people who actually know what the law is and how it works. Their proposal, presented by Susan Estrich in her landmark book Real Rape (1987), is that the law simply treat the uttering of the word "no" as irrebuttable proof of lack of consent.  That would prevent decisionmakers from relying on social conventions implying that women can "voluntarily," "consciously," "freely," "affirmatively," etc., communicate consent even when they say no.

The CCP study furnishes some support for thinking this sort of standard might well change something. In the mock juror experiment, the only standard that increased the probability that study participants would find the defendant guilty was Estrich's "no means no" standard.

It would be really useful to have some real-world evidence, too.  But again, far from being the "prevailing standard," "no means no" is not genuinely how any state defines lack of consent for sexual assault.

Are Kimmel & Steinem really arguing with those who propose such a standard? No; they simply aren't talking to anyone who actually knows what the law is or how it has worked for the last quarter century.

Same for those playing the other stock characters.

One of these is the deeply concerned law professor. Picking up the lines of a twenty-year-old script, he assures us that he knows how very very serious rape is. Nevertheless, he is quite worried that the “vagueness” of the affirmative consent standard will subject men who are behaving perfectly consistently with social convention to the risk of punishment. Requiring proof of something clear like "force or threat of force," he insists, is essential to avoid such a perverse outcome.

Again, the reforms opposed by the angst-ridden professor have been in place in many jurisdictions for decades. They don’t change how juries and courts decide cases relative to the (equally vague!) traditional definition of the offense of rape or any other definition that is actually in use.  Because decisionmakers construe reform provisions consistent with the social prototype of rape that prevails in their communities, the deeply concerned law professor needn't worry that an affirmative consent standard will “unfairly surprise” a man who mistakenly infers that a woman who says "no" (over & over) actually means "yes!"

Then there is the “reactionary conservative” (a role still played by George Will).  He worries now (just as he did in 1993) that requiring affirmative consent is part of a plot to “increas[e] supervision by the regulatory state that progressivism celebrates.”

Hey-- grumpy old reactionary dude: just calm down. I'm pretty sure that if the "affirmative consent" standard were really a communist trojan horse, the Bolsheviks would have climbed out of it by now!

There’s also the character who has assumed the familiar role of “postmodern” super-liberated “vamp” feminist.  She remains concerned that the “unrealistic” and “vague” affirmative consent standard is going to actually restrict her autonomy by deterring liability-wary men from having sex with her.

She should calm down too—unless, of course, her goal is to get people to pay attention to her for reprising this trite role. Her right to have as much sex as she likes will not be affected in the slightest!

Indeed, those now playing the role of vamp, grumpy conservative, deeply disturbed law professor, and egalitarian rape-law reformer also seem to be unaware of the evidence on who does feel most threatened by rape law reform and why.

Despite the rhetoric one sometimes hears, the issue of whether “no” really should mean no for purposes of the law does not pit men against women.

The dispute is one between men and women who share one set of cultural outlooks and men and women who share another.

Looking at individual-level predictors, the CCP study found that members of the public who were relatively hierarchical in their cultural outlooks were substantially more likely than individuals who were culturally egalitarian to acquit of rape a man who admittedly disregarded the complainant’s repeated statement of “no.”

The disparity between these groups was unaffected by the legal standard the subjects were instructed to apply.

It was magnified, however, by gender: women with hierarchical values were the most likely to see the complainant as having consented despite her verbal protests.
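In regression terms (a schematic only -- the variable names are hypothetical, and this is not the paper's actual model specification), that kind of finding shows up as a worldview-by-gender interaction in a model predicting verdicts:

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_verdict_model(df: pd.DataFrame):
    # guilty: 1/0 verdict; hierarchy: continuous worldview score running from
    # egalitarian to hierarchical; female: 1/0 indicator.
    # The hierarchy:female interaction captures the claim that the effect of a
    # hierarchical worldview on acquittal is magnified among women.
    return smf.logit("guilty ~ hierarchy * female", data=df).fit(disp=False)

# In the fitted model's summary, a negative coefficient on `hierarchy`
# (hierarchs less likely to convict) together with a negative
# `hierarchy:female` interaction would correspond to the pattern described above.
```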

The study hypothesized such a result based on other empirical work on the "token resistance" script. Based on survey and attitudinal data, this work suggested that individuals who subscribe to hierarchical norms attribute feigned resistance to a woman’s strategic intention to evade the negative reputational effects associated with defying injunctions against premarital or casual sex.

[Figure link: Wanna *see* what the raw data featured in the regression model look like? You always should!]

Although both male and female hierarchs resent this behavior, the latter are in fact the most aggrieved by it.  They understand the individual woman who resorts to “token resistance” as attempting to appropriate some portion of the status due  women who genuinely conform to hierarchical norms (Muehlenhard & Hollabaugh 1988; Muehlenhard & McCoy 1991; Wiederman 2005).

In the spirit of convergently validating these findings, the CCP mock juror experiment posited that women with hierarchical values—particularly older ones who already had acquired significant status—would be most predisposed to form perceptions of fact consistent with a legal judgment evincing social condemnation of women who resort to this form of strategic behavior.

That this proved to be so is perfectly consistent with the conventional wisdom among criminal defense attorneys, too.

Roy Black famously secured an acquittal for William Kennedy Smith through his adroit selection of a female juror who met this profile and who ended up playing a key role in steering the jury to a not guilty verdict in her role as jury foreperson.

Experienced defense lawyers know that when the college football player is on trial for date rape, the ideal juror isn’t Kobe Bryant; it’s Anita Bryant.

Women with these hierarchical outlooks have played a major role in political opposition to rape-law reform too.

These are Todd Akin’s constituents, “women who think that they have in some ways become less liberated in recent decades, not more; who think that easy abortion, easy birth control and a tawdry popular culture have degraded their stature, not elevated it.” Because of the egalitarian meanings rape reform conveys, they see it as part and parcel of an assault on the cultural norms that underwrite their status.

To tell you the truth, I’m not sure if the stock characters in the carnival debate triggered by the Obama Administration’s initiative are unaware of all this or in fact are simply happy to be a part of it.

I don’t see the Administration Initiative itself, however, as part of the cultural-politics date rape debate. It's the product of thinking that takes account of the experience of the last quarter century. 

Again, precisely because experience has shown that changing the wording of rules is not an effective means for reducing the incidence of acquaintance rape, many serious commentators have concluded that changing attitudes is (Baker 1999).

The Obama Administration's “It’s on Us” campaign bears the clear signature of this way of thinking. By exhorting male students, in particular, to accept responsibility to intervene when they sense conditions conducive to coercive sexual behavior, the campaign is intended to fill students’ social field of vision with vivid new prototypes to counter the ones that constrain the use of rules to regulate nonconsensual sex.

The voluntary assumption of the burden to protect others from harm can be expected to inspire a reciprocal willingness on the part of others to do the same.

Examples of such intervention, against the background of common understanding of why it's now taking place, will evince a shared understanding that a form of conduct that many likely regarded as "consistent with social convention" is in fact one that others now see as a source of harm.

And observing concerted action of this kind will recalibrate the calculations of those who might previously have believed that behavior manifestly out of keeping with common expectations would evade censure.

In a community with reformed norms of this sort, new rules might well accompany changes in behavior, not because they supply new instructions for decisionmakers but because they reflect internalized understandings of what forms of conduct manifestly violate the operative legal standard, whatever it happens to be.

Will this social-norm strategy work?

The Obama Administration Initiative will generate some useful evidence-- at least for those who actually pay attention to what happens when people try innovative measures to solve a difficult problem.

 

References

Baker, K. K. (1999). Sex, Rape, and Shame. B.U. L. Rev., 79, 663.

Clay-Warner, J., & Burt, C. H. (2005). Rape Reporting After Reforms: Have Times Really Changed? Violence Against Women, 11(2), 150-176. doi: 10.1177/1077801204271566

Estrich, S. (1987). Real rape. Cambridge, Mass.: Harvard University Press.

Kahan, D. M. (2010). Culture, Cognition, and Consent: Who Perceives What, and Why, in 'Acquaintance Rape' Cases. University of Pennsylvania Law Review, 158, 729-812. 

Muehlenhard, C. L., & Hollabaugh, L. C. (1988). Do Women Sometimes Say No When They Mean Yes? The Prevalence and Correlates of Women's Token Resistance to Sex. Journal of Personality & Social Psychology, 54(5), 872-879.

Muehlenhard, C. L., & McCoy, M. L. (1991). Double Standard/Double Bind. Psychology of Women Quarterly, 15(3), 447-461.

Schulhofer, S. J. (1998). Unwanted Sex: The Culture of Intimidation and the Failure of Law.

Smith, V. L. (1991). Prototypes in the Courtroom: Lay Representations of Legal Concepts. J. Personality & Social Psych., 61, 857-872. 

Wiederman, M. W. (2005). The Gendered Nature of Sexual Scripts. The Family Journal, 13(4), 496-502. doi: 10.1177/1066480705278729

Saturday, September 20, 2014

Weekend update: Who sees what & why in acquaintance rape cases?

I've been pondering the resurgence of attention to & controversy over the standards used, in the law generally and in particular institutions such as universities, to assess complaints of sexual assault.  I'll post some reflections next week, and also a guest blog from a scholar who has done a very interesting study on how cultural norms might be constraining the effectiveness of investigations of sexual assault complaints in the military. But by way of introduction, here is an excerpt from Culture, Cognition, and Consent: Who Sees What and Why in Acquaintance Rape Cases, 158 U. Penn. L. Rev. 729, a paper from way back in 2010 that reported the results of an empirical study of how cultural norms shape perceptions of disputed facts in date rape cases and of disputed empirical claims about the impact of competing legal standards for defining "consent."


Introduction

Does “no” always mean “no” to sex? More generally, what standards should the law use to evaluate whether a woman has genuinely consented to sexual intercourse or whether she could reasonably have been understood by a man to have done so? Or more basically still, how should the law define “rape”?  

These questions have been points of contention within and without the legal academy for over three decades. The dispute concerns not just the content of the law but also the nature of social norms and the interaction of law and norms. According to critics, the traditional and still dominant common law definition of rape—which requires proof of “force or threat of force” and which excuses a “reasonably mistaken” belief in consent—is founded on antiquated expectations of male sexual aggression and female submission.  Defenders of the common law reply that the traditional definition of rape sensibly accommodates contemporary practices and understandings—not only of men but of many women as well. The statement “no,” they argue, does not invariably mean “no” but rather sometimes means “yes” or at least “maybe.” Accordingly, making rape a strict-liability offense, or abolishing the need to show that the defendant used “force or threat of force,” would result in the conviction of nonculpable defendants, restrict the sexual autonomy of women as well as men, and likely provoke the refusal of prosecutors, judges, and juries to enforce the law.

This Article describes original, experimental research pertinent to the “no means . . . ?” debate. . . .

Conclusion

This Article has described a study aimed at investigating the contribution that cultural cognition makes to the controversy over how the law should respond to acquaintance rape. The results of the study suggest that common understandings of the nature of that dispute and what’s at stake in it are in need of substantial revision.

All of the major positions, the study found, misapprehend the source of the “no means ...?” debate. Disagreement over the significance the law should assign to the word “no” is not rooted in the self-serving perceptions of men conditioned to disregard women’s sexual autonomy. Nor is it a result of predictable misunderstanding incident to conventional indirection (or even misdirection) in the communication of consent to sex. Rather it is the product, primarily, of identity-protective cognition on the part of women (particularly older ones) who subscribe to a hierarchic cultural style. The status of these women is tied to their conformity to norms that forbid the indulgence of female sexual desire outside of roles supportive of, and subordinate to, appropriately credentialed men. From this perspective, token resistance is a strategy certain women who are insufficiently committed to these norms use to try to disguise their deviance. Because these women are understood to be misappropriating the status of women who are highly committed to hierarchical norms, the latter are highly motivated—more so even than hierarchical men—to see “no” as meaning “yes,” and to demand that the law respond in a way (acquittal in acquaintance-rape cases) that clearly communicates the morally deficient character of women who indulge inappropriate sexual desire.

This account also unsettles the major normative positions in the “no means . . . ?” debate. Because older, hierarchical women are the persons most likely to misattribute consent to a woman who says “no” and means it, abolishing the common law’s “force or threat of force” element and its “reasonable mistake” defense would not create tremendous jeopardy for convention-following men. Nevertheless, there is also little reason to believe that these reforms would enhance the sexual autonomy of women whose verbal resistance would otherwise be ignored. Cultural predispositions, the study found, exert such a powerful influence over perceptions of consent and other legally consequential facts that no change in the definition of rape is likely to affect results.

This conclusion, however, does not imply that the outcome of the “no means . . . ?” debate is of no moment. On the contrary, the role of cultural cognition helps to explain why the debate has persisted at such an intense level for so long. The powerful tendency of those on both sides to conform their perceptions of fact to their values suggests why thirty years worth of experience has not come close to forging consensus on what the consequences of reform truly are. Over the course of this period, the constancy of the cultural identities of those who plainly see one answer in the data and those who just as plainly see another has driven those on both sides to form their only shared perception: that the position the law takes will declare the winner in a battle for cultural predominance.

This particular battle, moreover, occupies only a single theater in a multifront war. Like the debate over rape-law reform, continuing disputes over the death penalty, gun control, and hate crimes all feature clashing empirical claims advanced by culturally polarized groups who see the law’s acceptance or rejection of their perceptions of how things work as a measure of where their group stands in society. Indeed, the same can be said about a wide range of environmental, public-health, economic, and national-security issues. It is impossible to formulate a satisfactory response to the debate over rape-law reform without engaging more generally the distinctive issues posed by illiberal status conflict over legally consequential facts.

Friday, September 19, 2014

The more you know, the more you ... Climate change vs. GM foods

A correspondent writes:

I enjoyed your recent talk at Cornell University.  I was especially interested by your data that showed the more you know about climate change, the less you believe in it (if you are on the political right).   Do you have any similar data that shows how information about GMOs shapes opinion based on political identifiers?

Would love to explore any studies you may have on GMOs

My response:

I wish!

On this topic, I've done nothing more than collect some data showing that there are no political divisions over -- or any other interesting sources of systematic variation in -- the attitudes of the general public toward GMOs.  E.g.,

Consider this (from a nationally representative sample of 1500+ in summer 2013):

There's lots of research, though, showing that the vast majority of the public doesn't know anything of consequence about GM foods, a finding that, given efforts to rile them up, suggests a pretty ingrained lack of interest:

American consumers’ knowledge and awareness of GM foods are low. More than half (54%) say they know very little or nothing at all about genetically modified foods, and one in four (25%) say they have never heard of them.

Before introducing the idea of GM foods, the survey participants were asked simply “What information would you like to see on food labels that is not already on there?” In response, most said that no additional information was needed on food labels. Only 7% of respondents raised GM food labeling on their own. . . .

Only about a quarter (26%) of Americans realize that current regulations do not require GM products to be labeled.

Hallman, W., Cuite, C. & Morin, X. Public Perceptions of Labeling Genetically Modified Foods. Rutgers School of Environ. Sci. Working Paper 2013-2001. 

You should also take a look at this guest CCP post by Jason Delbourne, whom you might also want to contact, which discusses the invalidity of drawing inferences about public opinion from opinion surveys under such circumstances.

One additional thing:

As you imply, our research group has found that science literacy in general & climate science literacy specifically both increase polarization; they don't have any meaningful general effect in inducing "less belief" in general -- their effect is big, but depends on "what sort of person" one is.  Relevant papers are Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735 & Climate Science Communication and the Measurement Problem, Advances Pol. Psych. (in press).  

On "science literacy" generally, consider:

On "climate science literacy," consider:

On GM foods, data I've collected shows that partisans become mildly less concerned w/ GM food risks as their science comprehension (or science literacy or however one wants to refer to it) increases:

 

Thursday, September 18, 2014

Will a "knowing disbeliever" be the next President (or at least Republican nominee)?

Subjects participate anonymously in CCP studies and supply responses in a form that prevents their being identified.

Still, I have to wonder whether Gov. Jindal might not have been one of the intriguing "knowing disbelievers" featured in The Measurement Problem study.

According to Howard Fineman,

America needs a leader to bridge the widening gulf between faith and science, and Louisiana Gov. Bobby Jindal, a devout Roman Catholic with Ivy League-level science training, thinks he can be that person. . . .

On Tuesday, Jindal showed his strategy for straddling the politics of the divide -- but also the political risks of doing so -- during an hourlong Q&A with reporters at a Christian Science Monitor Breakfast, a traditional early stop on the presidential campaign circuit.

Like the experienced tennis player he is, Jindal repeatedly batted away questions about whether he believes the theory of evolution explains the existence of complex life forms on Earth. Pressed for his personal view, Jindal -- who earned a specialized biology degree in an elite pre-med program at Brown University -- declined to give one. He said only that "as a parent I want my children taught the best science." He didn’t say what that "science" was.

He conceded that human activity has something to do with climate change, but declined to agree that there is now widespread scientific consensus on the severity and urgency of the problem.

Sounds a lot like a harassed "dualist" to me.

In truth, I don't think it is very convincing to use cultural cognition & like dynamics, which are geared to making sense of the distribution of perceptions of risk and like facts in the aggregate, to explain the beliefs of particular individuals, especially politicians, whose reasoning, and whose incentives for disclosing it, will be shaped by influences very different from those that affect ordinary members of the public.

But I think the spectacle of Jindal's predicament, including the fly-wing-plucking torment he & like-situated political figures on the right face in negotiating these issues in the media, definitely illustrates the discourse pathology diagnosed by The Measurement Problem: the relentless, pervasive pressure that forces reasoning individuals to make a choice between using their reason to know what's known by science and using it to enjoy their identities as members of particular cultural communities.

There is something deeply disturbing about the demand that people give an account of how they can be "knowing disbelievers," and something deeply flawed about public institutions, whether in education or in politics, that insist on interfering with this apparently widespread and unremarkable way for people to apportion what they know and believe across the different integrated identities that they occupy. 

Escaping from this sort of dysfunction is what good educators do in order to teach evolution to culturally diverse students.  It's also what regions like S.E. Florida are doing to promote constructive political engagement with climate change among culturally diverse citizens....

But in any case, the real issue with Jindal should be how he thinks we could possibly expect nasty foreign terrorists to be afraid of us if we had a leader who insists on being called "Bobby" because his childhood hero was the youngest brother in the Brady Bunch.

 

h/t to my friend David Burns.

Saturday
Sep132014

Weekend update: geoengineering and the expanding confabulation frontier of the "climate communication" debate

Despite its astonishingly long run in grounding just-so story telling about public risk perceptions and science communication (e.g., the Rasputin "bounded rationality" account of public apathy), the "climate debate" at some point has to get the benefit of an infusion of new material or else the players will ultimately die out from terminal boredom. 

That's the real potential, of course, of geoengineering.

Critics took the early lead in the "science communication confabulation game" by proclaiming with absurd overconfidence that the technology could never work: climate is a classic "chaotic system" and thus too unpredictable to admit of self-conscious management (where have I heard that before?), and even talking about it will lull the public into a narcotic state of complacency that will undermine the political will necessary to curb the selfish ethos of consumption that is the root of the problem.

But as anyone who has played the confabulation game knows, even players of modest imagination can effectively counter any move by concocting a story of equal (im)plausibility that supports the opposite conclusion.

So now we are being bombarded with a torrent of speculations on the positive effects geoengineering is likely to have on public engagement with climate science: that talk of it will scare people into taking mitigation seriously;  that foreclosing its development will increase demand for adaptation alternatives that would be even more productive of action-dissipating false confidence; that implementation of geoengineering will avert the economic deadweight losses associated with mitigation, generating a social surplus that can be invested in new, lower-carbon energy sources; etc., etc., etc.

At least some of the issues about how geoengineering research might affect public risk perceptions can be investigated empirically, of course.

In one study, CCP researchers found that exposing subjects (members of nationally representative US and English samples) to information about geoengineering offset motivated resistance among individuals culturally predisposed to reject evidence of climate change.  Accordingly, on the whole, individuals exposed to this information were more likely to credit evidence on the risks of human-caused climate change than ones exposed to information about mitigation strategies.

But just as the "knowledge deficit" theory doesn't explain the nature of public opinion on climate change, so "knowledge deficit" can't explain the nature of climate-change advocacy.  If furnishing advocates with facts about the dynamics of science communication were sufficient to wean them off their self-defeating styles of engaging the public, it would have worked by now.  Evidence that doesn't suit their predispositions about how to advocate is simply ignored, and evidence-free claims that do suit them are embraced with unreasoning enthusiasm.  

But it's important to realize that the spectacle of the "climate debate" is just a game.

Actually dealing with climate change isn't.  All over the place, real-world decisionmakers--from local govts to insurance companies to utilities to investors to educators formal & informal--are making decisions in anticipation of climate change impacts and how to minimize them.  

Many of these actors are using the best available evidence, not just on climate change but on climate-science communication.  And they are ignoring the game being played by the non-actors engaged in confabulatory story-telling.

If this were not the case--if the only game in town were the one being played by those for whom science communication is just expressive politics by other means-- the scientific study of science communication would indeed be pointless.

Friday
Sep122014

How should science museums communicate climate science? (lecture summary & slides)

I had the great privilege of participating in a conference, held at the amazing Museum of Science in Boston, on how museums can engage the public in climate science.  Below are my remarks--as best as I can remember them a week later.  Slides here.

You are experts on the design of science-museum exhibits.

I am not. Like Dietram, I study the science of science communication with empirical methods. 

I share his view that there are things he and I and others have learned that are of great importance for the design of science museum exhibits on climate change.

If you ask me, though, I won’t be able to tell you what to do based on our work—because I am not an expert at designing museum exhibits. 

But you are.

So if in fact I am right to surmise that insights gleaned from the scientific study of science communication are relevant to design of climate science exhibits, you should be able to tell me what the implications of this work are for your craft.

I will thus share with you everything I know about climate science communication.

I’ve reduced it all to one sentence (albeit one with a semi-colon):

What ordinary members of the public “believe” about climate change doesn’t reflect what they know; it expresses who they are.

The research on which this conclusion rests actually originates in the study of public opinion on evolution.

One thing such research shows is that there is in fact no correlation whatsoever between what people say they believe about evolution and what they know about it.  Those who say they “believe” in evolution are no more or less likely to understand the elements of the modern synthesis—random mutation, genetic variance, and natural selection—than those who say they “don’t.” 

Indeed, neither is likely to be able to give a sufficiently cogent account of these concepts to pass a high school biology test.

Another thing scholars have learned from studying public opinion on evolution is that what one “believes” about it has no relationship to how much one knows about science generally.

I’ll show you some evidence on that.  It consists in the results of a science literacy test that I administered to a large nationally representative sample.

Like a good knowledge assessment should, this science comprehension instrument consisted of a set of questions that varied in difficulty.

Some, like “Electrons are smaller than atoms—true or false,” were relatively easy: an individual whose score placed him or her at the mean comprehension level would have had about a 70% chance of getting that one right.

Other questions were harder: “Which gas makes up most of the Earth's atmosphere? Hydrogen, Nitrogen, Carbon Dioxide, Oxygen?” Someone of mean science comprehension would have had only about a 25% chance of getting that one right.

If one looks at the item-response profile for “Human beings, as we know them today, developed from earlier species of animals—true or false?,” an item from the NSF’s Science Indicators battery, we see that it’s difficult to characterize it as either hard or easy. Someone at the mean level of science comprehension has about a 55% chance of getting it correct, but the probability of a correct response isn’t much different for respondents whose science comprehension levels are significantly lower or significantly higher than average.
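
For readers who like to see such profiles in stylized form, here is a minimal sketch of the item-response curves being described, using a two-parameter logistic (IRT-style) response function. The difficulty and discrimination values are hypothetical, picked only to mimic the rough patterns reported above; they are not estimates from the actual data.

```python
import numpy as np

def p_correct(theta, difficulty, discrimination):
    """Two-parameter logistic item response function: probability of a
    correct answer given standardized science comprehension theta."""
    return 1.0 / (1.0 + np.exp(-discrimination * (theta - difficulty)))

theta = np.linspace(-3, 3, 7)  # standardized science-comprehension scale

# Hypothetical item parameters chosen to mimic the patterns described above
items = {
    "electrons smaller than atoms (easy)":   dict(difficulty=-1.0, discrimination=1.2),
    "which gas dominates atmosphere (hard)": dict(difficulty=1.3,  discrimination=1.2),
    "NSF evolution item (nearly flat)":      dict(difficulty=-2.0, discrimination=0.1),
}

for name, pars in items.items():
    print(name, np.round(p_correct(theta, **pars), 2))
# The first two curves slope upward as comprehension increases;
# the third is nearly flat across the whole comprehension scale.
```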

The reason is that the NSF Indicator Evolution item isn’t a valid measure of science comprehension for a general-population sample of test takers. 

Its item-response profile looks sort of like what one might expect of a valid measure when we examine the answers of those members of the population who are below average in religiosity (as measured by frequency of prayer, frequency of church attendance, and self-reported importance of religion): that is, the likelihood of getting it right slopes upward as science comprehension goes up.

But for respondents who are above average in religiosity, there is no relationship whatsoever between their response to the Evolution item and their science comprehension level.

In them, it simply isn’t measuring the same sort of capacity that the other items on the assessment are measuring. What it’s measuring, instead, is their religious self-identity, which would be denigrated by expressing a “belief in” evolution. 

Among the ways one can figure this out, researchers have learned, is to change the wording of the Evolution item: if one adds to it the simple introductory clause, “According to the theory of evolution,” then the probability of a correct response turns out to be roughly the same in relation to varying levels of science comprehension among both religious and nonreligious respondents.

The addition of those words frees a religious respondent from having to choose between expressing who she is and revealing what she knows. It turns out she knows just as much—or just as little, really, since, as I said, responses to this item, no matter how they are worded, give us zero information on what the respondent understands about the theory of evolution.

But good high school teachers, empirical research shows, can impart such an understanding just as readily in a student who says she “doesn’t believe in” evolution as in a student who says he “does.” But the student who said she didn’t “believe in” evolution at the outset will not say she does when the course is over.

Her skillful teacher taught her what science knows; the teacher didn’t make her into someone else.

Indeed, insisting that students profess their “belief in” evolution, researchers warn, is the one thing guaranteed to prevent the religiously inclined student from forming a genuine comprehension of how evolution actually works.  If one forces a reasoning individual to elect between knowing what is known by science and being who she is, she will choose the latter.

The teacher who genuinely wants to impart understanding, then, creates a learning environment that disentangles information from identity, so that no one is put in that position.

What researchers have learned from empirical study of the teaching of evolution can be extended to the communication of climate science.

To start, just as it would be a mistake (is a mistake made over and over by people who ought to know better) to treat the fraction of the population who says they “disbelieve in” evolution as a measure of science comprehension in our society, so it is a mistake to treat the fraction who say they “disbelieve” in human-caused climate change as such a measure.

My collaborators and I have examined how people’s beliefs about climate change relate to their science comprehension, too.  Actually, there is a connection: as culturally diverse individuals’ scientific knowledge and reasoning proficiency improve, they don’t converge in their views about the impact of human activity on global temperatures.  Instead they become even more culturally polarized. 

Because what one “believes” about climate change is now widely understood to signify one’s membership in and commitment to one or another cultural group, and because their standing in these groups is important to people, individuals use all manner of critical reasoning ability, experiments show, to form and persist in beliefs consistent with their allegiances.
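
A toy simulation can make the shape of that pattern concrete. Everything below is invented for illustration only; the point is just that when professed belief turns on an interaction between cultural identity and reasoning proficiency, the gap between groups widens, rather than narrows, as proficiency goes up.

```python
import numpy as np

def prob_belief(identity, percentile, interaction=2.0):
    """Toy logistic model: probability of professing 'belief in' human-caused
    climate change, given cultural identity (-1 or +1) and science-comprehension
    percentile (0 = lowest, 1 = highest). Coefficients are hypothetical."""
    return 1.0 / (1.0 + np.exp(-interaction * identity * percentile))

for pct in [0.0, 0.25, 0.5, 0.75, 1.0]:
    a = prob_belief(+1, pct)
    b = prob_belief(-1, pct)
    print(f"percentile={pct:.2f}  group A={a:.2f}  group B={b:.2f}  gap={a - b:.2f}")
# At the low end of the comprehension scale the two groups answer alike;
# at the high end they are far apart -- polarization grows with proficiency.
```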

But that doesn’t necessarily mean that individuals who belong to opposing cultural groups differ in their comprehension of climate science.  This can be shown by examining how individuals of diverse outlooks do on a valid climate science comprehension assessment.

To design such an instrument, I followed the lead of the researchers who have studied the relationship between “belief in” evolution and science comprehension. They’ve established that one can measure what culturally diverse people understand about evolution with items that unconfound or disentangle identity and knowledge.  Like the evolution items that enable respondents to show what they know without making affirmations that denigrate who they are, the items in my climate literacy assessment focus on respondents’ understanding of the prevailing view among climate scientists and not on respondents' acceptance or rejection of climate change “positions” known to be highly correlated with cultural and political outlooks.

Some of these turn out to be very easy. Encouragingly, even the test-taker of mean climate-science comprehension is highly likely (80%) to recognize that adding CO2 to the atmosphere increases the earth’s temperature.

Others, however, turn out to be surprisingly hard: there is only a 30% chance that someone of average climate-science comprehension will recognize that the CO2 emissions associated with burning fossil fuels have not been shown by scientists to reduce photosynthesis in plants.

Obviously, someone who gets that CO2 is a “greenhouse gas” but who believes that human emissions of it are toxic to the things that grow in greenhouses can’t be said to comprehend much about the mechanisms of climate science.

Nevertheless, a decent fraction of the test takers from a general population sample turned out to have a very accurate impression of climate scientists’ current best understandings of the mechanisms and consequences of human-caused global warming.  Not so surprisingly, these were the respondents who scored the highest on a general science comprehension assessment.

Moreover, there was no meaningful correlation between these individuals’ scores and their political outlooks.  “Conservative Republicans” who displayed a high level of general science comprehension and “liberal Democrats” who did the same both scored highly on the climate assessment test.

Nevertheless, those who displayed the highest scores on the test were not more likely to say they “believed in” human-caused global warming than those who scored the lowest. On the contrary, those who displayed the greatest comprehension of science’s best prevailing understandings of climate change were the most politically polarized on whether human activity is causing global temperatures to rise.

In other words, what ordinary members of the public “believe” about climate change, like what they “believe” about evolution, doesn’t reflect what they know; it expresses who they are.

The reason our society is politically divided on climate change, then, isn’t that citizens have different understandings of what climate scientists think.  It is that our political discourse, like the typical public opinion poll survey, frames the “climate change question” in a manner that forces them to choose between expressing who they are, culturally speaking, and revealing and acting on what they know about what is known.

This is changing, at least in some parts of the country.  Despite being as polarized as the rest of the country, for example, the residents of Southeast Florida have, through a four-county compact, converged on a comprehensive “Climate Action Plan,” consisting of 100 distinct adaptation and mitigation measures.

People in Florida know a lot about climate.  They’ve had to know a lot, and for a long time, in order to thrive in their environment.

Like the good high-school teachers who have figured out how to create a classroom environment in which curious and reflective students don’t have to choose between knowing what’s known about the natural history of humans and being who they are,  the local leaders who oversee the Southeast Florida Climate Compact have figured out how to create a political environment in which free and reasoning citizens aren’t forced to choose between using what they know and being who they are as members of culturally diverse communities.

Now what about museums?  How should they communicate climate science?

Well, I’ve told you all I know about climate science communication: that what ordinary members of the public “believe” about climate change doesn’t reflect what they know; it expresses who they are.

I’ve shown you, too, some models of how science-communication professionals in education and in politics have used evidence-based practice to disentangle facts from the antagonistic cultural meanings that inhibit free and reasoning citizens from converging on what is collectively known.

I think that’s what you have to do, too.

Using your professional expertise, you have already made museums a place where curious, reflective people of diverse outlooks go to satisfy their appetite to experience the delight and awe of apprehending what we have come to know by employing science’s signature methods of discovery.  

You now need to assure that the museum remains a place, despite the polluted state of our science communication environment generally, where those same people can go to satisfy their appetite to participate in what science has taught us and is continuing to teach us about the workings of our climate and the impact of human activity upon it.

You need, in short, to be sure that nothing prevents them from recognizing that the museum is a place where they don’t have to choose between enjoying that experience and being who they are.

How can you do that?

I don’t know.  Because I am not an expert in the design of science museum exhibits.

But you are—and I am confident that if you draw on your professional judgment and experience, enriched with empirical evidence aimed at testing and refining your own hypotheses, you will be able to tell me.  

 I have a strong hunch, too, that what you will have to say will be something other science-communication professionals will be able to use to promote public engagement with climate science in their domains, too.

 

Sunday
Sep072014

Weekend update: Another helping of evidence on what "believers" & "disbelievers" do & don't "know" about climate science

Data collected in ongoing work to probe, refine, extend, make sense of, demolish the "ordinary climate science intelligence" assessment featured in The Measurement Problem paper.

You tell me what it means ...

Saturday
Sep062014

Weekend update: Some research on climate literacy to check out

I have a bunch of critical administrative tasks that are due/overdue.  Fortunately, I discovered this special "climate literacy" issue of the Journal of Geoscience Education.  It'll make for a weekend's worth of great reading.

Thinking that others might be in need of the same benefit, I decided to post notice of the issue forthwith.

Reader reports on one or another of the articles are certainly welcome.

Friday
Sep052014

Teaching how to teach Bayes's Theorem (& covariance recognition) -- in less than 2 blog posts!

Adam Molnar, in front of a graphic heuristic he developed to teach (delighted) elementary school children how to solve the Riemann hypothesis

The 14.7 billion regular readers of this blog know that one of my surefire tricks for securing genuine edification for them is for me to hold myself forward as actually knowing something of importance in order to lure/provoke an actual expert into intervening to set the record straight.  It worked again!  After reading my post Conditional probability is hard -- but teaching it *shouldn't* be!, Adam Molnar, a statistician and former college stats instructor who is currently completing his doctoral studies in mathematics education at the University of Georgia, was moved to compose this great guide on teaching conditional probability & covariance detection. Score!

 

Conditional Probability: The Teaching Challenge 

Adam Molnar

A few days ago, Dan wrote a post presenting the results on how members of a 2000-person general population sample did on two problems, named BAYES and COVARY.

Dan posed the following questions: 

  1. "Which"--COVARY or BAYES--"is more difficult?"
  2. "Which is easier to teach someone to do correctly?" and
  3. "How can it be that only 3% of a sample as well educated and intelligent as the one [he] tested"--over half had a college or post graduate dagree--"can do a conditional probability problem as simple as" he understood BAYES to be. "Doesn't that mean," he asked "that too many math teachers are failing to use the empirical knowledge that has been developed by great education researchers & teachers?"

Check out this cool poster summary of Molnar study results.

As it turns out, these are questions that figure in my own research on effective math instruction. As part of my dissertation, I conducted interviews of 25 US high school math teachers. In the interviews, I included versions of both COVARY and BAYES. My version of COVARY described a different hypothetical experiment but used the same numbers as Dan's, while BAYES had slightly different numbers (I used the version from Bar-Hillel 1980).

So with this background, I'll offer my responses to Dan's questions.

Which is more difficult?

According to actual results, Bayes by far.

Dan reports that 55% of the people in his  sample got COVARY correct, compared to 3% for BAYES.

Other studies have shown a similar gap.

In one study that Dan and some collaborators conducted, 41% of a nationally diverse sample gave the correct response to a similarly constructed covariance problem. Eighty percent of the members of my math-teacher sample computed the correct response.

In contrast, on conditional-probability problems similar to BAYES, samples rarely reach double digits. I got 1 correct response out of 25--4%--in my math-teacher sample. Bar-Hillel (1980) asked Israeli students on the college entrance exam and had 6% correct. Only 8% of doctors got a similar problem right (Gigerenzer, 2002).

Teaching Covary

Solving COVARY, like many problems, involves three critical steps.

Step 1 is reading comprehension.

As worded, COVARY is not a long problem, but it includes a few moderately hard words like "experiment" and "effectiveness." These phrases may not challenge the "14.6 billion" readers of this blog, but they can challenge English language learners or students with limited reading skills. Even for people who know all the words, one might misread the problem.

Step 2 is recognition. In this problem, a solver needs to recognize that "more likely to survive" calls for comparing probabilities or ratios, and that this involves computation, not just comparing counts. Comparing counts across a row (223 against 75) or a column (223 against 107) will lead to the wrong answer.

Taking this step involves recognizing a term, "more likely to survive". Learning the term requires work, but the US education system includes this type of problem. In the Common Core adopted by most states, standard 8.SP.A.4 states "Construct and interpret a two-way table summarizing data on two categorical variables collected from the same subjects. Use relative frequencies calculated for rows or columns to describe possible association between the two variables." High school standard HSS.CP.A.4 repeats the tables and adds independence. Although students may not study under the Common Core, and adults had older curricula, almost everyone has seen 2 by 2 tables. Therefore, teaching the term "more likely to survive" is not a big step.

Step 3 is computation.

Dan suggested likelihood ratios, but almost all teachers will work with probabilities (relative frequencies) as mentioned in the standard. Problem solvers need to create two numbers and compare them. The basic "classical" way to create a probability is successes over total. The classical definition works as long as solvers remember to use row totals (298 and 128), not the grand total of 426. People will make errors, but as mentioned previously, US people have some familiarity with 2 by 2 tables. Instruction is required, but the steps do not include any brand new techniques.
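
Here is a minimal sketch of that computation, using the counts quoted in this post (223 of 298 in one condition, 107 of 128 in the other; the second cell of the smaller row, 21, is inferred from the stated row total). The condition labels are placeholders, since the original table isn't reproduced here.

```python
# Counts quoted in the post; row totals 298 and 128, grand total 426.
# Which row is "treatment" and which is "control" is not restated here,
# so the labels below are placeholders.
condition_a = {"yes": 223, "no": 75}   # row total 298
condition_b = {"yes": 107, "no": 21}   # row total 128 (21 inferred: 128 - 107)

def relative_frequency(row):
    """Classical probability: successes over the ROW total, not the grand total."""
    return row["yes"] / (row["yes"] + row["no"])

p_a = relative_frequency(condition_a)  # 223/298, about 0.75
p_b = relative_frequency(condition_b)  # 107/128, about 0.84
print(f"P(outcome | A) = {p_a:.3f}, P(outcome | B) = {p_b:.3f}")
print("A more likely" if p_a > p_b else "B more likely")
# Comparing the raw counts (223 vs. 107) or dividing by the grand total of 426
# reproduces exactly the Step 2 and Step 3 errors described above.
```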

Of the five errors in my sample, one came from misreading (Step 1), one came from recognition (Step 2), comparing 223 against 107, and three came from computation (Step 3), using the grand total of 426 as the denominator instead of 298 and 128.

Teaching Bayes

For BAYES, a conditional-probability problem, reading comprehension (Step 1) is more difficult than for COVARY. COVARY provides a table, while BAYES has only text. Errors will occur when transferring numbers from the sentences in the problem. Even very smart people make occasional transfer errors.

The best-performing teacher in my interviews made only one mistake--a transfer, choosing the wrong number from earlier in a problem despite verbally telling me the correct process.

As an educator, I would like to try a version of COVARY where the numbers appeared in text without the table, and see how often people correctly built tables or other problem solving structures.

Step 2, recognition, is easier. The problem explicitly asks for "chance (or likelihood)" which means probability to most people. Additionally, all numbers in the problem are expressed as percentages. These suggestions lead most people to offer some percentage or decimal number between 0 and 1. All the teachers in my study gave a number in that range.

Step 3, computation, is much, much harder.

As demonstrated in the recent sample and other research work including Bar-Hillel (1980), many people will just select a number from the problem, either the rate of correct identification or the base rate. Both values are between 0 and 1, inside the range of valid probability values, thus not triggering definitional discomfort. Neither value is correct, of course, but I am not surprised by these results. A correct solution path generally requires training.

Interestingly, the set of possible solution paths is much larger in Bayes. Covary had probabilities and ratios; Bayes has at least eight approaches. Some options might be familiar to US adults, but none are computationally well known. In the list below, I describe each technique, comment on level of familiarity, and mention computational difficulty.

  • Venn Diagrams: A majority of adults could recognize a Venn diagram, because they are useful in logic and set theory. Mathematicians like them. Although Venn diagrams are not specified in the Common Core, they have appeared in many past math classes and I suspect they will remain in schools. I do not believe a majority of adults could correctly compute probabilities with a Venn diagram, however. Doing so requires knowing conditional probability and multiplicative independence rules, plus properly accounting for the overlapping And event. Knowing how to solve the Bayes problem with a Venn diagram almost always means one knows enough to use at least one other technique on this list, such as probability tables or Bayes Theorem. Those techniques are more direct and often simpler.
  • Bayes's Theorem: (which has several different names, including formula, law, and rule; Bayes might end with 's or ' or no apostrophe at all). If you took college probability or a mathy statistics course, you likely saw this approach. When I asked statisticians in the UGA statistics education research group to work this problem, they generally used Bayes' rule. This is not a good teaching technique, however, because the computation is challenging. It requires solid knowledge of conditional probability and remembering a moderately difficult formula. Other approaches are less demanding. 
  • Bayesian updating: A more descriptive name for the approach Dan wrote about, where posterior odds = prior odds x likelihood ratio. This is even rarer than the formula version of Bayes' rule; I first saw it in my master's program. Updating is easier computationally than the formula, but I would not expect untrained people to discover it independently. (A numerical sketch of this approach and of the frequency-based table appears after this list.) 
  • Probability-based tables: Many teachers attempted this method, with some reaching a usable representation (but none correctly selecting numbers from the table.) This method requires setting up table columns and rows, and then using independence to multiply probabilities and fill entries. After that, the solver needs to combine values from two boxes (True Blue and False Blue) to find the total chance that Wally perceived a blue bus, and then find the true blue probability by dividing True Blue / (True Blue + False Blue). Computation requires table manipulation, understanding independence, and knowing which numbers to divide. Choosing the correct boxes stumped the teachers most often. They tended to just answer the value of True Blue, 9% in this version.

    This approach was popular because it involves tables and probabilities, ideas teachers and students have seen. Independence is also included in the Common Core. Thus, it's not too far a stretch. The problem is difficulty, between building the table using multiplicative probability and then combining boxes in a specific way. Other approaches are easier. 
  • Probability-based trees: The excellent British mathematics teaching site NRICH has an introduction. AP Statistics students frequently learn tree diagrams. Some teachers used them, including the one teacher who got the explanation completely correct. Several other teachers made the same mistake as with probability tables; they built the representation, but only gave the True Blue probability and neglected the False Blue possibility. 

    Although trees are mentioned briefly in the Common Core as one part of one Grade 7 standard, I don't expect trees to become a popular solution. Because they were uncommon in the past, few (but not zero) non-teacher adults would attempt this approach. 
  • Grid representations: Dan cited a 2011 paper by Spiegelhalter, Pearson, and Short, but the idea is older. A reference at Illuminations, the NCTM's US website for math teaching resources, included a 1994 citation. The idea is to physically color boxes representing possibilities, which allows one to find the answer by counting boxes. At Georgia, we've successfully taught grid shading in our class for prospective math teachers. It works well and it's not very difficult. One study showed that 75% of pictorial users found the correct response (Cosmides & Tooby, 1996). Unfortunately, it's never been part of any standards I know. It also requires numbers expressible out of 100, which works in this problem but not in all cases. 
  • Frequency-based tables: In the 1990s, psychological researchers started publishing about a major realization: Frequency counts are more understandable than probabilities. Classic papers include Gigerenzer (1991) and Cosmides & Tooby (1996). The basic idea is to convert probabilities to frequencies by starting with a large grand total, like 1000 or 100,000, and then multiplying probabilities to find counts. The larger starting point makes it likely that all computations result in whole numbers, avoiding one problem with the grid representation. 

    After scaling, the solver can form a table. In this problem, getting from the table to the correct answer still requires work, as one must know to divide True Blue / (True Blue + False Blue) as in the probability-based table. I know one college textbook with a "hypothetical hundred thousand table", Mind on Statistics by Utts and Heckard, which has included the idea since at least 2003. There are many college statistics textbooks, though, and frequency-based tables do not appear in US school standards. They are not commonly known. 
  • Frequency-based trees: Because tables don't make it obvious which boxes to select, a tree-based approach can combine the natural intuition of counts and the visual representation of trees. This increases teaching time because students are less familiar with trees. In exchange, the problem becomes easier to solve. This might be the most effective approach to teach, but it's very new. Great Britain has included frequency trees and tables in the 2015 version of GCSE probability standards for all Year 10 and 11 students, but they have not appeared in schools on this side of the pond.
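
To make a couple of these approaches concrete, here is a minimal sketch in code. The base rate and witness accuracy are hypothetical stand-ins (the exact figures in Dan's and Bar-Hillel's versions aren't restated in this post); they are chosen so that the "True Blue" cell comes out at 9% of all sightings, the value mentioned above, and the witness is assumed equally accurate for blue and green buses.

```python
# Hypothetical numbers only, chosen so that P(blue bus AND reported blue) = 9%.
p_blue = 0.10         # prior probability the bus was blue
p_correct_id = 0.90   # chance the witness identifies a bus's color correctly
                      # (assumed the same for blue and green buses)

# Bayesian updating: posterior odds = prior odds x likelihood ratio
prior_odds = p_blue / (1 - p_blue)                    # 1/9
likelihood_ratio = p_correct_id / (1 - p_correct_id)  # 9
posterior_odds = prior_odds * likelihood_ratio        # 1
p_blue_given_report = posterior_odds / (1 + posterior_odds)

# Frequency-based table, built from a hypothetical 1,000 sightings
n = 1000
true_blue = n * p_blue * p_correct_id               # blue bus reported blue:  90
false_blue = n * (1 - p_blue) * (1 - p_correct_id)  # green bus reported blue: 90
p_from_table = true_blue / (true_blue + false_blue)

print(round(p_blue_given_report, 3), round(p_from_table, 3))  # both 0.5
# Answering with True Blue alone (9% of all sightings) -- the most common error
# among the interviewed teachers -- ignores the False Blue cell entirely.
```

On these made-up numbers the two routes agree, as they must; the pedagogical difference lies only in how much machinery each demands of the solver.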

The Teaching Challenge

Neither COVARY nor BAYES is easy, because both require expertise beyond what was previously taught in K-12 schools.

In the current US system, looking at Common Core and other standards, COVARY will be easier to teach. COVARY requires less additional information because it can extend easily from two ideas already taught, count tables and classical relative frequency probability. It fits very well inside the Common Core standards on conditional probability.

BAYES has lots of possible approaches. Some, like grid representations and frequency trees, are less challenging than COVARY. But they are relatively new in academic terms. Many were developed outside the US and none extend easily from current US standards. I'm not even sure the sort of conditional-probability problem reflected in BAYES should be considered under Common Core (unlike the new British GCSE standards), even though I believe decision making under conditional uncertainty is a vital quantitative literacy topic. Most teachers and I believe it falls under AP Statistics.

Furthermore, educational changes take a lot of time. Hypothetically (lawyers like hypotheticals, right?), let's say that today we implement a national requirement for conditional probability. States would have to add it to their standards documents. Testing companies would need to write questions. Textbook publishers would have to create new materials. Schools would have to procure the new materials. Math teachers would need training; they're smart enough to handle the problems but don't yet have the experience.

The UK published new guidelines in November 2013 for teaching in September 2015 and exams in June 2017. In the US? 2020 would be a reasonable target.

Right now, Bayes-style conditional probability is unfamiliar to almost all adults.

In Dan's sample, over half had a college degree. That's nice, but that doesn't imply much about conditional probability.

The CBMS reports on college mathematics and statistics. A majority of college grads never take statistics. In 2010, there were about 500,000 enrollments in college statistics classes, plus around 100,000 AP Statistics test takers, but there were about 15,000,000 college students. (For comparison, there were 3,900,000 mathematics course enrollments.) Of the minority that take any statistics, most people take only one semester. Conditional probability is not a substantial part of most introductory courses; perhaps there would be 30 minutes on Bayes' rule.

Putting this together, less than 10% of 2010 college students covered conditional probability. Past numbers would not be higher, since probability and statistics have recently gained in popularity.
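
Spelling out the back-of-the-envelope arithmetic behind that figure (treating enrollments as a rough proxy for distinct students, which if anything overstates coverage):

```python
# Figures quoted above for 2010
stats_enrollments = 500_000     # college statistics course enrollments
ap_stat_takers = 100_000        # AP Statistics test takers
college_students = 15_000_000   # total college students

share = (stats_enrollments + ap_stat_takers) / college_students
print(f"{share:.1%}")  # 4.0% -- comfortably under the "less than 10%" bound
```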

I think it's fair to say that less than 5% of the US adult population has ever covered the topic--making that 3% correct response rate sound logical.

In an earlier blog post, Dan wrote "If you don't get Bayes, it's not your fault. It's the fault of whoever was using it to communicate an idea to you." Yes, there are better and worse ways to solve Bayes-style problems. Teachers can and should use more effective approaches. That's what I research and try to help implement. But for the US adult population, the problem is not poor communication; rather, it's never been communicated at all.

References

Bar-Hillel, M. (1980). The base-rate fallacy in probability judgments. Acta Psychologica, 44, 211-233.

Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all?: Rethinking some conclusions of the literature on judgment under uncertainty. Cognition, 58, 1-73.

Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond "heuristics and biases". In W. Stroebe & M. Hewstone (Eds.), European Review of Social Psychology (Vol. 2, pp. 83-115). Chichester: Wiley.

Gigerenzer, G. (2002). Calculated risks: How to know when numbers deceive you. New York: Simon & Schuster.

Spiegelhalter, D., Pearson, M., and Short, I. (2011). Visualizing Uncertainty About the Future. Science 333, 1393-1400.

Utts, J., & Heckard, R. (2012). Mind on Statistics, 4th edition. Independence, KY: Cengage Learning.

 

Thursday
Sep042014

Political psychology according to Krugman: A degenerative research programme if ever I saw one ... 

As I said, I no longer watch the show "Paul Krugman's Magic Motivated Reasoning Mirror" but do pay attention when a reflective person who still does tells me that I've missed something important.  Stats legend Andrew Gelman is definitely in that category.  He thinks the latest episode of KMMRM can't readily be "dismissed."  

So I've taken a close look.  And I just disagree.

My reasons can be efficiently conveyed by this simple reconstruction of the tortured path of illogic down which the show has led its viewers:

Krugman:  A ha! Social scientists have just discovered something I knew all along: on empirical policy issues, people fit the evidence to their political predispositions.  It’s blindingly obvious that this is why conservatives disagree with me!  And by the way, I’ve made another important related discovery about mass public opinion: the tribalist disposition of conservatives explains why they are less likely to believe in evolution.

Klein: Actually, empirical evidence shows that the tendency to fit the evidence to one's political predispositions is ubiquitous—symmetric, even: people with left-leaning proclivities do it just as readily as people with right-leaning ones.  Indeed, the more proficient people are at the sort of reasoning required to make sense of empirical evidence, the more pronounced this awful tendency is.  Therefore, people who agree with you are as likely to be displaying this pernicious tendency--motivated reasoning--as those who disagree.  This is very dispiriting, I have to say.

Empiricist: He’s right.  And by the way, your claims about political outlooks and “belief in” evolution are also inconsistent with actual data.

Krugman: Well, that’s all very interesting, but your empirical evidence doesn’t ring true to my lived experience; therefore it is not true. Republicans are obviously more spectacularly wrong.  Just look around you, for crying out loud.

Klein:  Hey, I see it, too, now that you point it out! Republicans are more spectacularly wrong than Democrats!  We’ve been told by empiricists that individual Republicans and individual Democrats reason in the same way.  Therefore, it must be that the collective entity “Republican Party” is more prone to defective reasoning than the collective entity “Democratic Party.”

Methodological individualist: Look: If you believe Republicans/conservatives don’t reason as well as Democrat/liberals, then there’s only one way to test that claim: to examine how the individuals who say “I’m a ‘liberal’ ” and the ones who say “I’m a ‘conservative’ ” actually reason.  If the evidence says “the same,” then invoking collective entities who exist independently of the individuals they comprise and who have their own “reasoning capacity” is to jump out of the empirical frying pan and into the pseudoscience fire.  I’m not going with you.

Krugman: What I said—and have clearly been saying all along—is that the incidence of delusional reasoning is higher among conservative elites than among liberal elites. I never said anything about mass political opinion!  Your misunderstanding of what I clearly said multiple times is proof of what I said at the outset: the reason non-liberals (conservatives, centrists, et al.) all disagree with me is that they are suffering from motivated reasoning.

Bored observer: What is the point of talking with you?  If you make a claim that is shown to be empirically false, you just advance a new claim for which you have no evidence.  It’s obvious that no matter what the evidence says, you will continue to say that the reason anyone disagrees with you is that they are stupid and biased.  I’m turning the channel.

Gelman: Hold on!  He’s now advanced an empirical claim for which "data are not directly available."  Because it therefore cannot be evaluated, his claim can't simply be dismissed!

Two people Gelman knows know their shit:  Yes it can.  When people react to contrary empirical evidence by resorting to the metaphysics of supra-individual entities or by invoking new, auxiliary hypotheses that themselves defy empirical testing, they are doing pseudoscience, not genuine empiricism.  The path they are on is a dead end.


Monday
Sep012014

"Krugman's 'magic motivated reasoning mirror' show"-- I've stopped watching but not trying to learn from reflective people who still are 

So here is an interesting thing to discuss. 

A commenter on the What's to explain? Kulkarni on "knowing disbelief" post made an interesting connection between “knowing disbelief” (KD) and the “asymmetry thesis.”  The occasion for his comment, it’s apparent, was not or not only the Kulkarni post but rather something he saw on the show “Paul Krugman & the Magic Motivated Reasoning Mirror,” in every episode of which Krugman looks in the mirror & sees the images of those who disagree with him & never himself. 

There are lots of episodes—almost as many as in Breaking Bad or 24.  Consider: 

But the "Krugman's magic motivated reasoning mirror show" is way too boring, too monotonous, too predictable.  

I’ve stopped watching – hence didn’t even bother to say anything about the most recent episode or the one before that.

But the commentator had a really interesting point that wasn’t monotonous and that far from being predictable is bound up with things that I’m feeling quite uncertain about recently. So I’ve “promoted” his comment & my response to "full post status" -- & invite others to weigh in.

Mitch:

I think that this discussion skips over what is really interesting here - and which actually can be connected to what Krugman was talking about when he was so derided on this blog.

Let's consider the yellow population on the right-hand side of this chart. As presented here, these are people who are of well-above-average scientific understanding. They are therefore presumably aware of the truly vast array of evidence that supports the proposition that the earth is not 10,000 years old and that today's living creatures are descended from ancestors that were of different species.

Despite this, many in this group answer false to the first question posed (and presumably many also to the question, "True or false, the age of the earth is about 4.5 billion years").

Now this raises the question "Is there any question on which the blue population displays a like disregard of the scientific evidence of which they're aware?"

This question cannot be answered by the sorts of experiments I've seen on this blog. Having read at this point a good number of the posts, what I have seen demonstrated here is that people's minds do work in the same way - and that nobody likes to hear evidence that contradicts their beliefs. However, the question being asked is different - how is this way-of-the-mind playing out in practice by yellow and blue groups on the right-hand side of the chart?

My belief (and evidently Krugman's as well) is that *at the present moment in the US* there in fact is no symmetry. These two groups believe quite differently - one generally aligning with the scientific consensus and the other not.

I think this is a pretty reasonable question, not worthy of derision.


Me: 

@Mitch:

A. I agree the question -- of asymmetry -- is not worthy of derision. Derision, though, is worthy of derision, particularly when it assumes an answer to the question & evinces a stubborn refusal to engage with contrary evidence. There are many who subscribe to the "asymmetry thesis" who are serious and open-minded people just trying to figure things out. Krugman isn't in that category. He is an illiberal zealot & an embarrassment to critically reasoning people of all cultural & political outlooks.

B. The point you raise is for sure getting at what is "interesting here" more than most of the other comments on this & related posts. Thanks for pointing out the KD/asymmetry connection.... (But note that it would actually be a mistake to conflate NSF "human evolution disbelievers" w/ "young earth creationists"--the latter make up only a subset of the former.)

C. I admit (as I have plainly stated) that I find the relationship between KD & cultural cognition & like mechanisms unclear & even disorienting & unsettling. But I think conflating the whole lot would be a huge error. There are many forms of cultural cognition that don't reflect KD. It's also not clear -- to me at least -- that KD necessarily aggravates the pernicious aspects of cultural cognition. As in the case of the Pakistani Dr -- & the SE Floridians who don't believe in climate change but who use evidence of it for collective decisionmaking -- my hunch is that it is a resource that can be used to counteract illiberal forms of status competition that prevent diverse democratic citizens from converging on valid decision-relevant science.  Rather than extracting empty, ritualistic statements of "belief in" one or the other side's tribal symbol, the point of collective exchange should be to enable acquisition and use of genuine knowledge. It works in the classroom for teaching evolution, so why not use the same sort of approach in the town hall meeting (start there; work your way up) to get something done on climate? Take a look at The Measurement Problem & you'll see where I'm coming from. And if you see where it would make more sense for me to go instead, I'm all ears.

D. But while waiting for anything more you might say on this, let me try to put KD aside -- as I have indicated, I am using the "compartmentalization" strategy for now --& come back to Kruggie's "asymmetry thesis challenge" (made in the last episode of "PK's Magic Motivated Reasoning Mirror" that I bothered watching).

Krugman asks "what is the liberal equivalent of climate change for conservatives?"

Well, what does he mean exactly? If he means an example of an issue in which critical engagement with evidence on a consequential issue is being distorted by cultural cognition, the answer is ... climate change.

Just as there's abundant evidence that most of those who say they "believe in" evolution don't understand natural selection, random mutation & genetic variance (the elements of the modern synthesis in evolutionary science), the vast majority of those who say they "believe in" global warming don't genuinely get the most basic mechanisms of climate change (same for "nonbelievers" in both cases-- correlations between believing & understanding the evidence are zero).

It's actually okay to accept what one can't understand: in order to live well-- or just live--people need to accept as known by science much  more than they have time or capacity to comprehend! To make use of science, people use a rational faculty exquisitely calibrated to discerning who actually knows what science knows & who is full of shit.

But here's what's not okay: there's abundant evidence that those on both sides of the climate debate-- "believer" as well as "nonbeliever"--are now using their "what does science know" recognition faculty in a biased way that fits all evidence to their cultural predispositions.

That means that we have a real problem in our science communication environment--one that everyone regardless of cultural outlook has a stake in fixing.

So maybe you can see why I think it is very noxious -- a sign of a lack of civic virtue as well as of critical reasoning ability -- to keep insisting that a conflict like climate change is a consequence of one side being "stupid" or "unreasoning" when it can be shown that both sides are processing information in the same way?  And why doing that is stupid & illiberal, and actually makes things worse by reinforcing the signals of cultural conflict that are themselves poisoning our "who knows what science knows" reasoning faculty.

Do you think I'm missing something here?

 

Thursday
Aug282014

"Is politically motivated reasoning rational?" A fragment ...

From something in the works ...

My goal in this paper is to survey existing evidence on the mechanisms of culturally motivated reasoning (CMR) and assess what that evidence implies about the relationship between CMR and rational decisionmaking.

CMR refers to the tendency of individuals to selectively credit diverse forms of information—from logical arguments to empirical data to credibility assessments to their own sensory impressions—in patterns that reflect their cultural predispositions. CMR is conventionally attributed to  over-reliance on heuristic or System 1 information processing. Like other manifestations of bounded rationality, CMR is understood to interfere with individuals’ capacity to identify and pursue courses of action suited to attainment of their personal well-being (e.g., Lodge & Taber 2013; Weber & Stern 2011; Lilienfeld, Ammirati, Landfield 2009; Sunstein 2007).

I will challenge this picture of CMR.  Numerous studies using a variety of observational and experimental designs suggest that the influence of CMR is not in fact limited to heuristic information processing.  On the contrary, these studies find that in disputes displaying pervasive CMR—for example, over the reality and consequences of global warming—individuals opportunistically employ conscious, effortful forms of information processing, reliably deciphering complicated information supportive of their predispositions and explaining away the rest.  As a result, individuals of the highest levels of science comprehension, numeracy, cognitive reflection, and other capacities identified with rational decisionmaking exhibit the greatest degree of cultural polarization on contested empirical issues (Kahan in press; Kahan, Peters, Dawson & Slovic 2013; Kahan 2013; Kahan, Peters, et al. 2012). 

Because CMR is in fact accentuated by use of the System 2 reasoning proficiencies most closely identified with rational decisionmaking, it is not plausible, as a descriptive matter, to view CMR as a product of bounded rationality.

For the same reason, it is unsatisfying to treat decisionmaking characterized by CMR as unsuited to attainment of individual ends. The compatibility of any form of information processing with instrumental rationality cannot be assessed without a defensible account of the goals an actor is seeking to achieve by engaging with information in a particular setting. To be sure, CMR is not a form of information processing conducive to maximizing accurate beliefs.  But the relationship between CMR and the forms of cognition most reliably calibrated to using information to rationally pursue one’s ends furnishes strong reason to doubt that maximizing accuracy of belief is the goal individuals should be understood to be pursuing in settings that bear the signature of pervasive CMR.

One way to make sense of the nexus between CMR and system 2 information processing, I will argue, is to see CMR as a form of reasoning suited to promoting the stake individuals have in protecting their connection to, and status within, important affinity groups.  Enjoyment of the sense of partisan identification that belonging to such groups supplies can be viewed as an end to which individuals attach value for its own sake.  But a person’s membership and good standing in such a group also confers numerous other valued benefits, including access to materially rewarding forms of social exchange (Akerlof & Kranton 2000). Thus, under conditions in which positions on societal risks and other disputed facts become commonly identified with membership in and loyalty to such groups, it will promote individuals’ ends to credibly convey (by accurately conveying (Frank 1988)) to others that they hold the beliefs associated with their identity-defining affinity groups. CMR is a form of information processing suited to attaining that purpose.

Individuals acquire this benefit at the expense of less accurate perceptions of societal risk. But holding less accurate beliefs on these issues does not diminish any individual's personal well-being. Nothing any ordinary member of the public does--as consumer, as voter, as public discussant--can have any material impact on climate change or a like societal risk.  Accordingly, no mistake he makes based on inaccurate perceptions of the facts can affect the level of risk faced by himself or anyone else he cares about. If there is a conflict between using his reasoning capacity to form truth-convergent beliefs and using it to form identity-convergent ones, it is perfectly rational for him to use it for the latter.
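
A toy expected-value comparison, with invented numbers, illustrates the argument of the last two paragraphs: because any one person's belief has effectively no influence on the collective outcome, even a small identity payoff dominates the accuracy payoff.

```python
# Invented illustrative payoffs -- not estimates from any study.
p_pivotal = 1e-8         # chance one person's belief/vote changes the policy outcome
value_of_outcome = 1e6   # personal value (arbitrary units) of the better collective outcome
identity_payoff = 10.0   # everyday value of holding the belief one's group holds

expected_gain_from_accuracy = p_pivotal * value_of_outcome  # 0.01
expected_gain_from_identity = identity_payoff               # 10.0

print(expected_gain_from_accuracy < expected_gain_from_identity)  # True
# On these made-up numbers, forming the identity-congruent belief is the
# individually rational choice, even though it is collectively costly.
```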

This account of the individual rationality of CMR, however, does not imply that this form of reasoning is socially desirable from an economic standpoint. It is reasonable to assume that accurate popular perceptions of risk and related facts will often display the features of a meta-collective good: particularly in a democratic form of government, reliable governmental action to secure myriad particular collective goods will depend on popular recognition of the best available evidence on the shared dangers and opportunities that a society confronts (Hardin 2009).  On an issue characterized by pervasive CMR, however, the members of diverse cultural groups will not converge on the best available evidence or not do as quickly as they should to secure their common interests (Kahan 2013).  Still, this threat to their well-being will not in itself alter the array of incentives that make it rational for individuals to cultivate and display a reasoning style that features CMR (Hillman 2010). Only some exogenous change in the association between positions on disputed facts and membership in identity-defining affinity groups can do that.

This conceptual framing of the tragedy of the science communications commons, the paper will suggest, is the principal contribution that economics can make to ongoing research on CMR.

 Refs

Akerlof, G. A., & Kranton, R. E. (2000). Economics and identity. The Quarterly Journal of Economics, 115(3), 715-753.

Frank, R. H. (1988). Passions within reason: The strategic role of the emotions. New York: Norton.

Hardin, R. (2009). How do you know? The economics of ordinary knowledge. Princeton: Princeton University Press.

Hillman, A. L. (2010). Expressive behavior in economics and politics. European Journal of Political Economy, 26(4), 403-418.

Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735.

Kahan, D.M.  (2013). Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424.

Kahan, D.M. (in press). Climate science communication and the Measurement Problem. Advances in Pol. Psych. 

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P.  (2013). Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116.

Lilienfeld, S. O., Ammirati, R., & Landfield, K. (2009). Giving Debiasing Away: Can Psychological Research on Correcting Cognitive Errors Promote Human Welfare? Perspectives on Psychological Science, 4(4), 390-398. doi: 10.1111/j.1745-6924.2009.01144.x

Lodge, M., & Taber, C. S. (2013). The rationalizing voter. Cambridge; New York: Cambridge University Press.

Sunstein, C. R. (2007). On the Divergent American Reactions to Terrorism and Climate Change. Columbia Law Review, 107, 503-557.

Weber, E. U., & Stern, P. C. (2011). Public Understanding of Climate Change in the United States. Am. Psychologist, 66, 315-328. doi: 10.1037/a0023253

Wednesday
Aug272014

What's to explain? Kulkarni on "knowing disbelief"

As always, the investment in asking others for help in dispelling my confusion is paying off.

As the 15.5 billion regular readers of this blog (we’re up 1.5 billion with migration of subscribers from the recent cessation of posts in Russell Johnson’s theprofessor.com) know, I’ve been trying to get a handle on a phenomenon that I’m calling—for now & for lack of a better term—“knowing disbelief” (KD).

I've gotten various helpful tips in comments to the original blog post & on a follow up, which itself featured some reflections by Steve Lewandowsky.

This time the help comes from Prajwal Kulkarni, a physicist who authors the reflective and provocative blog, “Do I need evolution?” 

I’ll tell you what he said, and what I have to say about what he said.  But first a bit of background – which, if you have seen all the relevant previous episodes, you can efficiently skip by scrolling down to the bolded red text.

1. KD consists in (a) comprehension of and assent to a set of propositions that (b) appear to entail a proposition one professes not to “believe.”

“What is going on in their heads?” (WIGOITH) is the shorthand I’m using to refer to my interest in forming a working understanding (a cogent set of plausible mechanisms that are either supported by existing evidence or admit of empirical testing) for KD.

In that spirit, I formulated a provisional taxonomy consisting of four species of KD: 

  1. FYATHYRIO (“fuck you & the horse you rode in on”), in which the agent (the subject of KD) merely feigns belief in a proposition she knows is not true for the sake of expressing an attitude, perhaps contempt or hostility to members of an opposing cultural group, the recognition of which actually depends on others recognizing that the agent doesn’t really believe it (“Obama was born in Kenya!”);

  2. compartmentalization, in which a belief, or a cluster of beliefs and evaluations (“same-sex relationships enrich my life”), and denial of the same (“homosexuality is a sin”) are both affirmed by the agent, who effortfully cordons them off through behavioral and mental habits that confine their appearance in consciousness to the discrete occasions in which he occupies unintegrated, hostile identities—a form of dissonance avoidance;

  3. partitioning, in which knowledge and styles of reasoning appropriate to its use are effectively indexed with situational triggers that automatically summon them to consciousness, creating the risk that the agent will “disbelieve” what she “knows” if an occasion for making use of that knowledge is not accompanied by the triggering condition (think of the expert who doesn’t recognize a problem as being of the type that demands her technical or specialized understanding); and

  4. dualism, in which the agent simultaneously “rejects” and “accepts” some proposition or set of propositions that admittedly have the same state-of-affairs referent but that constitute distinct mental objects individuated by reference to the uses he makes of them in occupying integrated identities, a task he performs without the experience of either “mistake” or “error” (a signature of the kind of bias distinctive of partitioning) or dissonance (the occasion for compartmentalization).

2. I am most interested in dualism for two reasons.  The first is that I think it is the most plausible candidate explanation for the sort of KD that I believe explains the results in the Measurement Problem (Kahan in press), which reports on a study that found that climate change “believers” and climate change “skeptics” achieve equivalent scores on a “climate science comprehension” assessment test and yet, as indicated, form opposing “beliefs” about the existence of human-caused global warming (indeed, about the existence of global warming regardless of cause).  Indeed, I believe I actually encounter dualism all the time when I observe how diverse citizens who are polarized in their "beliefs in" global warming use climate science that presupposes human-caused global warming when they make practical decisions.

The second is that I feel it is the member of the taxonomy whose psychological mechanism I least understand. It doesn’t answer the WIGOITH question but rather puts it for me in emphatic terms.

3. Here is where Prajwal Kulkarni helps me out.

As I adverted to, Kulkarni’s interest is in public opinion on evolution.  He has insights on KD because evolution is another area in which we see it.

Indeed, KD with respect to evolution supplies the prototype for the “dualism” variant of KD.

As I’ve discussed 439 separate times on this blog, there is zero correlation between “belief in” evolution and the most rudimentary comprehension of the mechanisms of it as represented in the dominant, “modern synthesis” account in evolutionary science.  “Disbelievers” are as likely to comprehend natural selection, random mutation, and genetic variance (and not comprehend them; most on both “sides” of the issue don’t) as “believers.” 

Nor is there any connection between “belief in” evolution and science comprehension generally.

What’s more, “disbelief” is no impediment to learning evolutionary theory. Good teachers can teach smart “disbelieving” kids as readily as they can smart “believing” ones—but doing so doesn’t transform the former into the latter (Lawson & Worsnop 1992).

Indeed, “knowing disbelievers” of evolution can use what they know about the natural history of human beings.  This is the insight (for all of those who, like me I suppose, would otherwise be too obtuse just to notice this in everyday life) of Everhart and Hameed (2013) and Hameed (2014), who document that medical doctors from Islamic cultures simultaneously “reject” evolution “at home,” when they are occupying their identity as members of a religious community, and “accept” it “at work,” when they are occupying their identity—doing their jobs—as professionals.

They are displaying the “dualism” variant of KD.

In response to my admission that these doctors are the occasion for WIGOITH on my part, Kulkarni asks whether I and others who experience WIGOITH are just too hung up on consistency:

I wonder if the problem is that Kahan thinks such people need to be explained in the first place. But why should people be consistent? Why even have that expectation? As Kahan himself notes, even scientists sometimes exhibit cognitive dissonance.

Perhaps we should start from the premise that everyone is intellectually inconsistent at times. Knowing disbelievers should no more need a “satisfying understanding” than amazing basketball players who can’t shoot free-throws. In sports we accept that athletic ability is complicated and can manifest itself in all sorts of unpredictable ways. No one feels the need to explain it because that just the way it is. Why don’t we do the same for intellectual ability?

If we did, we might then conduct research to account for the handful of people who are consistent all the time. Because that’s the behavior that needs explaining.

This is a very fair question/criticism!

Or at least it is to the extent that it points out that what motivates WIGOITH generally—in all instances in which we encounter KD—is an expectation of consistency in beliefs and like intentional states. 

Descriptively, we assume that the agent who harbors inconsistent beliefs is experiencing a kind of cognitive misadventure.  If she refuses to recognize the inconsistency or consciously persists in it, we likely will view her as irrational, a characterization that is as much normative—a person ought to hold consistent beliefs—as descriptive.

Maybe that stance is unjustified (Foley 1979).  In any case, it is rarely openly interrogated and as a result might be blinding us to how living with contradiction coheres with actions and ways of life that we would recognize as perfectly sensible for someone to pursue (although I think if we came to that view, we’d definitely still not think that contradictory beliefs are the “norm”—on the contrary, we’d still likely view them as a recurring source of misadventure and error and possibly mental pathology).

Still, I don’t think any such expectation or demand for “consistency” is what’s puzzling me about dualism!

The reason is that I don’t think there necessarily is any contradiction in the beliefs and related intentional states of the dualist.  For the Pakistani Dr., “the theory of evolution” he “rejects” and the “theory of evolution” he “accepts” are “entirely different things.” 

They appear the same to us, as (obtuse?) observers, because we insist on defining his beliefs with reference solely to their state-of-affairs referents (here, the theory of human being’s natural history that originates in the work of Darwin and culminates in the modern synthesis).

But as objects in the Dr’s inventory of beliefs, attitudes, and appraisals—as objects of reasoning that figure in his competent negotiation of the situations that confront him in one or another sphere of life—they are distinct.

Perhaps, to borrow a bit from the partitioning view, the objects are “indexed” with reference to the situational triggers that correspond to his identity “at home” as a member of a religious community and to his identity “at work” as a medical professional. 
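To make the “indexing” idea concrete, here is a toy sketch of my own devising—nothing drawn from the partitioning literature or from Everhart and Hameed—contrasting beliefs individuated by their state-of-affairs referent alone with beliefs individuated by a (referent, identity-context) pair:

```python
# Toy illustration only: two ways of individuating belief objects.
# All names and values are hypothetical.

# Keyed by referent alone, "accept" and "reject" collide, and the dualist
# looks flatly self-contradictory:
beliefs_by_referent = {"theory of evolution": "accept? reject?"}  # contradiction

# Keyed by (referent, identity-context), the two stances are distinct mental
# objects, and no single retrieval ever surfaces them side by side:
beliefs_by_use = {
    ("theory of evolution", "at home / religious identity"): "reject",
    ("theory of evolution", "at work / medical professional"): "accept",
}

def stance(referent: str, context: str) -> str:
    """Return the stance summoned when a situational trigger activates the
    belief object indexed to that identity-context."""
    return beliefs_by_use[(referent, context)]

print(stance("theory of evolution", "at work / medical professional"))  # accept
print(stance("theory of evolution", "at home / religious identity"))    # reject
```

On the second scheme the Dr never experiences a contradiction to resolve, because the occasions of use never call up both objects at once.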

But unlike the expert who, as a result of partitioning, fails to access the knowledge (or know-how) that she herself understands to be requisite to some task (perhaps responding to a brush fire (Lewandowsky & Kirsner 2000)), the Dr doesn’t feel he has “made a mistake” when it is brought to his attention that he has “rejected” a proposition that he also “accepts.”  He says, in effect, that you have made a serious mistake in thinking what he rejects and accepts are the same thing just because they have the same state-of-affairs referent. 

I am wondering if he is right. 

Is there a cogent account of the psychology of KD under which we can understand the mental objects of the “theory of evolution” that the Dr “rejects” and the “theory of evolution” that he “accepts” to be distinct because they are properly individuated with reference to the role they play in his negotiation of an integrated set of identities (integrated as opposed to segregated, as in the case of the dissonance-experiencing, compartmentalizing, closeted gay man)?

If so, what is it?

Once we understand it, we can then decide what to make of this way of organizing the contents of one’s mind—whether we think it is “rational” or “irrational,” a cognitive ability that contributes to being able to live a good life or a constraining form of self-delusion & so forth.

I am grateful to Kulkarni for helping me to get clearer on this in my own thinking.

But I wonder now if he doesn’t agree that there is something very much worth explaining here.

Refs

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo. Edu. Outreach 6, 1-8 (2013).

Foley, R. Justified inconsistent beliefs. American Philosophical Quarterly, 247-257 (1979).

Hameed, S. Making sense of Islamic creationism in Europe. Unpublished manuscript (2014).

Kahan, D. M. Climate Science Communication and the Measurement Problem, Advances in Pol. Psych. (in press).

Lawson, A.E. & Worsnop, W.A. Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching 29, 143-166 (1992).

Lewandowsky, S., & Kirsner, Kim. Knowledge partitioning: Context-dependent use of expertise. Memory & Cognition 28, 295-305 (2000).

Tuesday
Aug262014

Democracy & the science communication environment (video lecture)

Posted synopsis & slides a while back, but for anyone who wants to watch the event (Cardiff Univ., Feb. 2014), here you go!

 

Monday
Aug252014

Lewandowsky on "knowing disbelief"

So my obsession with the WIGOITH (“What is going on in their heads”) question hasn’t abated since last week. 

The question is put, essentially, by the phenomenon of “knowing disbelief.” This, anyway, is one short-hand I can think of for describing the situation of someone who, on the one hand, displays a working comprehension of and assent to some body of evidence-based propositions about how the world works but who simultaneously, on the other, expresses-- and indeed demonstrates in consequential and meaningful social engagements-- disbelief in that same body of propositions.

One can imagine a number of recognizable but discrete orientations that meet this basic description. 

I offered a provisional taxonomy in an earlier post

  • “Fuck you & the horse you rode in on” (FYATHYRIO), in which disbelief is feigned & expressed only for the sake of evincing an attitude of hostility or antagonism (“Obama was born in Kenya!”); 
  • compartmentalization, which involves a kind of mental and behavioral cordoning off of recognized contradictory beliefs or attitudes as a dissonance-avoidance strategy (think of the passing or closeted gay person inside of an anti-gay religious community);
  • partitioning, which describes the mental indexing of a distinctive form of knowledge or mode of reasoning (typically associated with expertise) via a set of situational cues, the absence of which blocks an agent’s reliable apprehension of what she “knows” in that sense; and
  • dualism, in which the propositions that the agent simultaneously “accepts” and “rejects” comprise distinct mental objects, ones that are identified not by the single body of knowledge that is their common referent but by the distinct uses the agent makes of them in inhabiting social roles that are not themselves antagonistic but simply distinct

The last of these is the one that intrigues me most. The paradigm is the Muslim physician described by Everhart & Hameed (2013): the “theory of evolution” he rejects  “at home” to express his religious identity is “an entirely different thing” from the “theory of evolution” he accepts and indeed makes use of “at work” in performing his medical specialty and in being a doctor.

But the motivation for trying to make sense of the broader phenomenon—of “knowing disbelief,” let’s call it—comes from the results  of the “climate science literacy” test—the “Ordinary climate science intelligence” assessment—described in the Measurement Problem (Kahan, in press).

Administered to a representative national sample, the OCSI assessment showed, unsurprisingly, that the vast majority of global-warming “believers” and “skeptics” alike have a painfully weak grasp of the mechanisms and consequences of human-caused climate change.

But the jolting (to me) part was the finding that the respondents who scored the highest on OCSI—the ones who had the highest degree of climate-science comprehension (and of general science comprehension, too)—were still culturally polarized in their “belief in” climate change.  Indeed, they were more polarized on whether human activity is causing global warming than were the (still very divided) low-scoring OCSI respondents.
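(For the statistically inclined: the signature of this pattern is an interaction—identity matters more, not less, as comprehension rises. Here is a minimal sketch of that analytic logic using made-up variable names and simulated data; it is emphatically not the OCSI dataset or the analysis reported in the paper.)

```python
# Sketch of the "polarization grows with comprehension" signature.
# Hypothetical variable names, simulated data -- not the OCSI study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ocsi_score": rng.normal(0, 1, n),       # standardized comprehension score
    "right_leaning": rng.integers(0, 2, n),  # 1 = right-leaning, 0 = left-leaning
})

# Simulate the pattern: identity matters more as comprehension rises.
logit_p = 0.5 + 0.1 * df["ocsi_score"] - (1.0 + 1.5 * df["ocsi_score"]) * df["right_leaning"]
df["believes_agw"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# A significant ocsi_score:right_leaning interaction (negative here) is the
# statistical fingerprint of polarization that widens with comprehension.
model = smf.logit("believes_agw ~ ocsi_score * right_leaning", data=df).fit()
print(model.summary())
```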

What to make of this?

I asked this question in my previous blog post.  There were definitely a few interesting responses but, as in previous instances in which I’ve asked for help in trying to make sense of something that ought to be as intriguing and puzzling to “skeptics” as it is to “believers,” discussion in the comment section for the most part reflected the inability of those who think a lot about the “merits” of the evidence on climate change to think about anything else (or even see when it is that someone is talking about something else).

But here is something responsive. It came via email correspondence from Stephen Lewandowsky, who has done interesting work on “partitioning” (e.g., Lewandowsky & Kirsner 2000), not to mention public opinion on climate change:

1. FYATHYRIO. I think this may well apply to some people. I enclose an article [Wood, M.J., Douglas, K.M. & Sutton, R.M. Dead and Alive Beliefs in Contradictory Conspiracy Theories. Social Psychological and Personality Science 3, 767-773 (2012)] that sort of speaks to this issue, namely that people can hold mutually contradictory beliefs that are integrated only at some higher level of abstraction—in this instance, that higher level of abstraction is “fuck you” and nothing below that matters in isolation or with respect to the facts.

2. Compartmentalization. What I like about this idea is that it provides at least a tacit link to the toxic emotions that any kind of challenge will elicit from those people.

3. Partitioning. I think as a cognitive mechanism, it probably explains a lot of what’s going on, but it doesn’t provide a handle on the emotions.

4. Dualism. Neat idea, I think there may be something to that. The analogy of the Muslim physician works well, and those people clearly exist. Where it falls down is because the people engaging in dualism usually have some tacit understanding of that and can even articulate the duality. Indeed, the duality allows you to accept the scientific evidence (as your Muslim Dr hypothetically-speaking does) because it doesn’t impinge on the other belief system (religion) that one holds dear.

So what do I think? I am not sure but I can offer a few suggestions: First, I am not surprised by any sort of apparent contradiction because my work on partitioning shows that people are quite capable of some fairly deep contradictory behaviors—and that they are oblivious to it. Second, I think that different things go on inside different heads, so that some people engage in FYATHYRIO whereas others engage in duality and so on. Third, I consider people’s response to being challenged a key ingredient of trying to figure out what’s going on inside their heads. And I think that’s where the toxic emotion and frothing-at-the-mouth of people like Limbaugh and his ilk come in. I find those responses highly diagnostic and I can only explain them in two ways: Either they feel so threatened by [the mitigation of] climate change that nothing else matters to them, or they know that they are wrong and hate being called out on it—which fits right in with what we know about compartmentalization. I would love to get at this using something like an IAT

Anyhow, just my 2c worth for now..

I do find this interesting and helpful. 

But as I responded to Steve, I don’t think “partitioning,” which describes a kind of cognitive bias or misfire related to accessing expert knowledge, is a very likely explanation for the psychology of the "knowing disbelievers" I am interested in.

The experts who display the sort of conflict between "knowing" and "disbelieving" that Steve observes in his partitioning studies would, when the result is pointed out to them, likely view themselves as having made a mistake. I don't think that's how the high-scoring OCSI "knowing disbelievers" would see their own sets of beliefs.

And for sure, Steve's picture of the “frothing-at-the-mouth” zealot is not capturing what I'm interested in either.

He or she is a real type--and has a counterpart, too, on the “believer” side: contempt-filled and reason-free expressive zealotry is as ideologically symmetric as any other aspect of motivated reasoning.

But the “knowing disbeliever” I have in mind isn’t particularly agitated by any apparent conflict or contradiction in his or her states of belief about the science on climate change, and feels no particular compulsion to get in a fight with anyone about it.

This individual just wants to be who he or she is and make use of what is collectively known to live and live well as a free and reasoning person.

Not having a satisfying understanding of how this person thinks makes me anxious that I'm missing something very important.   

References

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo. Edu. Outreach 6, 1-8 (2013).

Hameed, S. Making sense of Islamic creationism in Europe. Unpublished manuscript (2014).

Kahan, D. M. Climate Science Communication and the Measurement Problem, Advances in Pol. Psych. (in press).

Lewandowsky, S., & Kirsner, Kim. Knowledge partitioning: Context-dependent use of expertise. Memory & Cognition 28, 295-305 (2000).