
Recent blog entries
Friday
Oct 31, 2014

"Who am I? Why am I here?" My (ongoing) trip to West Point

At West Point yesterday & today, where I'm giving talks & co-teaching a criminal law class.

The military, it seems to me, is an institution that is ruthlessly self-evaluative & remarkably unambivalent -- to the point of lacking any self-consciousness of the attitude it has adopted -- about the use of empirical methods of self-assessment.

The questions & discussions are great & there are tons of really smart people here thinking about how to teach critical thinking & cultivate professional judgment.

The educational environment here is, I think, a token of how successfully the US military has adapted its practices and outlooks to the political culture of the Liberal Republic of Science.

I'm not an historian, of course, but it does seem to me that unpardonable damage has been done to our military by a civilian leadership that lacked these very commitments to empirical self-evaluation & liberal principles of self-government.

Some lecture slides:

What is 'cultural cognition'?  I'll show you!

“Motivated System 2 Reasoning”: Rationality in a Polluted Science Communication Environment



Wednesday
Oct 29, 2014

More discussion of SPBMC ...

Chris Mooney has written an interesting story in the Washington Post about the SPBMC paper on climate-science literacy and cultural cognition of global warming.  There is also interesting discussion appearing in the comments section after my own post on the paper--including an important and informative response by Stevenson.

So today I'd rather see what others think about SPBMC & my response to it (including whether I'm missing something; wouldn't be the first time!) than divert anyone to a new topic (like whether the ebola dog should be released from quarantine etc).

Tuesday
Oct 28, 2014

What comes first--misinformation or the motivation to believe it? Some reflections on study design

Unlike our myriad competitors, the CCP blog now & again gets genuine experts to come in & address complicated stuff that they actually know something about.  We've been criticized for this, but sometimes I'm too busy to write myself & have no choice.  Anyway, the following is an expert guest post from a commentator making his second "guest" appearance.  Kevin Arceneaux's last essay, Partisan Media Are Not Destroying America (although subsequently disproven by events), was the most popular post ever on this blog, being read by an estimated 19.3 billion readers.  Now he's back to address related issues of study design and causal inference in assessments of the impact of partisan news coverage on public opinion.  Arceneaux is the author, with Martin Johnson, of the acclaimed Changing Minds, Changing Channels (Univ. Chicago Press 2013).


Kevin Arceneaux:

News and entertainment media have the dubious distinction of serving as both a whipping boy and a potential savior.  They are often treated as the source of many social ills.  Beauty magazines perpetuate unhealthy body images; political advertisements inveigle; partisan news programs mislead and confuse (especially if we happen to disagree with them).  We also imagine that their power can be put to good use.  Media can serve as a catalyst for positive change, however defined.

As seductive as these narratives are, the problem is that they are difficult to evaluate empirically.  How could this be?  In modern advanced democracies, like the United States, we are surrounded by media.  Traditional forms of mass media – newspapers, magazines, radio, television – operate alongside newer forms of interactive media on the Internet.  Can’t we just observe how people respond to all these forms of news media?

We can certainly observe what people consume and what they do, but we can’t always infer the effects of media consumption on their behavior.  Observational research is inherently beset by many threats to causal inference, and the current media environment only makes it worse.  The study of media effects could easily be the poster child for the dictum that correlation does not necessarily imply causation.

The biggest hurdle to divining the effects of media from observational research is the fact that people, by and large, choose what to consume.  For instance, we know that conservatives say that they consume conservative media at higher rates than other Americans.  But because conservatives are consciously choosing to view conservative media and construct conservative networks on social media, it is difficult to sort out how much of their conservatism comes from their personal predispositions and how much of it comes from the messages that they encounter.

To muddy the waters further, the ability to select among news and entertainment alternatives creates incentives for media producers to fashion content that will appeal to particular segments of the population.  To take a current example, Fox News has received its fair share of criticism for how it has covered the threats posed by the Islamic State of Iraq and Syria (ISIS) and the Ebola epidemic.  It is easy to accuse Fox News and other media outlets of whipping up hysteria, but we must also entertain the possibility that they are just giving their viewers what they think they want.  People who are chronically worried about threats need a place to turn to for answers, and outlets like Fox News are happy to oblige.

From the standpoint of causal inference, it is difficult to pinpoint the effects of Fox News, because people who aren’t predisposed to be worried about Ebola are happily consuming different media content, and if, for some reason, they happened across Fox News coverage of the Ebola epidemic, they may find it more amusing than worrying.

The problem here is so bad that statisticians refer to it as the Fundamental Problem of Causal Inference.  In a nutshell, the only way we can really know the effect of media content is to observe two states of the world: one where the person consumes it and one where the person does not.  Of course, that’s impossible.  The only way forward for intrepid researchers is to figure out how to construct a comparable group of people: for example, people who are just like the ones who watch Fox News but who do not watch it.  That is easier said than done.

Fancy statistical models that try to address the problem by accounting for people’s viewing preferences (i.e., “control variables”) can actually cause more harm than good.  At the very least, this approach rests on the strong assumption that one has accounted for everything, and we can never know if we have.

Another approach that fares a little better is observing the same people over time.  In doing so, we can get a before and after take on their behavior.  Yet this approach makes strong assumptions, too, and as Tobias Konitzer points out in a recent conference paper, even if we can make those assumptions, we need lots of observations across time.  Panel surveys are rare and long-running panel surveys, even rarer.

Many scholars, including myself, have pointed to randomized experiments as a way forward.  Experiments use random assignment to construct comparable groups of individuals.  Some people are exposed to media content while others are not.  Because people were assigned to groups at random, we know that they should have similar tastes and similar responses.  So, if we see one group behaving differently than another, we can more credibly infer that the difference was caused by the treatment that we administered.
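To make that contrast concrete, here is a minimal simulation sketch in Python (the numbers and the response model are invented for illustration; nothing here comes from any actual media study) of why self-selection biases the observational comparison while random assignment recovers the true effect:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # A latent predisposition (e.g., chronic worry) drives BOTH the choice
    # to watch partisan news AND the outcome we care about.
    predisposition = rng.normal(size=n)

    # Observational world: people select into viewing.
    watches = predisposition + rng.normal(size=n) > 0.5

    TRUE_EFFECT = 0.2  # assumed causal effect of exposure, for illustration
    outcome = predisposition + TRUE_EFFECT * watches + rng.normal(size=n)
    naive = outcome[watches].mean() - outcome[~watches].mean()

    # Experimental world: a coin flip assigns exposure, so predispositions
    # balance across groups and the difference in means isolates the effect.
    assigned = rng.random(n) < 0.5
    outcome_x = predisposition + TRUE_EFFECT * assigned + rng.normal(size=n)
    experimental = outcome_x[assigned].mean() - outcome_x[~assigned].mean()

    print(f"naive observational estimate: {naive:.2f}")         # far above the true 0.2
    print(f"randomized estimate:          {experimental:.2f}")  # close to 0.2

The naive viewer/non-viewer comparison absorbs the group difference in predisposition; the coin flip breaks that link by construction.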

While randomized experiments do allow us to say with more confidence that exposure to, say, partisan news content causes people to do X, Y, or Z, they are not a panacea.  For one, experimentalists generally construct comparable groups and then ask people to do things that they would not always do or expose them to things that they may not have encountered but for the intervention of the researcher.  Consequently, we cannot be certain that they would not behave differently if the treatment had unfolded through natural means.  Field experiments and natural “experiments” (i.e., observational designs that have plausibly exogenous treatments) do better on this score, but they are often difficult to employ.

Another limitation is that experiments are not particularly good at measuring the cumulative effect of media exposure, but rather at pinpointing the effect of a particular intervention.  So, the upshot here should be familiar: nothing is perfect and there is no silver bullet. It may be trite, but it is true.  We learn the most through the triangulation of methods. 

Monday
Oct 27, 2014

Unconfounding knowledge from cultural identity--as big a challenge for measuring the climate-science literacy of middle schoolers as of grown-ups

A friend (of the best sort—one who has “got your back” to protect you from entropy’s diabolical plan to deprive you of the benefits of advances in collective knowledge) sent me a very interesting new study:

Stevenson, K. T., Peterson, M. N., Bondell, H. D., Moore, S. E., & Carrier, S. J., Overcoming skepticism with education: interacting influences of worldview and climate change knowledge on perceived climate change risk among adolescents, Climatic Change, 126(3-4), 293-304 (2014).

I very much like the SPBMC  paper.

One cool thing about it is that it tests the influence of cultural predispositions on the global-warming beliefs of middle schoolers.   It’s not the only study that has adapted the cultural cognition worldview measures to students, but it’s one of only a few and the only one I know of that is applying the measures to kids this young.

Consistent with research involving adult subjects, SPBMC find that cultural outlooks—in particular “individualism”—predict skepticism about climate change.

SPBMC decided not to use (or at least not to report results involving) the hierarchy-egalitarianism worldview measure (maybe they figured some of the items weren’t suited for minors; I could understand that).

Instead they used a “social dominance” one and found that it didn’t predict anything relating to climate change attitudes—also interesting.

But of course the most important & interesting thing is what  SPBMC have to say about the relationship between climate-literacy & acceptance/belief in human-caused global warming, & the influence of cultural individualism on the same.

I found this part of the paper extremely valuable & informative.  I have a strong feeling that they have mined only a portion of the rich deposits of knowledge that their data contain.

Nevertheless, I found myself unconvinced (at least at this point) that the results they reported had the significance that they attached to them.

SPBMC present two principal findings. One is that acceptance of human-caused climate change in their student sample was associated with higher climate-science literacy. 

The other is that climate-science literacy had a bigger impact on kids who were relatively individualistic.  That is, as those kids display higher levels of climate science literacy, the change in the probability that they will believe in human-caused climate change increases even more than it does in kids who are relatively “communitarian” as their science-literacy levels increase.

SPBMC infer from these findings that “[c]limate literacy efforts designed for adolescents may represent a critical strategy to overcoming climate change related challenges, given stable or declining concern among adults that is driven in part by entrenched worldviews.”

For adults, worldviews are well entrenched and exert considerable influence over climate change risk perception. During the teenage years, however, worldviews are still forming, and this plasticity may explain why climate change knowledge overcomes skepticism among individualist adolescents . . . .

I myself strongly agree with SPBMC that climate-science education can make a big contribution to overcoming cultural polarization on climate change—although for reasons that I think differ from those of SPBMC.   But put that aside for a second.

The problem, in my view, is that the measure of climate-science literacy that SPBMC constructed fails to address what existing research teaches us is the biggest challenge in measuring public understanding of climate science.

That challenge is how to unconfound or disentangle genuine knowledge from the positions people take by virtue of their cultural identity.  An assessment instrument must  overcome this challenge in order to be a valid measure of climate-science literacy.

In general, people’s perceptions of risk reflect affective appraisals—positive or negative—of the putative risk source (nuclear power, guns, vaccines, etc.).

For most people most of the time, these feelings don’t reflect their comprehension of scientific data or the like. On the contrary, how people feel is more likely to shape their assessments of all manner of information, which they can be expected to conform to their pro- or con-attitude toward the putative risk source.

In this circumstance, survey items that elicit people’s understandings of the risks and benefits associated with some activity or state of affairs are best understood as simply indicators of the unobserved or latent affective orientation that people have toward that activity or state of affairs.  That attitude is all they are genuinely measuring (Loewenstein et al. 2001; Slovic et al. 2004).

This is a huge issue for measuring climate-science literacy.

Sadly, propositions of fact on climate change—like whether it is happening & whether humans are causing it—have become entangled in antagonistic cultural meanings, transforming them into badges of membership & loyalty to affinity groups of immense significance in people’s everyday lives.

Study respondents can thus be expected to answer questions relating to climate change in a manner that reflects the pro- or con- affective stance that corresponds to their cultural identities. 

If they are the sort of persons who are culturally predisposed to believe in human-caused global warming (or “accept” it; let’s be sure to avoid the confused & confusing idea that there’s an important distinction between “believing” something & “accepting” or “knowing” it), they will affirm pretty much any proposition that to them sounds like the sort of thing one who “believes in” climate change would say.

As a result, they’ll incorrectly agree that human-caused global warming will increase the incidence of skin cancer, that industrial sulfur pollution is causing climate change, that water vapor traps more heat than any other greenhouse gas, etc.

Their “acceptance” of human-caused global warming, in other words, doesn’t reflect knowledge of the basic mechanisms that drive climate change or of the scientific evidence for how they work.

Rather, it expresses who they are.

Study after study after study after study has demonstrated this (Bostrom et al. 1994; Reynolds et al. 2010; Tobler, Visschers & Siegrist 2012; Guy et al. 2014).

To be valid, then, a climate-science literacy scale must successfully distinguish respondents whose correct answers reflect only their identity-based affective orientation toward global warming from those whose correct answers show genuine climate-science comprehension.

The only way to design such a scale is to include a sufficiently large number of appropriately weighted items for which the incorrect answers are likely to seem correct to someone who is culturally predisposed to believe in climate change but who lacks understanding of the scientific basis for that position.
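The point lends itself to a quick simulation.  The sketch below (Python; the response model and all numbers are invented for illustration, not taken from any of the cited studies) builds two toy batteries: one skewed toward items whose correct answers "sound right" to believers, and one that mixes in items designed to trip up identity-based guessing:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000
    identity = rng.normal(size=n)    # >0: culturally predisposed to "believe"
    knowledge = rng.normal(size=n)   # genuine climate-science comprehension

    def p_correct(sounds_like_believer_position):
        # Toy response model: knowledge always helps; on items whose correct
        # answer matches the believer position, believers guess right anyway,
        # while on items that cut the other way they guess wrong.
        z = knowledge + (identity if sounds_like_believer_position else -identity)
        return 1 / (1 + np.exp(-z))

    skewed = sum(rng.random(n) < p_correct(True) for _ in range(15))
    mixed = (sum(rng.random(n) < p_correct(True) for _ in range(8))
             + sum(rng.random(n) < p_correct(False) for _ in range(7)))

    for name, scale in [("skewed", skewed), ("mixed", mixed)]:
        print(name,
              "r(identity) = %+.2f" % np.corrcoef(scale, identity)[0, 1],
              "r(knowledge) = %+.2f" % np.corrcoef(scale, knowledge)[0, 1])
    # The skewed battery correlates with identity about as strongly as with
    # knowledge; the mixed battery's identity correlation collapses toward zero.

In other words, only the balanced battery isolates the thing the scale is supposed to measure.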

Now here’s the most interesting thing: if one includes a mix of items that successfully distinguishes those who “accept” human-caused climate change based on their predispositions from those who genuinely get the mechanisms of climate change, then one will discover that those who don’t “accept” or “believe in” human-caused climate change know just as much about the mechanisms of climate change as those who say they do accept it.

For sure, most “skeptics” are painfully ignorant about climate change science.

But that’s true for most “believers” too!

Only a very small portion of the general public—consisting of individuals who score very high on a general science comprehension test—can consistently distinguish propositions that most expert climate scientists accept from propositions that sound like ones such experts might accept but that in fact are wholly out of keeping with the basic mechanisms and dynamics of global warming.

Yet even among these very climate-science literate members of the public, there is no consensus on whether global warming is occurring.  Just like their climate-science illiterate counterparts, their “beliefs” about human-caused global warming are predicted by their cultural identities (Kahan in press).

In sum, “acceptance” of or “belief in” human-caused global warming is not a valid indicator of knowledge in them either. It is an indicator of who one is, culturally speaking—nothing more and nothing less.

Judging from the results they reported in their paper, at least, SPBMC did not construct a climate-science literacy measure geared to avoiding the “identity-knowledge” confound.

In fact, they actually selected from a larger battery of items  (Tobler, Visschers & Siegrist 2012) a subset skewed toward ones that a test-taker who is culturally predisposed to “believe in” human-caused global warming could be expected to answer correctly regardless of how much or little that person actually knows about the mechanisms of climate change (e.g., “For the next few decades, the majority of climate scientists expect  a warmer climate to increase the melting of polar ice, which will lead to an overall rise of the sea level ”;  “... an increase in extreme events, such as droughts, floods, and storms”; “... a cooling down of the climate”; “The decade from 2000 to 2009 was warmer than any other decade since 1850.”). 

SPBMC left out of their battery items that Tobler et al. (2012) and other studies have found believers in climate change are highly likely to get wrong (e.g., “For the next few decades, the majority of climate scientists expect  an increasing amount of CO2 risks will cause more UV radiation and therefore a larger risk for skin cancer”; “Water vapor is a greenhouse gas”; “In a nuclear power plant, CO2 is emitted during the electricity production”; “On short-haul flights (e.g., within Europe) the average CO2 emission per person and kilometer is lower than on long-haul flights (e.g., Europe to America).”)

By my count, only 3 of the 17-19 items SPBMC identify as ones included in their scale (there is a discrepancy in the number that they report using in the text and number that appear in the on-line supplementary information, where the item-wording appears) are ones that existing studies have shown were likely to elicit wrong answers from low climate-science comprehending respondents who are nonetheless culturally predisposed to believe in climate change (“the ozone hole is the main cause of the greenhouse effect [true-false]”; “For the next few decades, the majority of climate scientists expect a precipitation increase in every region worldwide”; “Carbon dioxide (CO2) is harmful to plants”).

If one constructs a “climate science literacy” scale like this, it is bound to correlate with “acceptance” of global warming because the scale will itself be measuring the same cultural predisposition that inclines people to accept human-caused global warming.

Indeed, included in the SPBMC scale were true-false items that measured acceptance of human-caused climate change:

  • The increase of greenhouse gasses is mainly caused by human activities.
  • With a high probability, the increase of carbon dioxide (CO2) is the main cause of climate change.
  • Climate change is mainly caused by natural variations (such as changes in solar radiation and volcanic eruptions).

Obviously, if one is testing the hypothesis that acceptance/belief in human-caused global warming is caused by understanding of climate science, then the former must be defined independently of the latter. 

Because SPBMC put "acceptance" items in their climate literacy scale, their finding that global-warming acceptance is associated with climate-science literacy is circular.
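The circularity is mechanical, and easy to demonstrate: even if every other item in the battery were pure noise, a scale that folds the outcome into itself will correlate with the outcome.  A minimal sketch (Python, invented data):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 20_000

    # Suppose "acceptance" were pure identity expression, unrelated to the
    # other items (here, 14 items answered entirely at random).
    acceptance = (rng.random(n) < 0.5).astype(float)
    other_items = (rng.random((n, 14)) < 0.5).sum(axis=1)

    # A "literacy" scale that mistakenly folds in 3 acceptance-style items:
    scale = other_items + 3 * acceptance

    print("r(scale, acceptance) = %.2f" % np.corrcoef(scale, acceptance)[0, 1])
    # Prints roughly 0.6 -- a strong "association" built in by construction.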

The same problem, in my view, characterizes SPBMC’s finding on the relative impact of climate-science literacy on students who are relatively individualistic.

Again, SPBMC’s climate-science measure is itself measuring acceptance of human-caused climate change.

So for them to say (based on a correlational model) that “climate science literacy” has a bigger impact on individualists' willingness to “accept climate change” than it does on communitarians’ is equivalent (mathematically/logically) to saying: “Reducing climate-skepticism in cultural individualists who don't believe in climate change would have a bigger impact on their willingness to accept human-caused climate change than would reducing the skepticism of cultural communitarians who already believe in climate change. . . .”

Can’t argue with that—but only because it’s essentially a tautology.

The practical question has always been why individualists are so strongly predisposed to skepticism (and communitarians to belief—same thing).

There is already evidence that the cultural individualists who score highest on a valid climate-science literacy scale are not more likely than low-scoring cultural individualists to say they accept/believe in human-caused global warming.

Because it is unclear that SPBMC constructed a scale that measures knowledge & not just a pro-belief affective orientation—indeed, because their climate-science comprehension scale includes acceptance of human-caused climate change—it doesn’t support any inference that greater climate-science comprehension would have such an effect in culturally individualistic middle schoolers.

As I mentioned, I do believe that improving climate-science education would make a very big contribution to dissipating political polarization on global warming.

The reason isn’t that understanding climate science in itself can be expected to induce people to say they “believe in” climate change.  Again, what people say about what they believe about climate change isn’t a measure of what they know; it is a measure of who they are.

But precisely for that reason, learning to teach kids climate science will require teachers to learn how to dispel from the classroom the toxic affiliation between climate change positions and identities that now divides adults in the political realm.  When teachers learn how to do that—as I’m confident they will—then we can apply those lessons more broadly to the political domain so that there too we can use what we know rather than fight over whose side the state is going to take in a mean, illiberal status competition.

Indeed, that SPBMC performed a study like this in an educational context fills me with deep admiration.  This is the sort of research we desperately need more of, in my view.

And notwithstanding the critique I’m offering, I’m convinced there is a lot that can be learned from this paper. 

In particular, I really really hope SPBMC will report more of their data—including the psychometric properties of their climate-science literacy scale and summary data on how scores actually are distributed in their sample. 

They’d certainly be welcome to do so in this blog!

Still, as a scholar grappling with the central psychometric issues involved in measuring climate science literacy, I just don’t think the particular results SPBMC have reported support the conclusions that they purport to draw.

I’m sure they’d agree with me, too, that scholars investigating these issues are obliged to speak up when they see a study that they think hasn’t fully addressed them.  If scholars don't do this out of some misplaced sense of politeness (or any other sensibility, for that matter, that constrains open and candid scholarly exchange), then science communicators and educators who are relying on empirical work to make informed judgments will end up making serious and costly errors.

It should also go without saying that it is a mistake to think peer review happens only before a paper is published.  If anything, that’s precisely when meaningful peer review begins.

Refs

Bostrom, A., Morgan, M. G., Fischhoff, B., & Read, D. (1994). What Do People Know About Global Climate Change? 1. Mental Models. Risk Analysis, 14(6), 959-970. doi: 10.1111/j.1539-6924.1994.tb00065.x

Guy, S., Kashima, Y., Walker, I., & O'Neill, S. (2014). Investigating the effects of knowledge and ideology on climate change beliefs. European Journal of Social Psychology, 44(5), 421-429.

Kahan, D. M. (in press). Climate science communication and the Measurement Problem. Advances in Political Psychology.

Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. (2001). Risk as Feelings. Psychological Bulletin, 127, 267-287.

Reynolds, T. W., Bostrom, A., Read, D., & Morgan, M. G. (2010). Now What Do People Know About Global Climate Change? Survey Studies of Educated Laypeople. Risk Analysis, 30(10), 1520-1538. doi: 10.1111/j.1539-6924.2010.01448.x

Slovic, P., Finucane, M. L., Peters, E., & MacGregor, D. G. (2004). Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis, 24, 311-322.

Stevenson, K. T., Peterson, M. N., Bondell, H. D., Moore, S. E., & Carrier, S. J. (2014). Overcoming skepticism with education: interacting influences of worldview and climate change knowledge on perceived climate change risk among adolescents. Climatic Change, 126(3-4), 293-304.  

Tobler, C., Visschers, V. H. M., & Siegrist, M. (2012). Addressing climate change: Determinants of consumers' willingness to act and to support policy measures. Journal of Environmental Psychology, 32(3), 197-207. doi: http://dx.doi.org/10.1016/j.jenvp.2012.02.001

Sunday
Oct 26, 2014

New paper: "Laws of cognition, cognition of law"

This teeny weeny paper is for a special issue of the journal Cognition. The little diagrams illustrating how one or another cognitive dynamic can be understood in relation to a simple Bayesian information-processing model are the best part, I think; I am almost as obsessed with constructing these as I am with generating the multi-colored "Industrial Strength Risk Perception Measure" scatterplots.
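For readers who haven't encountered that sort of diagram: the underlying model (my paraphrase of the standard setup, not the paper's own notation) is just Bayesian updating in odds form,

    \[
    \underbrace{\frac{\Pr(H \mid E)}{\Pr(\neg H \mid E)}}_{\text{posterior odds}}
    \;=\;
    \underbrace{\frac{\Pr(H)}{\Pr(\neg H)}}_{\text{prior odds}}
    \times
    \underbrace{\frac{\Pr(E \mid H)}{\Pr(E \mid \neg H)}}_{\text{likelihood ratio}}
    \]

where H is the hypothesis and E the new evidence.  Different cognitive dynamics can then be located in different terms -- e.g., one can model identity-protective reasoning as a likelihood ratio that depends on who the person is rather than on the diagnosticity of the evidence.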

Tuesday
Oct 7, 2014

What I believe about teaching "belief in" evolution & climate change

I was corresponding with a friend, someone who has done really great science education research, about the related challenges of teaching evolution & climate science to high school students.

Defending what I've called the "disentanglement principle"-- the obligation of those who are responsible for promoting comprehension of science to create an environment in which free, reasoning people don’t have to choose between knowing what’s known and being who they are-- I stated that I viewed "the whole concept of 'believing' [as] so absurd . . . ."  

He smartly challenged me on this:

I must admit, however, that I do not find the concept of believing to be absurd. I, for example, believe that I have been married to the same woman since I was XX years old. I also believe that I have XX children. I also believe that the best theory to explain modern day species diversity is Darwin's evolution theory. I do not believe the alternative theory called creationism. Lastly, I believe that the Earth is warming due largely to human caused CO2 emissions. These beliefs are the product of my experience and a careful consideration of the alternatives, their predictions, and a comparison of those predictions and the evidence. This is not a matter of who I am (for example, it matters not whether I am a man or a woman, straight or gay, black or white) as much as it is a matter of my understanding of how one comes to a belief in a rational way, and my willingness not to make up my mind, not to form a belief, until all steps of that rational way have been completed to the extent that no reasonable doubt remains regarding the validity of the alternative explanations that have been advanced.

His response made me realize that I've been doing a poor job in recent attempts to explain why it seems to me that "belief in" evolution & global warming is the wrong focus for imparting and assessing knowledge of those subjects.

I don't think the following reply completely fixes the problem, but here is what I wrote back:

I believe you are right! 

In fact, I generally believe it is very confused and confusing for people to say "X is not a matter of belief; it's a fact ....," something that for some reason seems to strike people as an important point to make in debates about politically controversial matters of science. 

Scientists "believe" things based on evidence, as you say, and presumably view "facts" as merely propositions that happen to be worthy of belief at the moment based on the best available evidence. 

I expressed myself imprecisely, although it might be the case that even when I clarify you'll disagree.  That would be interesting to me & certainly something I'd want to hear and reflect on. 

What I meant to refer to  as "absurd" was the position that treats as an object of science education students' affirmation of "belief in" a fact that has been transformed by cultural status competition into nothing more than an emblem of affiliation. 

That's so in the case of affirmation of "belief in" evolution. To my surprise, actually, I am close to concluding that exactly the same is true at this point of affirmation of "belief in" global warming. 

Those who say they "believe in" climate change are not more likely to know anything about it or about science generally than those who say they don't "believe"-- same as in the case of evolution.  

Saying one "disbelieves" those things, in contrast, is an indicator (not a perfect one, of course) of having a certain cultural identity or style-- one that turns out to be unconnected to a person's capacity to learn anything.  

So those who say that one can gauge anything about the quality of science instruction in the US from the %'s of people who say that they "believe in" evolution or climate change are, in my view, seriously mistaken.

Or so I believe--very strongly--based on my current assessment of the best evidence, which includes [a set of extremely important studies] of the effective teaching of evolution to kids who "don't believe" it.  I'd be hard pressed to identify a book or an article, much less a paragraph, that conveyed as much to me about the communication of scientific knowledge as this one:

[E]very teacher who has addressed the issue of special creation and evolution in the classroom already knows that highly religious students are not likely to change their belief in special creation as a consequence of relative brief lessons on evolution. Our suggestion is that it is best not to try to [change students’ beliefs], not directly at least. Rather, our experience and results suggest to us that a more prudent plan would be to utilize instruction time, much as we did, to explore the alternatives, their predicted consequences, and the evidence in a hypothetico-deductive way in an effort to provoke argumentation and the use of reflective thought. Thus, the primary aims of the lesson should not be to convince students of one belief or another, but, instead, to help students (a) gain a better understanding of how scientists compare alternative hypotheses, their predicated consequences, and the evidence to arrive at belief and (b) acquire skill in the use of this important reasoning pattern—a pattern that appears to be necessary for independent learning and critical thought.

Maybe you now have a better sense of what I meant to call "absurd," though it now occurs to me that "absurd" really doesn't capture the sentiment I meant to express.

It makes me sad to think that some curious student might not get the benefit of knowing what is known to science about the natural history of our (and other) species because his or her teacher made the understandable mistake of tying that benefit to a gesture the only meaning of which for that student in that setting would be a renunciation of his or her identity. 

It makes me angry to think that some curious person might be denied the benefit of knowing what's known by science precisely because an "educator" or "science communicator" who does recognize that affirmation of "belief in" evolution signifies identity & not knowledge nevertheless feels that he or she is entitled to extract this gesture of self-denigration as an appropriate fee for assisting someone else to learn.

Such a stance is itself a form of sectarianism that is both illiberal and inimical to dissemination of scientific knowledge. 

I have seen that there are teachers who know the importance of disentangling the opportunity to learn from the necessity to choose sides in a mean cultural status struggle, but who don't know how to do that yet for climate science education.  They want to figure out how to do it; and they of course know that the way to figure it out is to resort to the very forms of disciplined observation, measurement, and inference that are the signatures of science.

I know they will succeed.  And I hope other science communication professionals will pay attention and learn something from them.

Sunday
Oct 5, 2014

Weekend update: New paper on why "affirmative consent" (in addition to being old news) does not mean "only 'yes' means yes" 

As I explained in a recent post, the media/blogosphere shit storm over the "affirmative consent" standard Calif just mandated for campus behavioral codes displays massive unfamiliarity with existing law & with tons of evidence on how law & norms interact.  

First, the "affirmative consent" standard isn't a radical "redefinition" of the offense of rape.  It's been around for three decades.

Second, contrary to what the stock characters who are today reprising the roles from the 1990s "sexual correctness" debate are saying, an "affirmative consent" standard certainly doesn't require a verbal "yes" to sexual intercourse. It simply requires communication of consent by acts or words.

Third, for exactly that reason it hasn't changed outcomes in cases in which decision makers--jurors, judges, university disciplinary board members, etc. -- assess date rape cases.

Because members (male & female) of certain cultural subcommunities subscribe to norms in which a woman can "consent" to sex despite saying "no," decisionmakers who interpret facts against the background of those norms will still treat various forms of behavior -- including suggestive dress, consensual sexual behavior short of intercourse, etc. -- as "communicating" that a woman who says "no" really meant yes.

When those individuals apply the "affirmative consent" standard, they reach the same result that they would have reached under the traditional common-law definition -- or indeed that they would have reached if they were furnished no definition of rape at all. 

Today I happened to come across an interesting new paper that presents a review of the literature on these dynamics & that adds a relevant analysis of how cultural norms influence the testimony of the parties.

In Honest False Testimony in Allegations of Sexual Offenses, J. Villalobos, Deborah Davis, & Richard Leo explain why the same norms that influence decisionmakers' perceptions of "consent" in date rape cases--including ones in which a woman says no--are likely to shape the perceptions of the parties, whose conflicting "honest" testimony will create doubt on the part of decisionmakers. This dynamic, they conclude, helps explain why "cultural predispositions often outweigh legal definitions of sexual consent when individuals make assessments of whether consent has been granted."

People genuinely interested in this issue might want to read it.  

Those playing the stock characters in the media remake of the 1990s (and earlier) reform debate probably won't-- if they had any interest in what the law actually is and how social norms have constrained enforcement of reform formulations of rape, they'd have already been familiar with much of this literature & would have recognized that their positions are actually divorced from reality.

Again, changing behavior on campuses requires changing norms.  Moreover, rather than being an effective instrument for norm change, legal reforms--including affirmative consent standards-- have in the past been rendered impotent b/c of the impact that norms have in shaping decisionmakers' understanding of what those standards mean.

This is a hard issue.  

Maybe a reform like Estrich's "no means no" standard-- an irrebuttable presumption that uttering "no" constitutes lack of consent-- would actually change results by blocking decisionmakers' reliance on contrary social norms.  There's some experimental evidence that this is so.

Or maybe (as some argue) it would produce a backlash that would further entrench existing norms.

Accordingly, maybe the emphasis should be on trying to promote forms of behavior that, through one or another mechanism of social influence, will change norms on campuses.  

That sort of thinking is likely the motivation for the Obama Administration's new "It's on Us" social marketing  campaign.

But one thing is clear: nothing will change if people ignore evidence -- on what the law is, on social norms, and on what real-world experience shows about how the two interact -- and instead opt to engage this issue through platitudinous claims the only function of which is to signify whose "team" people are on in a culture conflict only tangentially connected to the problem at hand.

 

Thursday
Oct 2, 2014

What happens to Pat's perceptions of climate change risks as his/her science literacy score goes up?

A curious and thoughtful correspondent asks:

A while ago, I had read your chart with two lines in red and blue, showing the association between scientific literacy and opinion on climate change separately for liberals and conservatives. [A colleague] gave it favorable mention again in her excellent presentation at the * * * seminar today. 

The subsequent conversation reminded me that I had always wanted to see in addition the simple line chart showing the association between scientific literacy and opinion on climate change for all respondents (without breakdown for liberals and conservatives). Have you ever published or shared that? Please share chart, or, if you haven't ever run that one, please share the data?
Much thanks!

Sure!  

The line that plots the relationship for the sample as a whole will be exactly in between the other 2 lines.  The "right/left" measure is a composite Likert scale formed by summing the (standardized) responses to a 5-point left-right ideology item & a 7-point party-identification item. In the figures you are referring to, the relationship between science literacy and climate change risk perception is plotted separately for subjects whose scores are above or below the mean on that scale.
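In code, the construction is roughly this (a sketch; the variable names and fake data are mine, not the study's):

    import numpy as np

    def zscore(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std()

    def right_left(ideology_1to5, party_id_1to7):
        # Standardize each item, sum, and standardize the sum; higher scores
        # indicate a more conservative/Republican orientation.
        return zscore(zscore(ideology_1to5) + zscore(party_id_1to7))

    # Example with fake responses: split the sample at the scale mean, as the
    # figures do ("conservative" above the mean, "liberal" below).
    rng = np.random.default_rng(4)
    ideo, pid = rng.integers(1, 6, 1000), rng.integers(1, 8, 1000)
    conservative = right_left(ideo, pid) > 0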

I've added a line plotting the "sample mean" relationship between science comprehension and global warming risk perceptions (measured with the "Industrial Strength Risk Perception Measure") to figures for two data sets: one in which subjects' science comprehension was measured with "Ordinary Science Intelligence 1.0" (used in the CCP Nature Climate Change study) & the other in which it was measured with OSI_2.0.


I'm sure you can see the significance (practical, as well as "statistical") of this display for the question you posed, viz., "What's the impact of science literacy in general, for the population as a whole, controlling for partisanship, etc.?"

It's that the question has no meaningful answer.

The main effect is just a simple average of the opposing effects that science comprehension has on climate change risk perceptions (beliefs, etc) conditional on one's cultural identity (for which right-left political outlooks are only 1 measure of many). 

If the effect is "positive" or "negative," that just tells you something about the distribution of cultural-affinities, the relative impact of such affinities on risk perceptions, &/or differences in the correlation between science comprehension and cultural outlooks (which turn out to be trivially small, too) in that particular sample.
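The arithmetic is easy to see in a quick simulation (Python; the slopes and the sample mix are invented to illustrate the crossover interaction, not estimates from any CCP dataset):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 10_000

    right = rng.random(n) < 0.5   # cultural identity, a coin flip here
    osi = rng.normal(size=n)      # science comprehension, standardized

    # Opposing conditional effects: risk perception rises with comprehension
    # for one group and falls for the other (made-up magnitudes).
    slope = np.where(right, -0.5, 0.5)
    risk = slope * osi + rng.normal(scale=0.5, size=n)

    pooled = np.polyfit(osi, risk, 1)[0]
    left_slope = np.polyfit(osi[~right], risk[~right], 1)[0]
    right_slope = np.polyfit(osi[right], risk[right], 1)[0]
    print(f"left {left_slope:+.2f}, right {right_slope:+.2f}, pooled {pooled:+.2f}")
    # With a 50/50 mix the pooled "main effect" sits near zero; tilt the mix
    # (e.g., right = rng.random(n) < 0.6) and its sign flips accordingly.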

Maybe this scatterplot can get this point across visually:

 

In sum, because science comprehension interacts with cultural identity and b/c everyone identifies more or less with one or another cultural group, talking about the "main" effect is not a meaningful thing to do.  All one can say is, "the effect of science comprehension on perceptions of climate change risk depends on who one is." 

Or put it this way: the question, "What's the effect of science comprehension in general, for the population as a whole?" amounts to asking what happens to Pat as he/she becomes more science comprehending.  Pssssst . . . Pat doesn’t exist!

 

Again, I'm sure you get this now that you've seen the data, but it's quite remarkable how many people don't.  How many want to seize on the (trivially small) "main effect" & if it happens to be sloped toward their group's position, say "See! Smart people agree with our group! Ha ha! Nah, nah, boo, boo!" 

They end up looking stupid. 

Not just because anyone who thinks about this can figure out what I've explained about the meaninglessness of "main effect" when the data display this relationship. 

But also because when we see this relationship and the "main effect" is this small, that effect is likely to shift direction the next time someone collects data, something that could happen for any of myriad non-consequential reasons (proportion of cultural types in the sample, random variation in the size of the interaction effect, slight modifications in the measure of science literacy). At that point, those who proclaimed themselves the "winners" of the last round of the "whose is bigger" game look like fools (they are, aren't they?).

But like I said, it happens over & over & over & over ....

But how about some more information about Pat? And about his/her cultural worldview & ideology & their effect on his/her beliefs about climate change?  Why not-- we all love Pat!

 

 

Monday
Sep 29, 2014

Why the science of science communication needs to go back to high school (& college; punctuated with visits to the museum & science film-making studio)

I got to be the opening act for former Freud expert & current stats legend Andrew Gelman (who focused mainly on stats but, so as not to disappoint the expectations of 85% of the audience, did mention Freud) at the SENCER symposium in DC.

Of course, the audience really loved him b/c he spoke, among other things, about how commonplace yet weird it is that people who teach students about validity, reliability, sample selection & other essentials of empirical measurement never stop to examine whether the methods they are using to impart such knowledge are valid, reliable, informed by an unbiased sample of observations etc.

Degrading and ultimately destroying this “self-measurement paradox” is at the core of SENCER’s mission!

And as often happens when one goes to war with an evil & devious enemy, “mission creep” is setting in—which as far as I'm concerned is a very good thing in SENCER's case.

An extension of one of the themes of SENCER’s summer institute, the session I did with Gelman was focused on how the self-measurement paradox affects self-government. Our democracy fails to make use of the best evidence it has on myriad issues—from the vaccination of adolescents for HPV to rising global temperatures—as a result of pervasive inattention to empirical evidence on how ordinary citizens come to know what’s known by science.

Or so I argued (slides here)—and I think Gelman was broadly in agreement, although he worried whether the thought-free, “which button do I push?” culture in the social sciences is rendering it incapable of helping us to gain any insight into these & other matters. . . .

For my part, though, I addressed the question essentially of where SENCER should be focusing its attention if it plans to “scale up” its focus from the science classroom, the museum, and the science programming studio to the democratic political arena. 

My answer: the science classroom, science museum, and science programming studio!

The argument wasn’t any variant of the “knowledge deficit” thesis—the idea that the reason we see persistent political conflict on issues like climate change or gun control is that people lack either familiarity with the best evidence on such issues or the capacity to make sense of it.

Rather it was that the sites of formal and informal science education[1] are ideal laboratories for studying how to counteract the dynamics now stifling constructive public engagement with policy-relevant science.

The basis of this claim is the central thesis of The Measurement Problem.  The data reported in that paper support the conclusion that what people believe about whether human activity is really causing global warming doesn’t reveal what they know but rather expresses who they are.

In fact, the vast majority of climate change “believers” and climate change “skeptics” lack genuine comprehension of even the most elementary aspects of climate change science.  They actually get (believers and skeptics alike) that adding CO2 to the atmosphere heats the atmosphere—but think that CO2 emissions will kill plant life by stifling photosynthesis.  They all know (again, believers and skeptics) that climate scientists believe that increased global warming will result in coastal flooding—but mistakenly believe that climate scientists also think such warming will increase the incidence of skin cancer....

There is a small segment of highly science-literate citizens who can reliably identify what the prevailing scientific view is on the sources and consequences of climate change.  But they are no less polarized than the rest of society on whether human activity is causing global warming!

What people “believe” about global warming indicates, in a measurement sense, the sort of person they are in the same way that political party identification, religiosity, and cultural worldviews do.  The positions they take are, in fact, a way for them to convey their membership in & loyalty to affinity groups that are integral to their social status and to their simple everyday interactions.

Sadly, “who are you, whose side are you on?” is what popular political discourse on the “climate change question” measures, too.

Al Gore is right that the climate debate is “a struggle for the soul of America”—and that is exactly the problem.  If we could disentangle the question “what do we know” from the question “whose side are you on,” then democratic engagement with the best evidence would be able to proceed.  Of course, at that point what to do would still depend massively on what diverse people care about; but fashioning policy amidst differences of that sort is a perfectly ordinary part of democratic life.

But as I explained in the talk, this sort of reason-preempting entanglement of empirical facts in antagonistic cultural meanings is not new for science educators.  They’ve had to deal with it most conspicuously in trying to teach students about evolution.

What people “believe” about evolution likewise has zero correlation with what people know about the scientific evidence on the natural history of human beings or about any other insight human beings have acquired by use of science’s signature methods of observation, measurement, and inference.  “Belief” and “disbelief,” too, are expressions of identity.

But precisely because that’s what they are—precisely b/c free and reasoning people predictably, understandably use their reason to form and persist in positions that advance their stake in maintaining bonds with others who share their outlooks—the teaching of evolution is fraught.  I’m not talking about the politics of teaching evolution; that’s fraught, too, of course.  I’m talking about the challenge that a high school or college instructor faces in trying to make it possible for students who live in a world where positions on evolution express who they are to actually acquire knowledge and understanding of what it is science knows about the natural history of our species.

To their immense credit, science education researchers have used empirical methods to address this challenge.  What they’ve discovered is that a student’s “disbelief” in evolution in fact poses no barrier whatsoever to his or her learning of how random mutation and genetic variance combine with natural selection to propel adaptive changes in the forms of living creatures, including humans. 

After mastering this material, the students who said they “disbelieved” still say they “disbelieve” in evolution.  That’s because what people say in response to the “do you believe in evolution” question doesn’t measure what they know; it measures who they are. 

Indeed, the key to enabling disbelievers to learn the modern synthesis, this research shows, is to disentangle those two things—to make it plain to students that the point of the instruction isn’t to make them change their “beliefs” but to impart knowledge; isn’t to make them into some other kind of person but to give them evidence along with the power of critical discernment essential to make of it what they will.

In my SENCER talk, I called this the “disentanglement principle”: those who are responsible for promoting comprehension of science have to create an environment in which free, reasoning people don’t have to choose between knowing what’s known and being who they are.

That’s going to be a huge challenge for classroom science teachers as well as for museum directors and documentary filmmakers and other science-communication professionals as they seek to enable the public—all of it, regardless of its members’ diverse identities—to understand what science knows about climate.

And they have shown, particularly in the science education domain, that they know the value of using valid empirical methods to implement the disentanglement principle.

SENCER, because it is already very experienced in facilitating empirical investigation aimed at improving the craft norms of science educators, should definitely be supporting science educators, formal and informal, in meeting the challenge of figuring out how to disentangle “who are you, what side are you on” from “what do we know” in the communication of climate science.

And it should be doing exactly that, I argued, as a means of satisfying SENCER’s own goal of combatting the “self-measurement paradox” in democratic politics!

The entanglement problem that science educators (formal and informal) face is exactly the one that is impeding constructive public engagement with climate change and other culturally polarizing issues that turn on policy-relevant science.  How to disentangle identity and knowledge is exactly what those who study science communication in democratic politics need to investigate by valid empirical means.

Valid empirical study of these dynamics, moreover, demands designs and measures that actually engage them.

Too much of the work being done on public opinion & climate change, in my view, lacks this sort of validity.  Indeed, the mistake of thinking that one can “move the needle” on “belief in climate change” by furnishing people with “information,” including the existence of “scientific consensus” on global warming (something polarized citizens already know), is a consequence of over-reliance on public opinion surveys that presuppose flawed theories about the nature of public conflict in this area.

In a series of recent posts, I discussed the concept of external validity—the correspondence, essentially, between study designs and the sort of real-world conditions that those studies are supposed to be modeling.

Neil Stenhouse very usefully supplemented the series with a discussion of the “translation science” methods featured in public health and other disciplines to bridge the inevitable gap between externally valid lab studies and the real-world settings to which lab insights need to be adapted (indeed, disregard of this issue is another serious deficit in current science of science communication work).

The dynamics that must be understood to implement the “disentanglement principle” in science classrooms, science museums, and science documentary studios are, in my view, the same ones that must be understood to dispel cultural polarization over decision-relevant science in democratic politics.  Accordingly, empirical investigations conducted in those educational settings are the ones most likely to be both externally valid and amenable to adaptation to democratic policymaking via field-based “translation science” studies.

To illustrate this point, I discussed in my talk how the “disentanglement principle” has informed CCP field studies conducted on behalf of the Southeast Florida Climate Compact, whose success, I think, reflects the skill of its members in focusing citizens' attention on the unifying question of “what do we know” & avoiding the divisive question “who are you, whose side are you on?” that dominates the national climate debate.

In sum, science of science communication researchers working on our democracy’s science communication problem need to go back to high school, and to college.  They should also be spending more time in museums and science filmmaking studios, collaborating with the professionals there on empirical investigation of efforts to implement the “disentanglement principle.”

Or at least that's how things now look to me.

What do you think? 


[1] Actually, I think the concept of “informal science education” is kind of goofy; science museums and science tv & internet programming respond to the public’s appetite to apprehend what’s known, not a societal need for extension courses!

Friday
Sep 26, 2014

Are military investigators culturally predisposed to see "consent" in acquaintance rape cases?

This is the last (unless it isn't) installment in a series of posts on cultural cognition and acquaintance rape. The first excerpted portions of the 2010 CCP study reported in the paper Culture, Cognition, and Consent: Who Sees What and Why in Acquaintance Rape Cases, 158 U. Penn. L. Rev. 729 (2010). The next, drawing on the findings of that study, offered some reflections on the resurgence of interest in the issue of how to define "consent" in the law generally and in university disciplinary codes.

Below are a pair of posts. The first is by Prof. Eric Carpenter, who summarizes his important new study on how cultural predispositions could affect the perceptions of the military personnel involved in investigating and adjudicating rape allegations. The second presents some comments from me aimed at identifying a set of questions--some empirical and methodological, and some normative and political--posed by Carpenter's findings.

Culture, Cognition & Consent in the U.S. Military


The American military is in a well-publicized struggle to address its sexual assault problem.  In 1991, in the wake of the Tailhook scandal, military leaders repeatedly and publicly assured Congress that they would change the culture that previously condoned sexual discrimination and turned a blind eye to sexual assault.

Over the past two decades, new sexual assault scandals have been followed by familiar assurances, and Congress’s patience has finally run out.  As a result, the Uniform Code of Military Justice (UCMJ) is currently undergoing its most significant restructuring since it went into effect in 1951.  The critical issue is who is going to make the decisions in these cases: commanders, as is the status quo, or somebody else, like military lawyers or civilians.

What does any of that have to do with the Cultural Cognition Project, you might ask?  Well, I was serving as a professor at the Army's law school when I read Dan's article, Culture, Cognition, and Consent.

Those of us at the school were working very hard to train military lawyers and commanders on the realities of sexual assault and to dispel rape myths.  At a personal level, I was often frustrated by the resistance many people showed to this training, particularly the military lawyers.  I suspected this was because rape myths are rooted in deeply-held beliefs about how men and women should behave, and I could not reasonably expect to change those beliefs in a one-hour class.

One of Dan's findings, broadly summarized, was that those who held relatively hierarchical worldviews agreed to a lesser extent than those with relatively egalitarian worldviews that the man in a dorm-room rape scenario should be found guilty of rape. 

My reaction to his finding was a mixture of "ah-ha" and "uh-oh."  The military is full of hierarchical people. 

Continue reading

 
Is military cultural cognition the same as public cultural cognition? Should it be?


I’m really glad Eric Carpenter did this study.  I have found myself thinking about it quite a bit in the several weeks that have passed since I read it.  The study, it seems to me, brings into focus a cluster of empirical and normative issues critical for making sense of cultural cognition in law generally.  But because I think it’s simply not clear how to resolve these issues, I'm not certain what inferences—empirical or moral—can be drawn from Eric’s study.



 

Thursday
Sep252014

Date-rape debate deja vu: the script is 20 yrs out of date

There's definitely a new strategy being deployed to combat sexual assault on college campuses.

Alongside it, however, is a debate that is neither new nor interesting.  

On the contrary, it features a collection of stock characters who appear to have spent the last twenty years at a Rip van Winkle slumber party.

The alarm bell that woke them up was the Obama Administration's two-prong initiative to reduce campus sexual assaults.

The first part aims to pressure universities to more aggressively enforce their own disciplinary rules against sexual assault.

The second seeks to activate campus social norms. The goal of the White House’s “It’s on Us” campaign is to promote a shared sense of responsibility, particularly among male students, to intervene personally when they observe conditions that seem ripe for coercive sexual behavior.

The initiative reflects a sophisticated appreciation of what over a quarter century of evidence has shown about the limits of formal penalties in reducing the incidence of nonconsensual sex.

From the 1980s onward, numerous states enacted reforms eliminating elements of the traditional common law definition of rape that advocates (quite plausibly) thought were excusing men who disregard explicit, unambiguous verbal nonconsent (“No!”) to sex.

These reforms, empirical researchers have concluded, have had no observable impact on the incidence of rape (Clay-Warner & Burt 2005; Schulhofer 1998).

One likely reason is the tendency of people to conform their understanding of legal definitions of familiar crimes—robbery, burglary, etc.—to “prototypes” or socialized understandings of what those offenses consist in.  Change the legal definition, and people will still find the elements to be satisfied depending on the fit between the facts at hand and their lay prototype (Smith 1991).

A CCP study found exactly this effect for reform definitions of rape (Kahan 2010). 

In a mock jury experiment based on an actual rape prosecution, the likelihood subjects would vote to convict a male college student who had intercourse with a female student who he admitted was continually saying “no” was 58% among the large, nationally representative sample.

That probability did not vary significantly (in statistical or practical terms) regardless of whether the subjects were instructed to apply the traditional common law definition of rape (“sexual intercourse by force or threat of force without consent”); a “strict liability” alternative that eliminated the “reasonable mistake of fact” defense; or a reform standard, in use in multiple states, that eliminates both the "force or threat" element and the mistake-of-fact defense and in addition uses an "affirmative consent” standard (“words or overt actions indicating a freely given agreement to have sexual intercourse”).

Indeed, the likelihood that subjects instructed to apply one of these standards would convict didn’t differ meaningfully from the likelihood of conviction among subjects furnished no definition of rape at all.

Interestingly, if one looks at case law, the same effect seems to apply to judges.  When legislators reform one or another aspect of the common-law definition, courts typically reinterpret the remaining elements in a manner that constrains any expansion of the law's reach (Kahan 2010).

One could reasonably draw the conclusion that changing the rules won't work unless one first changes norms (Baker 1999).  I think that's what the Obama Administration believes.

The stock characters, in contrast, believe a lot of weird things wholly unconnected to the evidence on laws, norms, and sexual assault.

In a goofy NY Times Op-ed entitled “ ‘Yes’ Is Better Than ‘No,’ ” e.g., Gloria Steinem and Michael Kimmel incongruously call for replacing the “prevailing standard” of “no means no” with the “affirmative consent” standard that California has recently mandated its state universities use.

To start, "No means no" is not the "prevailing standard." It isn't the law anywhere.

In addition, an "affirmative consent" standard, which is already being used in various jurisdictions, does not require an "explicit 'yes'" in order to support a finding of "consent."

What sorts of words and behavior count as communicating “affirmative, conscious, and voluntary agreement to engage in sexual activity" are for the jury or administrative factfinder to decide.  

If such a decisionmaker believes that women sometimes say "no" when they "really" do intend to consent to sex, then that judge, juror, or college disciplinary board member necessarily accepts the view that verbally protesting women can communicate "yes" by other means, such as dressing provocatively, voluntarily accompanying the alleged assailant to a secluded space, engaging in consensual behavior short of intercourse, etc.

Because it doesn't genuinely constrain decisionmakers to treat "no" as "no" to any greater extent than it constrains men to do so, "affirmative consent," evidence shows, hasn't changed the outcomes in such cases.

In fact, the standard California is mandating for university disciplinary proceedings— “affirmative, conscious, and voluntary agreement to engage in sexual activity”—is not meaningfully different from the one that already exists in California penal law (“positive cooperation in act or attitude” conveyed “freely and voluntarily”). If there's a problem with the current standard, this one won't fix it.

The "affirmative consent" standard's failure to block reliance on the social understanding that "no sometimes means yes" is exactly the problem, according to some people who actually know what the law is and how it works. Their proposal, presented  by Susan Estrich in her landmark book Real Rape (1988), is that the law simply treat the uttering of the words "no" as  irrebuttable proof of lack of consent.  That would prevent decisionmakers from relying on social conventions implying that women can "voluntarily," "consciously," "freely," affirmatively" etc. communicate consent even when they say no.

The CCP study furnishes some support for thinking this sort of standard might well change something. In the mock juror experiment, the only standard that increased the probability that study participants would find the defendant guilty was Estrich's "no means no" standard.

It would be really useful to have some real-world evidence, too.  But again, far from being the "prevailing standard," "no means no" is not genuinely how any state defines lack of consent for sexual assault.

Are Kimmel & Steinem really arguing with those who propose such a standard? No; they simply aren't talking to anyone who actually knows what the law is or how it has worked for the last quarter century.

Same for those playing the other stock characters.

One of these is the deeply concerned law professor. Picking up the lines of a twenty-year-old script, he assures us that he knows how very very serious rape is. Nevertheless, he is quite worried that the “vagueness” of the affirmative consent standard will subject men who are behaving perfectly consistently with social convention to the risk of punishment. Requiring proof of something clear, like "force or threat of force," is essential, he says, to avoid such a perverse outcome.

Again, the reforms opposed by the angst-ridden professor have been in place in many jurisdictions for decades. They don’t change how juries and courts decide cases relative to the (equally vague!) traditional definition of the offense of rape or any other definition that is actually in use.  Because decisionmakers construe reform provisions consistent with the social prototype of rape that prevails in their communities, the deeply concerned law professor needn't worry that an affirmative consent standard will “unfairly surprise” a man who mistakenly infers that a woman who says "no" (over & over) actually means "yes!"

Then there is the “reactionary conservative” (a role still played by George Will).  He worries now (just as he did in 1993) that requiring affirmative consent is part of a plot to “increas[e] supervision by the regulatory state that progressivism celebrates.”

Hey-- grumpy old reactionary dude: just calm down. I'm pretty sure that if the "affirmative consent" standard were really a communist trojan horse, the Bolsheviks would have climbed out of it by now!

There’s also the character who has assumed the familiar role of “postmodern” super-liberated “vamp” feminist.  She remains concerned that the “unrealistic” and “vague” affirmative consent standard is going to actually restrict her autonomy by deterring liability-wary men from having sex with her.

She should calm down too—unless, of course, her goal is to get people to pay attention to her for reprising this trite role. Her right to have as much sex as she likes will not be affected in the slightest!

Indeed, those now playing the role of vamp, grumpy conservative, deeply disturbed law professor, and egalitarian rape-law reformer also seem to be unaware of the evidence on who does feel most threatened by rape law reform and why.

Despite the rhetoric one sometimes hears, the issue of whether “no” really should mean no for purposes of the law does not pit men against women.

The dispute is one between men and women who share one set of cultural outlooks and men and women who share another.

Looking at individual-level predictors, the CCP study found that members of the public who were relatively hierarchical in their cultural outlooks were substantially more likely than individuals who were culturally egalitarian to acquit of rape a man who admittedly disregarded the complainant’s repeated statement “no.” 

The disparity between these groups was unaffected by the legal standard the subjects were instructed to apply.

It was magnified, however, by gender: women with hierarchical values were the most likely to see the complainant as having consented despite her verbal protests.

The study hypothesized such a result based on other empirical work on the "token resistance" script. Based on survey and attitudinal data, this work suggested that individuals who subscribe to hierarchical norms attribute feigned resistance to a woman’s strategic intention to evade the negative reputational effects associated with defying injunctions against premarital or casual sex.

Although both male and female hierarchs resent this behavior, the latter are in fact the most aggrieved by it.  They understand the individual woman who resorts to “token resistance” as attempting to appropriate some portion of the status due to women who genuinely conform to hierarchical norms (Muehlenhard & Hollabaugh 1988; Muehlenhard & McCoy 1991; Wiederman 2005).

In the spirit of convergently validating these findings, the CCP mock juror experiment posited that women with hierarchical values—particularly older ones who already had acquired significant status—would be most predisposed to form perceptions of fact consistent with a legal judgment evincing social condemnation of women who resort to this form of strategic behavior.

That this proved to be so is perfectly consistent with the conventional wisdom among criminal defense attorneys, too.

Roy Black famously secured an acquittal for William Kennedy Smith through his adroit selection of a female juror who met this profile and who ended up playing a key role in steering the jury to a not guilty verdict in her role as jury foreperson.

Experienced defense lawyers know that when the college football player is on trial for date rape, the ideal juror isn’t Kobe Bryant; it’s Anita Bryant.

Women with these hierarchical outlooks have played a major role in political opposition to rape-law reform too.

These are Todd Akin’s constituents, “women who think that they have in some ways become less liberated in recent decades, not more; who think that easy abortion, easy birth control and a tawdry popular culture have degraded their stature, not elevated it.” Because of the egalitarian meanings rape reform conveys, they see it as part and parcel of an assault on the cultural norms that underwrite their status.

To tell you the truth, I’m not sure if the stock characters in the carnival debate triggered by the Obama Administration’s initiative are unaware of all this or in fact are simply happy to be a part of it.

I don’t see the Administration Initiative itself, however, as part of the cultural-politics date rape debate. It's the product of thinking that takes account of the experience of the last quarter century. 

Again, precisely because experience has shown that changing the wording of rules is not an effective means for reducing the incidence of acquaintance rape, many serious commentators have concluded that changing attitudes is (Baker 1999).

The Obama Administration's “It’s on Us” campaign bears the clear signature of this way of thinking. By exhorting male students, in particular, to accept responsibility to intervene when they sense conditions conducive to coercive sexual behavior, the campaign is intended to fill students’ social field of vision with vivid new prototypes to counter the ones that constrain the use of rules to regulate nonconsensual sex.

The voluntary assumption of the burden to protect others from harm can be expected to inspire a reciprocal willingness on the part of others to do the same.

Examples of such intervention, against the background of common understanding of why it's now taking place, will evince a shared understanding that a form of conduct that many likely regarded as "consistent with social convention" is in fact one that others now see as a source of harm.

And observing concerted action of this kind will recalibrate the calculations of those who might previously have believed that behavior manifestly out of keeping with common expectations would evade censure.

In a community with reformed norms of this sort, new rules might well accompany changes in behavior, not because they supply new instructions for decisionmakers but because they reflect internalized understandings of what forms of conduct manifestly violate the operative legal standard, whatever it happens to be.

Will this social-norm strategy work?

The Obama Administration Initiative will generate some useful evidence-- at least for those who actually pay attention to what happens when people try innovative measures to solve a difficult problem.

 

References

Baker, K. K. (1999). Sex, Rape, and Shame. Boston University Law Review, 79, 663.

Clay-Warner, J., & Burt, C. H. (2005). Rape Reporting After Reforms: Have Times Really Changed? Violence Against Women, 11(2), 150-176. doi: 10.1177/1077801204271566

Estrich, S. (1987). Real rape. Cambridge, Mass.: Harvard University Press.

Kahan, D. M. (2010). Culture, Cognition, and Consent: Who Perceives What, and Why, in 'Acquaintance Rape' Cases. University of Pennsylvania Law Review, 158, 729-812. 

Muehlenhard, C. L., & Hollabaugh, L. C. (1988). Do Women Sometimes Say No When They Mean Yes? The Prevalence and Correlates of Women's Token Resistance to Sex. Journal of Personality & Social Psychology, 54(5), 872-879.

Muehlenhard, C. L., & McCoy, M. L. (1991). Double Standard/Double Bind. Psychology of Women Quarterly, 15(3), 447-461.

Schulhofer, S. J. (1998). Unwanted Sex: The Culture of Intimidation and the Failure of Law. Cambridge, Mass.: Harvard University Press.

Smith, V. L. (1991). Prototypes in the Courtroom: Lay Representations of Legal Concepts. Journal of Personality & Social Psychology, 61, 857-872. 

Wiederman, M. W. (2005). The Gendered Nature of Sexual Scripts. The Family Journal, 13(4), 496-502. doi: 10.1177/1066480705278729

Saturday
Sep202014

Weekend update: Who sees what & why in acquaintance rape cases?

I've been pondering the resurgence of attention to & controversy over the standards used, in the law generally and in particular institutions such as universities, to assess complaints of sexual assault.  I'll post some reflections next week, and also a guest blog from a scholar who has done a very interesting study on how cultural norms might be constraining the effectiveness of investigations of sexual assault complaints in the military. But by way of introduction, here is an excerpt from Culture, Cognition, and Consent: Who Sees What and Why in Acquaintance Rape Cases, 158 U. Penn. L. Rev. 729, a paper from way back in 2010 that reported the results of an empirical study of how cultural norms shape perceptions of disputed facts in date rape cases and disputed empirical claims about the impact of competing legal standards for defining "consent."


Introduction

Does “no” always mean “no” to sex? More generally, what standards should the law use to evaluate whether a woman has genuinely consented to sexual intercourse or whether she could reasonably have been understood by a man to have done so? Or more basically still, how should the law define “rape”?  

These questions have been points of contention within and without the legal academy for over three decades. The dispute concerns not just the content of the law but also the nature of social norms and the interaction of law and norms. According to critics, the traditional and still dominant common law definition of rape—which requires proof of “force or threat of force” and which excuses a “reasonably mistaken” belief in consent—is founded on antiquated expectations of male sexual aggression and female submission.  Defenders of the common law reply that the traditional definition of rape sensibly accommodates contemporary practices and understandings—not only of men but of many women as well. The statement “no,” they argue, does not invariably mean “no” but rather sometimes means “yes” or at least “maybe.” Accordingly, making rape a strict-liability offense, or abolishing the need to show that the defendant used “force or threat of force,” would result in the conviction of nonculpable defendants, restrict the sexual autonomy of women as well as men, and likely provoke the refusal of prosecutors, judges, and juries to enforce the law.

This Article describes original, experimental research pertinent to the “no means . . . ?” debate. . . .

Conclusion

This Article has described a study aimed at investigating the contribution that cultural cognition makes to the controversy over how the law should respond to acquaintance rape. The results of the study suggest that common understandings of the nature of that dispute and what’s at stake in it are in need of substantial revision.

All of the major positions, the study found, misapprehend the source of the “no means ...?” debate. Disagreement over the significance the law should assign to the word “no” is not rooted in the self-serving perceptions of men conditioned to disregard women’s sexual autonomy. Nor is it a result of predictable misunderstanding incident to conventional indirection (or even misdirection) in the communication of consent to sex. Rather it is the product, primarily, of identity-protective cognition on the part of women (particularly older ones) who subscribe to a hierarchic cultural style. The status of these women is tied to their conformity to norms that forbid the indulgence of female sexual desire outside of roles supportive of, and subordinate to, appropriately credentialed men. From this perspective, token resistance is a strategy certain women who are insufficiently committed to these norms use to try to disguise their deviance. Because these women are understood to be misappropriating the status of women who are highly committed to hierarchical norms, the latter are highly motivated—more so even than hierarchical men—to see “no” as meaning “yes,” and to demand that the law respond in a way (acquittal in acquaintance-rape cases) that clearly communicates the morally deficient character of women who indulge inappropriate sexual desire.

This account also unsettles the major normative positions in the “no means . . . ?” debate. Because older, hierarchical women are the persons most likely to misattribute consent to a woman who says “no” and means it, abolishing the common law’s “force or threat of force” element and its “reasonable mistake” defense would not create tremendous jeopardy for convention-following men. Nevertheless, there is also little reason to believe that these reforms would enhance the sexual autonomy of women whose verbal resistance would otherwise be ignored. Cultural predispositions, the study found, exert such a powerful influence over perceptions of consent and other legally consequential facts that no change in the definition of rape is likely to affect results.

This conclusion, however, does not imply that the outcome of the “no means . . . ?” debate is of no moment. On the contrary, the role of cultural cognition helps to explain why the debate has persisted at such an intense level for so long. The powerful tendency of those on both sides to conform their perceptions of fact to their values suggests why thirty years worth of experience has not come close to forging consensus on what the consequences of reform truly are. Over the course of this period, the constancy of the cultural identities of those who plainly see one answer in the data and those who just as plainly see another has driven those on both sides to form their only shared perception: that the position the law takes will declare the winner in a battle for cultural predominance.

This particular battle, moreover, occupies only a single theater in a multifront war. Like the debate over rape-law reform, continuing disputes over the death penalty, gun control, and hate crimes all feature clashing empirical claims advanced by culturally polarized groups who see the law’s acceptance or rejection of their perceptions of how things work as a measure of where their group stands in society. Indeed, the same can be said about a wide range of environmental, public-health, economic, and national-security issues. It is impossible to formulate a satisfactory response to the debate over rape-law reform without engaging more generally the distinctive issues posed by illiberal status conflict over legally consequential facts. 

Friday
Sep192014

The more you know, the more you ... Climate change vs. GM foods

A correspondent writes:

I enjoyed your recent talk at Cornell University.  I was especially interested by your data that showed the more you know about climate change, the less you believe in it (if you are on the political right).   Do you have any similar data that shows how information about GMOs shapes opinion based on political identifiers?

Would love to explore any studies you may have on GMOs

My response:

I wish!

On this topic, I've done nothing more than collect some data showing that there are no political divisions over -- or any other interesting sources of systematic variation in -- the attitudes of the general public toward GMOs.  E.g., 

 Consider this (from nationally rep sample of 1500+ in summer 2013):

There's lots of research, though, showing that the vast majority of the public doesn't know anything of consequence about GM foods, a finding that, given efforts to rile them up, suggests a pretty ingrained lack of interest:

American consumers’ knowledge and awareness of GM foods are low. More than half (54%) say they know very little or nothing at all about genetically modified foods, and one in four (25%) say they have never heard of them.

Before introducing the idea of GM foods, the survey participants were asked simply “What information would you like to see on food labels that is not already on there?” In response, most said that no additional information was needed on food labels. Only 7% of respondents raised GM food labeling on their own. . . .

Only about a quarter (26%) of Americans realize that current regulations do not require GM products to be labeled.

Hallman, W., Cuite, C. & Morin, X. Public Perceptions of Labeling Genetically Modified Foods. Rutgers School of Environ. Sci. Working Paper 2013-2001. 

You should also take a look at this guest CCP post by Jason Delbourne, whom you might also want to contact; he discusses the invalidity of drawing inferences about public opinion from opinion surveys under such circumstances.

One additional thing:

As you imply, our research group has found that science literacy in general & climate science literacy specifically both increase polarization; they don't have any meaningful uniform effect in inducing "less belief" -- their effect is big, but depends on "what sort of person" one is.  Relevant papers are Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735 & Climate Science Communication and the Measurement Problem, Advances Pol. Psych. (in press).  

On "science literacy" generally, consider:

On "climate science literacy," consider:

On GM foods, data I've collected shows that partisans become mildly less concerned w/ GM food risks as their science comprehension (or science literacy or however one wants to refer to it) increases:

 

Thursday
Sep182014

Will a "knowing disbeliever" be the next President (or at least Republican nominee)?

Subjects participate anonymously in CCP studies and supply responses in a form that prevents their being identified.

Still, I have to wonder whether Gov. Jindal might not have been one of the intriguing "knowing disbelievers" featured in The Measurement Problem study.

According to Howard Fineman,

America needs a leader to bridge the widening gulf between faith and science, and Louisiana Gov. Bobby Jindal, a devout Roman Catholic with Ivy League-level science training, thinks he can be that person. . . .

On Tuesday, Jindal showed his strategy for straddling the politics of the divide -- but also the political risks of doing so -- during an hourlong Q&A with reporters at a Christian Science Monitor Breakfast, a traditional early stop on the presidential campaign circuit.

Like the experienced tennis player he is, Jindal repeatedly batted away questions about whether he believes the theory of evolution explains the existence of complex life forms on Earth. Pressed for his personal view, Jindal -- who earned a specialized biology degree in an elite pre-med program at Brown University -- declined to give one. He said only that "as a parent I want my children taught the best science." He didn’t say what that "science" was.

He conceded that human activity has something to do with climate change, but declined to agree that there is now widespread scientific consensus on the severity and urgency of the problem.

Sounds a lot like a harassed "dualist" to me.

In truth, I don't think it is very convincing to use cultural cognition & like dynamics, which are geared to making sense of the distribution of perceptions of risk and like facts in aggregate, to explain the beliefs of specific individuals, particularly politicians, whose reasoning and incentives for disclosing the same will be shaped by influences very different from those that affect ordinary members of the public.

But I think the spectacle of Jindal's predicament, including the fly-wing-plucking torment he & like-situated political figures on the right face in negotiating these issues in the media, definitely illustrates the discourse pathology diagnosed by The Measurement Problem: the relentless, pervasive pressure that forces reasoning individuals to make a choice between using their reason to know what's known by science and using it to enjoy their identities as members of particular cultural communities.

There is something deeply disturbing about the demand that people give an account of how they can be "knowing disbelievers," and something deeply flawed about public institutions, whether in education or in politics, that insist on interfering with this apparently widespread and unremarkable way for people to apportion what they know and believe across the different integrated identities that they occupy. 

Escaping from this sort of dysfunction is what good educators do in order to teach evolution to culturally diverse students.  It's also what regions like S.E. Florida are doing to promote constructive political engagement with climate change among culturally diverse citizens....

But in any case, the real issue with Jindal should be how he thinks we could possibly expect nasty foreign terrorists to be afraid of us if we had a leader who insists on being called "Bobby" because his childhood hero was the youngest brother in the Brady Bunch.

 

h/t to my friend David Burns.

Saturday
Sep132014

Weekend update: geoengineering and the expanding confabulation frontier of the "climate communication" debate

Despite its astonishingly long run in grounding just-so story telling about public risk perceptions and science communication (e.g., the Rasputin "bounded rationality" account of public apathy), the "climate debate" at some point has to get the benefit of an infusion of new material or else the players will ultimately die out from terminal boredom. 

That's the real potential, of course, of geoengineering.

Critics took the early lead in the "science communication confabulation game" by proclaiming with absurd overconfidence that the technology could never work: climate is a classic "chaotic system" and thus too unpredictable to admit of self-conscious management (where have I heard that before?), and even talking about it will lull the public into a narcotic state of complacency that will undermine the political will necessary to curb the selfish ethos of consumption that is the root of the problem.

But as anyone who has played the confabulation game knows, even players of modest imagination can effectively counter any move by concocting a story of equal (im)plausibility that supports the opposite conclusion.

So now we are being bombarded with a torrent of speculations on the positive effects geoengineering is likely to have on public engagement with climate science: that talk of it will scare people into taking mitigation seriously; that foreclosing its development will increase demand for adaptation alternatives that would be even more productive of action-dissipating false confidence; that implementation of geoengineering will avert the economic deadweight losses associated with mitigation, generating a social surplus that can be invested in new, lower-carbon energy sources, etc., etc.

At least some of the issues about how geoengineering research might affect public risk perceptions can be investigated empirically, of course.

In one study, CCP researchers found that exposing subjects (members of nationally representative US and English samples) to information about geoengineering offset motivated resistance among individuals culturally predisposed to reject evidence of climate change.  Accordingly, on the whole, individuals exposed to this information were more likely to credit evidence on the risks of human-caused climate change than ones exposed to information about mitigation strategies.

But just as the "knowledge deficit" theory doesn't explain the nature of public opinion on climate change, so "knowledge deficit" can't explain the nature of climate-change advocacy.  If furnishing advocates facts about the dynamics of science communication were sufficient to wean them off their self-defeating styles of engaging the public, it would have worked by now.  Evidence that doesn't suit their predispositions on how to advocate is simply ignored, and evidence-free claims that do suit them are embraced with unreasoning enthusiasm.  

But it's important to realize that the spectacle of the "climate debate" is just a game.

Actually dealing with climate change isn't.  All over the place, real-world decisionmakers--from local govts to insurance companies to utilities to investors to educators formal & informal--are making decisions in anticipation of climate change impacts and how to minimize them.  

Many of these actors are using the best available evidence, not just on climate change but on climate-science communication.  And they are ignoring the game that non-actors engaged in confabulatory story-telling are engaged in.

If this were not the case--if the only game in town were the one being played by those for whom science communication is just expressive politics by other means-- the scientific study of science communication would indeed be pointless.

Friday
Sep122014

How should science museums communicate climate science? (lecture summary & slides)

I had the great privilege of participating in a conference, held at the amazing Museum of Science in Boston, on how museums can engage the public in climate science.  Below are my remarks--as best as I can remember them a week later.  Slides here.

You are experts on the design of science-museum exhibits.

I am not. Like Dietram, I study the science of science communication with empirical methods. 

I share his view that there are things he and I and others have learned that are of great importance for the design of science museum exhibits on climate change.

If you ask me, though, I won’t be able to tell you what to do based on our work—because I am not an expert at designing museum exhibits. 

But you are.

So if in fact I am right to surmise that insights gleaned from the scientific study of science communication are relevant to design of climate science exhibits, you should be able to tell me what the implications of this work are for your craft.

I will thus share with you everything I know about climate science communication.

I’ve reduced it all to one sentence (albeit one with a semi-colon):

What ordinary members of the public “believe” about climate change doesn’t reflect what they know; it expresses who they are.

The research on which this conclusion rests actually originates in the study of public opinion on evolution.

One thing such research shows is that there is in fact no correlation whatsoever between what people say they believe about evolution and what they know about it.  Those who say they “believe” in evolution are no more or less likely to understand the elements of the modern synthesis—random mutation, genetic variance, and natural selection—than those who say they “don’t.” 

Indeed, neither is likely to be able to give a sufficiently cogent account of these concepts to pass a high school biology test.

Another thing scholars have learned from studying public opinion on evolution is that what one “believes” about it has no relationship to how much one knows about science generally.

I’ll show you some evidence on that.  It consists in the results of a science literacy test that I administered to a large nationally representative sample.

Like a good knowledge assessment should, this science comprehension instrument consisted of a set of questions that varied in difficulty.

Some, like “Electrons are smaller than atoms—true or false?,” were relatively easy: even an individual whose score placed him or her at the mean comprehension level would have had about a 70% chance of getting that one right.

Other questions were harder: “Which gas makes up most of the Earth's atmosphere? Hydrogen, Nitrogen, Carbon Dioxide, Oxygen?”  Someone of mean science comprehension would have only about a 25% chance of getting that one.

If one looks at the item-response profile for “Human beings, as we know them today, developed from earlier species of animals—true or false?,” an item from the NSF’s Science Indicators battery, we see that it’s difficult to characterize it as either easy or hard.  Someone at the mean level of science comprehension has about a 55% chance of getting this one correct. But the probability of getting it right isn’t much different for respondents whose science comprehension levels are significantly lower or significantly higher than average.

The reason is that the NSF Indicator Evolution item isn’t a valid measure of science comprehension for a general-population sample of test takers. 

Its item-response profile looks sort of like what one might expect of a valid measure when we examine the answers of those members of the population who are below average in religiosity (as measured by frequency of prayer, frequency of church attendance, and self-reported importance of religion): that is, the likelihood of getting it right slopes upward as science comprehension goes up.

But for respondents who are above average in religiosity, there is no relationship whatsoever between their response to the Evolution item and their science comprehension level.

In them, it simply isn’t measuring the same sort of capacity that the other items on the assessment are measuring. What it’s measuring, instead, is their religious self-identity, which would be denigrated by expressing a “belief in” evolution. 

One way to figure this out, researchers have learned, is to change the wording of the Evolution item: if one adds to it the simple introductory clause, “According to the theory of evolution,” then the probability of a correct response turns out to be roughly the same in relation to varying levels of science comprehension among both religious and nonreligious respondents.

The addition of those words frees a religious respondent from having to choose between expressing who she is and revealing what she knows. It turns out she knows just as much—or just as little, really, since, as I said, responses to this item, no matter how they are worded, give us zero information on what the respondent understands about the theory of evolution.
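For the statistically curious, here is a minimal sketch of what these item-response profiles look like, written in Python with made-up item parameters (my own illustration, chosen only to mimic the percentages described above, not the actual study estimates). The flat line for highly religious respondents is what an invalid item looks like; the upward slope is what a valid one looks like.

```python
import numpy as np

def p_correct(theta, a, b):
    """Two-parameter logistic IRT curve: probability of a correct response,
    given science comprehension theta (in SD units), discrimination a,
    and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

thetas = np.linspace(-2, 2, 9)  # science comprehension, -2 to +2 SDs

# Hypothetical parameters, tuned only to echo the patterns described above
easy = p_correct(thetas, a=1.5, b=-0.55)  # "electrons" item: ~70% correct at the mean
hard = p_correct(thetas, a=1.5, b=0.75)   # "nitrogen" item: ~25% correct at the mean
evo_low_relig = p_correct(thetas, a=1.5, b=-0.25)  # slopes upward, like a valid item
evo_high_relig = np.full_like(thetas, 0.35)        # flat: answer unrelated to comprehension

for row in zip(thetas, easy, hard, evo_low_relig, evo_high_relig):
    print("theta={:+.1f}  easy={:.2f}  hard={:.2f}  "
          "evo(low relig)={:.2f}  evo(high relig)={:.2f}".format(*row))
```

Rewording the item, on this picture, amounts to replacing the flat line with a sloping one for the religious respondents too.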

But good high school teachers, empirical research shows, can impart such an understanding just as readily in a student who says she “doesn’t believe in” evolution as those teachers can in a student who says he “does.” But the student who said she didn’t “believe in” evolution at the outset will not say she does when the course is over.

Her skillful teacher taught her what science knows; the teacher didn’t make her into someone else.

Indeed, insisting that students profess their “belief in” evolution, researchers warn, is the one thing guaranteed to prevent the religiously inclined student from forming a genuine comprehension of how evolution actually works.  If one forces a reasoning individual to elect between knowing what is known by science and being who she is, she will choose the latter.

The teacher who genuinely wants to impart understanding, then, creates a learning environment that disentangles information from identity, so that no one is put in that position.

What researchers have learned from empirical study of the teaching of evolution can be extended to the communication of climate science.

To start, just as it would be a mistake (is a mistake made over and over by people who ought to know better) to treat the fraction of the population who says they “disbelieve in” evolution as a measure of science comprehension in our society, so it is a mistake to treat the fraction who say they “disbelieve” in human-caused climate change as such a measure.

My collaborators and I have examined how people’s beliefs about climate change relate to their science comprehension, too.  Actually, there is a connection: as culturally diverse individuals’ scientific knowledge and reasoning proficiency improve, they don’t converge in their views about the impact of human activity on global temperatures.  Instead they become even more culturally polarized. 

Because what one “believes” about climate change is now widely understood to signify one’s membership in and commitment to one or another cultural group, and because their standing in these groups are important to people, individuals use all manner of critical reasoning ability, experiments show, to form and persist in beliefs consistent with their allegiances.

But that doesn’t necessarily mean that individuals who belong to opposing cultural groups differ in their comprehension of climate science.  This can be shown by examining how individuals of diverse outlooks do on a valid climate science comprehension assessment.

To design such an instrument, I followed the lead of the researchers who have studied the relationship between “belief in” evolution and science comprehension. They’ve established that one can measure what culturally diverse people understand about evolution with items that unconfound or disentangle identity and knowledge.  Like the evolution items that enable respondents to show what they know without making affirmations that denigrate who they are, the items in my climate literacy assessment focus on respondents’ understanding of the prevailing view among climate scientists and not on respondents' acceptance or rejection of climate change “positions” known to be highly correlated with cultural and political outlooks.

Some of these turn out to be very easy. Encouragingly, even the test-taker of mean climate-science comprehension is highly likely (80%) to recognize that adding CO2 to the atmosphere increases the earth’s temperature.

Others, however, turn out to be surprisingly hard: there is only a 30% chance that someone of average climate-science comprehension will know that the CO2 emissions associated with burning fossil fuels have not been shown by scientists to reduce photosynthesis in plants.

Obviously, someone who gets that  CO2 is a “greenhouse gas” but who believes that human emissions of it are toxic to the things that grow in greenhouses can’t be said to comprehend much about the mechanisms of climate science.

Nevertheless, a decent fraction of the test takers from a general population sample turned out to have a very accurate impression of climate scientists’ current best understandings of the mechanisms and consequences of human-caused global warming.  Not so surprisingly, these were the respondents who scored the highest on a general science comprehension assessment.

Moreover, there was no meaningful correlation between these individuals’ scores and their political outlooks.  “Conservative Republicans” who displayed a high level of general science comprehension and “liberal Democrats” who did so both scored highly on the climate assessment test.

Nevertheless, those who displayed the highest scores on the test were not more likely to say they “believed in” human-caused global warming than those who scored the lowest. On the contrary, those who displayed the greatest comprehension of science’s best prevailing understandings of climate change were the most politically polarized on whether human activity is causing global temperatures to rise.

In other words, what ordinary members of the public “believe” about climate change, like what they “believe” about evolution, doesn’t reflect what they know; it expresses who they are.

The reason our society is politically divided on climate change, then, isn’t that citizens have different understandings of what climate scientists think.  It is that our political discourse, like the typical public opinion poll survey, frames the “climate change question” in a manner that forces them to choose between expressing who they are, culturally speaking, and revealing and acting on what they know about what is known.

This is changing, at least in some parts of the country.  Despite being as polarized as the rest of the country, for example, the residents of Southeast Florida have, through a four-county compact, converged on a comprehensive “Climate Action Plan,” consisting of 100 distinct adaptation and mitigation measures.

People in Florida know a lot about climate.  They’ve had to know a lot, and for a long time, in order to thrive in their environment.

Like the good high-school teachers who have figured out how to create a classroom environment in which curious and reflective students don’t have to choose between knowing what’s known about the natural history of humans and being who they are,  the local leaders who oversee the Southeast Florida Climate Compact have figured out how to create a political environment in which free and reasoning citizens aren’t forced to choose between using what they know and being who they are as members of culturally diverse communities.

Now what about museums?  How should they communicate climate science?

Well, I’ve told you all I know about climate science communication: that what ordinary members of the public “believe” about climate change doesn’t reflect what they know; it expresses who they are.

I’ve shown you, too, some models of how science-communication professionals in education and in politics have used evidence-based practice to disentangle facts from the antagonistic cultural meanings that inhibit free and reasoning citizens from converging on what is collectively known.

I think that’s what you have to do, too.

Using your professional expertise, you have already made museums a place where curious, reflective people of diverse outlooks go to satisfy their appetite to experience the delight and awe of apprehending what we have come to know by employing science’s signature methods of discovery.  

You now need to assure that the museum remains a place, despite the polluted state of our science communication environment generally, where those same people can go to satisfy their appetite to participate in what science has taught us and is continuing to teach us about the workings of our climate and the impact of human activity upon it.

You need, in short, to be sure that nothing prevents them from recognizing that the museum is a place where they don’t have to choose between enjoying that experience and being who they are.

How can you do that?

I don’t know.  Because I am not an expert in the design of science museum exhibits.

But you are—and I am confident that if you draw on your professional judgment and experience, enriched with empirical evidence aimed at testing and refining your own hypotheses, you will be able to tell me.  

 I have a strong hunch, too, that what you will have to say will be something other science-communication professionals will be able to use to promote public engagement with climate science in their domains, too.

 

Sunday
Sep072014

Weekend update: Another helping of evidence on what "believers" & "disbelievers" do & don't "know" about climate science

Data collected in ongoing work to probe, refine, extend, make sense of, demolish the "ordinary climate science intelligence" assessment featured in The Measurement Problem paper.

You tell me what it means ...

Saturday
Sep062014

Weekend update: Some research on climate literacy to check out

I have a bunch of critical administrative tasks that are due/overdue.  Fortunately, I discovered this special "climate literacy" issue of the Journal of Geoscience Education.  It'll make for a weekend's worth of great reading.

Thinking that others might be in need of the same benefit, I decided to post notice of the issue forthwith.

Reader reports on one or another of the articles are certainly welcome.

Friday
Sep052014

Teaching how to teach Bayes's Theorem (& covariance recognition) -- in less than 2 blog posts!

[Photo: Adam Molnar, in front of a graphic heuristic he developed to teach (delighted) elementary school children how to solve the Riemann hypothesis]

The 14.7 billion regular readers of this blog know that one of my surefire tricks for securing genuine edification for them is for me to hold myself forward as actually knowing something of importance in order to lure/provoke an actual expert into intervening to set the record straight.  It worked again!  After reading my post Conditional probability is hard -- but teaching it *shouldn't* be!, Adam Molnar, a statistician and former college stats instructor who is currently completing his doctoral studies in mathematics education at the University of Georgia, was moved to compose this great guide on teaching conditional probability & covariance detection. Score!

 

Conditional Probability: The Teaching Challenge 

Adam Molnar

A few days ago, Dan wrote a post presenting the results on how members of a 2000-person general population sample did on two problems, named BAYES and COVARY.

Dan posed the following questions: 

  1. "Which"--COVARY or BAYES--"is more difficult?"
  2. "Which is easier to teach someone to do correctly?" and
  3. "How can it be that only 3% of a sample as well educated and intelligent as the one [he] tested"--over half had a college or post graduate dagree--"can do a conditional probability problem as simple as" he understood BAYES to be. "Doesn't that mean," he asked "that too many math teachers are failing to use the empirical knowledge that has been developed by great education researchers & teachers?"

[Check out this cool poster summary of Molnar study results]

As it turns out, these are questions that figure in my own research on effective math instruction. As part of my dissertation, I conducted interviews of 25 US high school math teachers. In the interviews, I included versions of both COVARY and BAYES. My version of COVARY described a different hypothetical experiment but used the same numbers as Dan's, while BAYES had slightly different numbers (I used the version from Bar-Hillel 1980).

So with this background, I'll offer my responses to Dan's questions.

Which is more difficult?

According to actual results, Bayes by far.

Dan reports that 55% of the people in his sample got COVARY correct, compared to 3% for BAYES.

Other studies have shown a similar gap.

In one study Dan and some collaborators conducted, 41% of a nationally diverse sample gave the correct response to a similarly constructed covariance problem. Eighty percent of the members of my math-teacher sample computed the correct response.

In contrast, on conditional-probability problems similar to BAYES, samples rarely reach double digits. I got 1 correct response out of 25--4%--in my math-teacher sample. Bar-Hillel (1980) asked Israeli students on the college entrance exam and had 6% correct. Only 8% of doctors got a similar problem right (Gigerenzer, 2002).

Teaching Covary

Solving COVARY, like many problems, involves three critical steps.

Step 1 is reading comprehension.

As worded, COVARY is not a long problem, but it includes a few moderately hard words like "experiment" and "effectiveness." These phrases may not challenge the "14.6 billion" readers of this blog, but they can challenge English language learners or students with limited reading skills. Even for people who know all the words, one might misread the problem.

Step 2 is recognition. In this problem, a solver needs to recognize that "more likely to survive" calls for comparing probabilities or ratios; likelihood involves computation, not just comparing counts. Comparing counts across a row (223 against 75) or a column (223 against 107) will lead to the wrong answer.

Taking this step involves recognizing a term, "more likely to survive". Learning the term requires work, but the US education system includes this type of problem. In the Common Core adopted by most states, standard 8.SP.A.4 states "Construct and interpret a two-way table summarizing data on two categorical variables collected from the same subjects. Use relative frequencies calculated for rows or columns to describe possible association between the two variables." High school standard HSS.CP.A.4 repeats the tables and adds independence. Although students may not study under the Common Core, and adults had older curricula, almost everyone has seen 2 by 2 tables. Therefore, teaching the term "more likely to survive" is not a big step.

Step 3 is computation.

Dan suggested likelihood ratios, but almost all teachers will work with probabilities (relative frequencies) as mentioned in the standard. Problem solvers need to create two numbers and compare them. The basic "classical" way to create a probability is successes over total. The classical definition works as long as solvers remember to use row totals (298 and 128), not the grand total of 426. People will make errors, but as mentioned previously, US people have some familiarity with 2 by 2 tables. Instruction is required, but the steps do not include any brand new techniques.
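To make Step 3 concrete, here is a minimal sketch of the comparison in Python. The group labels are my own gloss (the problem text isn't reproduced in this post), but the counts follow from the figures above: row totals 298 and 128 (grand total 426), with 223 and 107 survivors respectively.

```python
# Step 3 of COVARY: compare conditional probabilities, not raw counts.
survived_a, total_a = 223, 298   # group A: 223 survived, 75 did not
survived_b, total_b = 107, 128   # group B: 107 survived, 21 did not

p_a = survived_a / total_a       # ~0.748
p_b = survived_b / total_b       # ~0.836

print(f"P(survive | A) = {p_a:.3f}")
print(f"P(survive | B) = {p_b:.3f}")   # higher, despite the smaller raw counts

# The errors described above, for contrast: comparing raw counts
# (223 vs. 75, or 223 vs. 107) or dividing by the grand total
# (223/426 vs. 107/426) all lead away from this comparison.
```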

Of the five errors in my sample, one came from misreading (Step 1), one came from recognition (Step 2), comparing 223 against 107, and three came from computation (Step 3), using the grand total of 426 as the denominator instead of 298 and 128.

Teaching Bayes

For BAYES, a conditional-probability problem, reading comprehension (Step 1) is more difficult than for COVARY. COVARY provides a table, while BAYES has only text. Errors will occur when transferring numbers from the sentences in the problem. Even very smart people make occasional transfer errors.

The best-performing teacher in my interviews made only one mistake--a transfer, choosing the wrong number from earlier in a problem despite verbally telling me the correct process.

As an educator, I would like to try a version of COVARY where the numbers appeared in text without the table, and see how often people correctly built tables or other problem solving structures.

Step 2, recognition, is easier. The problem explicitly asks for "chance (or likelihood)" which means probability to most people. Additionally, all numbers in the problem are expressed as percentages. These suggestions lead most people to offer some percentage or decimal number between 0 and 1. All the teachers in my study gave a number in that range.

Step 3, computation, is much, much harder.

As demonstrated in the recent sample and other research work including Bar-Hillel (1980), many people will just select a number from the problem, either the rate of correct identification or the base rate. Both values are between 0 and 1, inside the range of valid probability values, thus not triggering definitional discomfort. Neither value is correct, of course, but I am not surprised by these results. A correct solution path generally requires training.

Interestingly, the set of possible solution paths is much larger in Bayes. Covary had probabilities and ratios; Bayes has at least eight approaches. Some options might be familiar to US adults, but none are computationally well known. In the list below, I describe each technique, comment on level of familiarity, and mention computational difficulty.

  • Venn Diagrams: A majority of adults could recognize a Venn diagram, because they are useful in logic and set theory. Mathematicians like them. Although Venn diagrams are not specified in the Common Core, they have appeared in many past math classes and I suspect they will remain in schools. I do not believe a majority of adults could correctly compute probabilities with a Venn diagram, however. Doing so requires knowing conditional probability and multiplicative independence rules, plus properly accounting for the overlapping And event. Knowing how to solve the Bayes problem with a Venn diagram almost always means one knows enough to use at least one other technique on this list, such as probability tables or Bayes Theorem. Those techniques are more direct and often simpler.
  • Bayes's Theorem: This approach goes by several names, including formula, law, and rule (and Bayes might end with 's or ' or no apostrophe at all). If you took college probability or a mathy statistics course, you likely saw this approach. When I asked statisticians in the UGA statistics education research group to work this problem, they generally used Bayes' rule. This is not a good teaching technique, however, because the computation is challenging. It requires solid knowledge of conditional probability and remembering a moderately difficult formula. Other approaches are less demanding. 
  • Bayesian updating: A more descriptive name for the approach Dan wrote about, where posterior odds = prior odds × likelihood ratio. This is even rarer than the formula version of Bayes' rule; I first saw it in my master's program. Updating is computationally easier than the formula, but I would not expect untrained people to discover it independently. 
  • Probability-based tables: Many teachers attempted this method, with some reaching a usable representation (though none correctly selected numbers from the table). This method requires setting up table columns and rows, and then using independence to multiply probabilities and fill entries. After that, the solver needs to combine values from two boxes (True Blue and False Blue) to find the total chance that Wally perceived a blue bus, and then find the true-blue probability by dividing True Blue / (True Blue + False Blue). Computation requires table manipulation, understanding independence, and knowing which numbers to divide. Choosing the correct boxes stumped the teachers most often; they tended to answer with just the value of True Blue, 9% in this version.

    This approach was popular because it involves tables and probabilities, ideas teachers and students have seen. Independence is also included in the Common Core, so it's not too far a stretch. The problem is difficulty: building the table requires multiplying probabilities, and finishing requires combining boxes in a specific way. Other approaches are easier. 
  • Probability-based trees: The excellent British mathematics teaching site NRICH has an introduction. AP Statistics students frequently learn tree diagrams. Some teachers used them, including the one teacher who got the explanation completely correct. Several other teachers made the same mistake as with probability tables; they built the representation, but only gave the True Blue probability and neglected the False Blue possibility. 

    Although trees are mentioned briefly in the Common Core as one part of one Grade 7 standard, I don't expect trees to become a popular solution. Because they were uncommon in the past, few (but not zero) non-teacher adults would attempt this approach. 
  • Grid representations: Dan cited a 2011 paper by Spiegelhalter, Pearson, and Short, but the idea is older; a reference at Illuminations, the NCTM's US website for math teaching resources, includes a 1994 citation. The idea is to physically color boxes representing possibilities, which lets one find the answer by counting boxes. At Georgia, we've successfully taught grid shading in our class for prospective math teachers. It works well and it's not very difficult. One study showed that 75% of pictorial users found the correct response (Cosmides & Tooby, 1996). Unfortunately, it's never been part of any standards I know of. It also requires numbers expressible out of 100, which works in this problem but not in all cases. 
  • Frequency-based tables: In the 1990s, psychological researchers started publishing about a major realization: frequency counts are more understandable than probabilities. Classic papers include Gigerenzer (1991) and Cosmides & Tooby (1996). The basic idea is to convert probabilities to frequencies by starting with a large grand total, like 1,000 or 100,000, and then multiplying probabilities to find counts. The larger starting point makes it likely that all computations result in integers, which avoids one limitation of the grid representation. 

    After scaling, the solver can form a table. In this problem, getting from the table to the correct answer still requires work, as one must know to divide True Blue / (True Blue + False Blue) as in the probability-based table. I know one college textbook with a "hypothetical hundred thousand table", Mind on Statistics by Utts and Heckard, which has included the idea since at least 2003. There are many college statistics textbooks, though, and frequency-based tables do not appear in US school standards. They are not commonly known. 
  • Frequency-based trees: Because tables don't make it obvious which boxes to select, a tree-based approach can combine the natural intuition of counts and the visual representation of trees. This increases teaching time because students are less familiar with trees. In exchange, the problem becomes easier to solve. This might be the most effective approach to teach, but it's very new. Great Britain has included frequency trees and tables in the 2015 version of GCSE probability standards for all Year 10 and 11 students, but they have not appeared in schools on this side of the pond.
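To make a few of these paths concrete, here is a minimal sketch in Python. The post does not restate BAYES's numbers, so the figures below are assumptions chosen only to be consistent with the 9% True Blue value mentioned above (a 10% base rate of blue buses and a witness who reports colors correctly 90% of the time); substitute the problem's actual values.

```python
# A hedged sketch of three solution paths from the list above.
# ASSUMED numbers, chosen only to match the 9% "True Blue" box
# mentioned earlier -- not necessarily the actual values in BAYES.

p_blue = 0.10      # assumed base rate of blue buses
p_correct = 0.90   # assumed witness accuracy, for either color

# 1. Bayes' theorem: P(blue | says "blue")
true_blue = p_blue * p_correct                # 0.09 -- the "True Blue" box
false_blue = (1 - p_blue) * (1 - p_correct)   # 0.09 -- the "False Blue" box
print(true_blue / (true_blue + false_blue))   # 0.5

# 2. Bayesian updating: posterior odds = prior odds x likelihood ratio
prior_odds = p_blue / (1 - p_blue)               # 1/9
likelihood_ratio = p_correct / (1 - p_correct)   # 9
posterior_odds = prior_odds * likelihood_ratio   # 1 (even odds)
print(posterior_odds / (1 + posterior_odds))     # 0.5

# 3. Frequency-based table: scale to 100,000 hypothetical buses
n = 100_000
blue, other = n * p_blue, n * (1 - p_blue)        # 10,000 and 90,000
says_blue_true = blue * p_correct                 # 9,000 (True Blue)
says_blue_false = other * (1 - p_correct)         # 9,000 (False Blue)
print(says_blue_true / (says_blue_true + says_blue_false))  # 0.5
```

Whatever the actual numbers, note the pattern behind the errors described above: the rate of correct identification and the base rate both sit right in the problem text, while the correct answer requires combining the True Blue and False Blue boxes, exactly the step that stumped the teachers.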

The Teaching Challenge

Neither COVARY nor BAYES is easy, because both require expertise beyond what was previously taught in K-12 schools.

In the current US system, looking at Common Core and other standards, COVARY will be easier to teach. COVARY requires less additional information because it extends easily from two ideas already taught: count tables and classical relative-frequency probability. It fits very well inside the Common Core standards on conditional probability.

BAYES has many possible approaches. Some, like grid representations and frequency trees, are less challenging than COVARY, but they are relatively new in academic terms. Many were developed outside the US, and none extends easily from current US standards. I'm not even sure the sort of conditional-probability problem reflected in BAYES should be covered under the Common Core (unlike the new British GCSE standards), even though I believe decision making under conditional uncertainty is a vital quantitative-literacy topic. Most teachers (and I) believe it falls under AP Statistics.

Furthermore, educational changes take a lot of time. Hypothetically (lawyers like hypotheticals, right?), let's say that today we implement a national requirement for conditional probability. States would have to add it to their standards documents. Testing companies would need to write questions. Textbook publishers would have to create new materials. Schools would have to procure the new materials. Math teachers would need training; they're smart enough to handle the problems but don't yet have the experience.

The UK published new guidelines in November 2013 for teaching in September 2015 and exams in June 2017. In the US? 2020 would be a reasonable target.

Right now, Bayes-style conditional probability is unfamiliar to almost all adults.

In Dan's sample, over half had a college degree. That's nice, but it doesn't imply much exposure to conditional probability.

The CBMS (Conference Board of the Mathematical Sciences) reports on college mathematics and statistics enrollments. A majority of college grads never take statistics. In 2010, there were about 500,000 enrollments in college statistics classes, plus around 100,000 AP Statistics test takers, but there were about 15,000,000 college students. (For comparison, there were 3,900,000 mathematics course enrollments.) Of the minority who take any statistics, most take only one semester. Conditional probability is not a substantial part of most introductory courses; perhaps there would be 30 minutes on Bayes' rule.

Putting this together: 600,000 statistics enrollments against 15,000,000 students is about 4% in any given year, and even summed over a multi-year college career, with only a sliver of those courses treating conditional probability seriously, less than 10% of 2010 college students covered conditional probability. Past numbers would not be higher, since probability and statistics have recently gained in popularity.
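As a back-of-envelope check, here is that arithmetic as a sketch; the career window and the coverage fraction are my assumptions, not CBMS figures.

```python
# Back-of-envelope check of the estimate above. The enrollment figures
# come from the post's CBMS numbers; the 4-year career window and the
# conditional-probability coverage fraction are assumptions.

stats_enrollments = 500_000 + 100_000   # annual college stats + AP test takers
college_students = 15_000_000

per_year = stats_enrollments / college_students   # ~0.04 in any given year
ever_take_stats = per_year * 4                    # ~0.16 over an assumed 4-year career
covers_conditional = ever_take_stats * 0.5        # assumed: at most half see the topic
print(per_year, ever_take_stats, covers_conditional)  # well under 10% either way
```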

I think it's fair to say that less than 5% of the US adult population has ever covered the topic--which makes that 3% correct response rate sound about right.

In an earlier blog post, Dan wrote "If you don't get Bayes, it's not your fault. It's the fault of whoever was using it to communicate an idea to you." Yes, there are better and worse ways to solve Bayes-style problems. Teachers can and should use more effective approaches. That's what I research and try to help implement. But for the US adult population, the problem is not poor communication; rather, it's never been communicated at all.

References

Bar-Hillel, M. (1980). The base-rate fallacy in probability judgments. Acta Psychologica, 44, 211-233.

Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all?: Rethinking some conclusions of the literature on judgment under uncertainty. Cognition, 58, 1-73.

Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond "heuristics and biases". In W. Stroebe & M. Hewstone (Eds.), European Review of Social Psychology (Vol. 2, pp. 83-115). Chichester: Wiley.

Gigerenzer, G. (2002). Calculated risks: How to know when numbers deceive you. New York: Simon & Schuster.

Spiegelhalter, D., Pearson, M., and Short, I. (2011). Visualizing Uncertainty About the Future. Science 333, 1393-1400.

Utts, J., & Heckard, R. (2012). Mind on Statistics, 4th edition. Independence, KY: Cengage Learning.

 

Thursday
Sep042014

Political psychology according to Krugman: A degenerative research programme if ever I saw one ... 

As I said, I no longer watch the show "Paul Krugman's Magic Motivated Reasoning Mirror" but do pay attention when a reflective person who still does tells me that I've missed something important.  Stats legend Andrew Gelman is definitely in that category.  He thinks the latest episode of KMMRM can't readily be "dismissed."  

So I've taken a close look.  And I just disagree.

My reasons can be efficiently conveyed by this simple reconstruction of the tortured path of illogic down which the show has led its viewers:

Krugman:  A ha! Social scientists have just discovered something I knew all along: on empirical policy issues, people fit the evidence to their political predispositions.  It’s blindingly obvious that this is why conservatives disagree with me!  And by the way, I’ve made another important related discovery about mass public opinion: the tribalist disposition of conservatives explains why they are less likely to believe in evolution.

Klein: Actually, empirical evidence shows that the tendency to fit the evidence to one's political predispositions is ubiquitous—symmetric, even: people with left-leaning proclivities do it just as readily as people with right-leaning ones.  Indeed, the more proficient people are at the sort of reasoning required to make sense of empirical evidence, the more pronounced this awful tendency is.  Therefore, people who agree with you are as likely to be displaying this pernicious tendency--motivated reasoning--as those who disagree.  This is very dispiriting, I have to say.

Empiricist: He's right.  And by the way, your claims about political outlooks and "belief in" evolution are also inconsistent with actual data.

Krugman: Well, that’s all very interesting, but your empirical evidence doesn’t ring true to my lived experience; therefore it is not true. Republicans are obviously more spectacularly wrong.  Just look around you, for crying out loud.

Klein:  Hey, I see it, too, now that you point it out! Republicans are more spectacularly wrong than Democrats!  We’ve been told by empiricists that individual Republicans and individual Democrats reason in the same way.  Therefore, it must be that the collective entity “Republican Party” is more prone to defective reasoning than the collective entity “Democratic Party.”

Methodological individualist: Look: If you believe Republicans/conservatives don’t reason as well as Democrats/liberals, then there’s only one way to test that claim: to examine how the individuals who say “I’m a ‘liberal’ ” and the ones who say “I’m a ‘conservative’ ” actually reason.  If the evidence says “the same,” then invoking collective entities that exist independently of the individuals they comprise and that have their own “reasoning capacity” is to jump out of the empirical frying pan and into the pseudoscience fire.  I’m not going with you.

Krugman: What I said—and have clearly been saying all along—is that the incidence of delusional reasoning is higher among conservative elites than among liberal elites. I never said anything about mass political opinion!  Your misunderstanding of what I clearly said multiple times is proof of what I said at the outset: the reason non-liberals (conservatives, centrists, et al.) all disagree with me is that they are suffering from motivated reasoning.

Bored observer: What is the point of talking with you?  If you make a claim that is shown to be empirically false, you just advance a new claim for which you have no evidence.  It’s obvious that no matter what the evidence says, you will continue to say that the reason anyone disagrees with you is that they are stupid and biased.  I’m turning the channel.

Gelman: Hold on!  He’s now advanced an empirical claim for which "data are not directly available."  Because it therefore cannot be evaluated, his claim can't simply be dismissed!

Two people Gelman knows know their shit:  Yes it can.  When people react to contrary empirical evidence by resorting to the metaphysics of supra-individual entities or by invoking new, auxiliary hypotheses that themselves defy empirical testing, they are doing pseudoscience, not genuine empiricism.  The path they are on is a dead end.