Beth Garrett, President of Cornell University, died last week.
Being President of Cornell, a great university with a passionate sense of curiosity as boundless as hers, was the latest in the string of amazing things that she did in her professional life.
I met Beth when I started my clerkship for Justice Thurgood Marshall. She was ending hers, and for a couple of weeks of overlap she helped me to try to make sense of what the job would entail.
For sure she imparted some useful "how to's."
But the most important thing she conveyed was her attitude: her happy determination to figure out whatever novel, complex thing had to be understood to do the job right; her unself-conscious confidence that she could; and her excitement over the opportunity to do so.
The lesson continued when we were "baby professors" starting out at the University of Chicago Law School. Those same virtues -- the resolve to figure out whatever it was she didn't already know but needed to in order to make sense of something that perplexed her; the same confidence that she could learn whatever she had to to do that; and the same pleasure at the prospect of undertaking such a task -- characterized her style as a scholar.
These same attributes contributed, of course, to her success in mastering the new challenges she took on thereafter in her career as a university administrator, first as Provost at the University of Southern California and then as President of Cornell.
But those opportunities also came her way because of all the other excellent qualities of character she possessed. Among these was her incisive apprehension of how scholarly communities could become the very best versions of themselves, and her capacity to inspire their members to reciprocate the efforts she tirelessly (but always happily, cheerfully!) made to help them realize that aspiration.
Every person who was fortunate enough to have had some connection to Beth must now endure a disorienting sense of sadness and shock, bewilderment and resentment, at her premature death.
But after the grief retires to its proper place in the registry of their emotional-life experiences, every one of those persons will enjoy for the rest of their lives the benefit of being able to summon the inspiring and joyful example of Beth Garrett and to use their memories of her to help guide and motivate them to be the best versions of themselves.
Weekend update: Another lesson from SE Fla Climate Political Science, this one on "banning 'climate change' " from political discourse
From something I'm working on ...
The most important, and most surprising, insight we have gleaned from studying climate science communication in Southeast Florida is that there is not one “climate change” in that region but two.
The first is the “climate change” in relation to which ordinary citizens “take positions” to define and express their identities as members of opposing cultural groups (ones that largely mirror national ones but that have certain distinctive local qualities) who harbor deep-seated disagreements about the nature of the best way to live. The other “climate change” is the one that everyone in Southeast Florida, regardless of their cultural outlook, has to live with--the one that they all understand and accept poses major challenges, the surmounting of which will depend on effective collective action (Kahan 2015a).
Each “climate change” has its own conversation.
For the first, the question under discussion is “who are you, whose side are you on?” For the second, it is “what do we know, what do we do?”
In Southeast Florida, at least, the only “climate change” discussion that has been “banned” from political discourse is the first one. Silencing this polarizing style of engagement is exactly what has made it possible for the region’s politically diverse citizens to engage in the second, unifying discussion of climate change aimed at exploiting what science knows about how to protect their common interests.
This development in the region’s political culture (one that is by no means complete or irreversible) didn’t occur by accident. It was accomplished through inspired, persistent, informed leadership . . . .
"Monetary preference falsification": a thought experiment to test the validity of adding monetary incentives to politically motivated reasoning experiments
From something or other and basically an amplification of a point from Kahan (in press)
1. Monetary preference falsification
Imagine I am solicited and agree to participate in an experiment by researchers associated with the “Moon Walk Hoax Society,” which is dedicated to “exposing the massive fraud perpetrated by the U.S. government, in complicity with the United Nations, in disseminating the misimpression that human beings visited the Moon in 1969 or at any point thereafter.” These researchers present me with a "study" containing what I’m sure are bogus empirical data suggesting that a rocket the size of Apollo 11 could not have contained a sufficient amount of fuel to propel a spacecraft to the surface of the moon.
After I read the study, I am instructed that I will be asked questions about the inferences supported by the evidence I just examined and will be offered a monetary reward (one that I would actually find meaningful; I am not an M Turk worker, so it would have to be more than $0.10, but as a poor university professor, $1.50 might suffice) for “correct answers.” The questions all amount to whether the evidence presented supports the conclusion that the 1969 Moon-landing never happened.
Because I strongly suspect that the researchers believe that that is the “correct” answer, and because they’ve offered to pay me if I claim to agree, I indicate that the evidence—particularly the calculations that show a rocket loaded with as much fuel as would fit on the Apollo 11 could never have made it to the Moon—is very persuasive proof that the 1969 Moon landing for sure didn't really occur.
If a large majority of the other experiment subjects respond the way I do, can we infer from the experiment that all the "unincentivized" responses that pollsters have collected on the belief that humans visited the Moon in 1969 are survey “artifacts,” & that the appearance of widespread public acceptance of this “fact” is “illusory” (Bullock, Gerber, Hill & Huber 2015)?
As any card-carrying member of the “Chicago School of Behavioral Economics, Incentive-Compatible Design Division” will tell you, the answer is, "Hell no, you can't!"
Under these circumstances, we should anticipate that a great many subjects who didn’t find the presented evidence convincing will have said they did in order to earn money by supplying the response they anticipated the experimenters would pay them for.
Imagine further that the researchers offered the subjects the opportunity, after they completed the portion of the experiment for which they were offered incentives for “correct” answers, to indicate whether they found the evidence “credible.” Told that at this point there would be no “reward” for a “correct” answer or penalty for an incorrect one, the vast majority of the very subjects who said they thought the evidence proved that the moon landing was faked now reveal that they thought the study was a sham (Khanna & Sood 2016).
Obviously, it would be much more plausible to treat that "nonincentivized" answer as the one that finally revealed what all the respondents truly believed.
By their own logic, researchers who argue that monetary incentives can be used to test the validity of experiments on politically motivated reasoning invite exactly this response to their studies. These researchers might not have expectations as transparent or silly as those of the investigators who designed the "Moon walk hoax" public opinion study. But they are furnishing their subjects with exactly the same incentive: to make their best guess about what the experimenter will deem to be a "correct" response--not to reveal their own "true beliefs" about politically contested facts.
Studies as interesting as Khanna and Sood (2016) can substantially enrich scholarly inquiry. But seeing how requires looking past the patently unpersuasive claim that "incentive compatible methods" are suited for testing the external validity of politically motivated reasoning experiments (Bullock, Gerber, Hill & Huber 2015).
Bullock, J.G., Gerber, A.S., Hill, S.J. & Huber, G.A. Partisan Bias in Factual Beliefs about Politics. Quarterly Journal of Political Science 10, 519-578 (2015).

Kahan, D.M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences (in press).
Khanna, Kabir & Sood, Gaurav. Motivated Learning or Motivated Responding? Using Incentives to Distinguish Between Two Processes (2016), available at http://www.gsood.com/research/papers/partisanlearning.pdf.
WSMD? JA! Are science-curious people just *too politically moderate* to polarize as they get better at comprehending science?
This is approximately the 9,616th episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.
Observed in data from the CCP/Annenberg Public Policy Center Science of Science Filmmaking Initiative, the property of science curiosity that has aroused so much curiosity among this site’s 14 billion regular subscribers (plus countless others) was its defiance of the “second law” of the science of science communication: motivated system 2 reasoning—also known by its catchy acronym, MS2R!
MS2R refers to the tendency of identity-protective reasoning—and as a result, cultural polarization—to grow in intensity in lock step with proficiency in the reasoning dispositions necessary to understand science. It is a pattern that has shown up time and again in the study of how people assess evidence relating to societally contested risks.
But as I showcased in the original post and reviewed "yesterday," science curiosity (measured with “SCS_1.0”) seems to break the mold: rather than amplify opposing states of belief, science curiosity exerts a uniform directional influence on perceptions of human-caused climate change and other putative risk sources in all people, regardless of their political orientations or level of science comprehension.
An intriguing, and appealing, surmise is that the appetite to learn new and surprising facts neutralizes the defensive information-processing style that identity-protective cognition comprises.
But this is really just a conjecture, one that is in desperate need of further study.
Such study, moreover, will be abetted, not thwarted, by the articulation of plausible alternative hypotheses. The best empirical studies are designed so that no matter what result they generate we’ll have more reason than we did before to credit one hypothesis relative to one or more rival ones.
In this spirit, I solicited commentators to suggest some plausible alternative explanations for the observed quality of science curiosity.
I talked about one of those "yesterday": the possibility that science curiosity might exert an apparent moderating effect only because in fact those high in science curiosity aren’t uniformly proficient enough in science comprehension to bend evidence in the direction necessary to fit positions congenial to their identities.
As I explained, I don’t think that’s true: again, the evidence in the existing dataset, which was assembled in Study 1 of the CCP/APPC “science of science filmmaking initiative,” seems to show that science curiosity moderates science comprehension’s magnification of political polarization even in those subjects who score highest in an assessment (the Ordinary Science Intelligence scale) of that particular reasoning proficiency.
But that’s just a provisional assessment, of course.
Today I take up another explanation, viz., that “science-curious” individuals might be more politically moderate than science-incurious ones.
Based on how science curiosity affected views on climate change, @AaronMatch raised the possibility that “scientifically-curious conservatives” might be “more moderate than their conservative peers.”
This would indeed be an explanation at odds with the conjecture that science curiosity stifles or counteracts identity-protective cognition.
If people who are high in science curiosity happen to be disposed to adopt more moderate political stances than less curious people of comparable self-reported political orientations, then obviously increased science curiosity will not drive citizens of opposing self-reported political orientations apart—not because curiosity affects how they process information, but because curiosity is simply an indicator of being less intensely partisan than one might otherwise appear.
Do the data fit this surmise?
Arguably, @Aaron’s view reflects an overly “climate change centric” view of the data. Neither highly science-curious conservatives nor highly science-curious liberals seem “more moderate” than their less curious counterparts on the risks of handgun possession or unlawful entry of immigrants into the US, for example. In addition, if “moderation” for conservatives is defined as “tending toward the liberal point of view,” then higher science comprehension predicts that more strongly than higher science curiosity on the risks of legalizing marijuana and of pornography. . . .
But to really do justice to the “science-curious folks are more moderate” hypothesis, I think we’d have to see how science curiosity relates to various policy positions on which partisans tend to disagree. Then we could see if science-curious individuals do indeed adopt less extreme stances on those issues than do individuals who have the same score on “Left_right,” the scale that combines self-reported liberal-conservative ideology and political-party identification, but lower scores on SCS.
There weren’t any policy-position items in our “science of science documentary filmmaking” Study No. 1 . . . .
But of course we did collect cultural worldview data!
These can be used to do something pretty close to what I just described. The six-point “agree-disagree” CW items reflect values of fairly obvious political significance (e.g., “The government interferes far too much in our everyday lives”; “Our society would be better off if the distribution of wealth was more equal”). The “science curiosity = political moderation” thesis, then, should predict that relatively science curious individuals will be more “middling” in their cultural outlooks than individuals who are less science curious.
That doesn’t seem to be true, though.
These Figures plot, separately for subjects above and below the mean on SCS (the science curiosity scale), the relationship between the study subjects’ scores on the cultural worldview scales and their scores on “Left_right,” the composite measure formed by combining their responses to a five-point liberal-conservative ideology item and a seven-point party-identification item.
If relatively science-curious subjects were more politically “moderate” than relatively incurious subjects with equivalent self-reported left-right political orientations, then we’d expect the slope for the solid lines to be steeper than the dotted ones in these Figures. They aren’t. The slopes are basically the same.
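The slope comparison just described can be sketched in code. This is a minimal illustration with simulated data; the column names (Left_right, SCS, hierarchy) are stand-ins for the actual study variables, and the simulated effect sizes are invented for demonstration only:

```python
# Sketch: does the Left_right slope on a cultural worldview scale differ
# for subjects above vs. below the mean on science curiosity (SCS)?
# Simulated data; variable names are illustrative, not the real dataset's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "Left_right": rng.normal(size=n),
    "SCS": rng.normal(size=n),
})
# Simulate a worldview score that depends on Left_right but not on SCS.
df["hierarchy"] = 0.6 * df["Left_right"] + rng.normal(scale=0.8, size=n)
df["high_scs"] = (df["SCS"] > df["SCS"].mean()).astype(int)

# If science-curious subjects were more "moderate," the Left_right slope
# would be flatter when high_scs == 1; the interaction term tests that.
m = smf.ols("hierarchy ~ Left_right * high_scs", data=df).fit()
print(m.params[["Left_right", "Left_right:high_scs"]])
```

An interaction coefficient near zero corresponds to the visual result in the Figures: the solid and dotted lines have basically the same slope.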
Here are Figures that plot the probability that a subject with any particular Left_right score will hold the cultural worldviews of an “egalitarian communitarian,” an “egalitarian individualist,” a “hierarchical communitarian,” or a “hierarchical individualist” – first for the sample overall, and then for subjects identified by their relative science curiosity.
The only noticeable difference between relatively curious and incurious subjects is how likely politically moderate ones are to be either “egalitarian individualists” or “hierarchical communitarians.”
I’m not sure what to make of this except to say that it isn’t what you’d expect to see if science-curious subjects were more politically moderate than science-incurious ones conditional on their political orientations. If that were so, then the differences in the probabilities of holding one or another combination of cultural outlooks would be concentrated at one or both extremes of the Left_right political orientation scale, not in the middle.
To make this a bit more concrete, remember that the “cultural types” most polarized on climate change are egalitarian communitarians and hierarchical individualists.
Thus, in order for the “science curious => politically moderate” thesis to explain the observed effect of science curiosity in relation to partisan views on human-caused global warming, science-curious subjects located at the extremes of the Left_right measure would have to be less likely than science-incurious ones to be members of those cultural communities.
So I think based on the data on hand that it’s unlikely the impact of science curiosity in defying the law of MS2R is attributable to a correlation between that disposition and political moderation.
But as I said, the data on hand aren’t nearly as suited for testing that hypothesis as lots of other kinds would be. So for sure I’d keep this possibility in mind in designing future studies.
BTW, for purposes of highlighting science curiosity’s defiance of MS2R, I’ve been using Left_right as the latent-disposition measure that drives identity-protective cognition. But one can see the same thing if one uses cultural worldviews for that purpose.
Take a look:
Actually, these cultural worldview data make me want to say something—along the lines of something I said before once (or twice or five thousand times), but quite a while ago; before all but maybe 3 or 4 billion of the regular readers of this blog were even born!—about the relationship between left-right measures and the cultural cognition worldview scales.
And now that I think of it, it’s related to what I said the other day about alternative measures of the dispositions that drive identity-protective cognition. . . .
But for sure, this is more than enough already for one blog post! I’ll have to come back to this “tomorrow.”
3.2 Operationalizing identity
Scholars have used diverse frameworks to measure the predispositions that inform politically motivated reasoning. Left-right political outlooks are the most common (e.g., Lodge & Taber 2013; Kahan 2013). “Cultural worldviews” are used in other studies (e.g., Bolsen, Druckman & Cook 2014; Druckman & Bolsen 2011; Kahan, Braman, Cohen, Gastil & Slovic 2010) that investigate “cultural cognition,” a theoretical operationalization of motivated reasoning directed at explaining conflict over societal risks (Kahan 2012).
The question whether politically motivated reasoning is “really” driven by “ideology” or “culture” or some other abstract basis of affinity is ill-posed. One might take the view that myriad commitments—including not only political and cultural outlooks but religiosity, race, gender, region of residence, among other things—figure in politically motivated reasoning on “certain occasions” or to “some extent.” But much better would be to recognize that none of these is the “true” source of the predispositions that inform politically motivated reasoning. Measures of “left-right” ideology, cultural worldviews, and the like are simply indicators of—imperfect, crude proxies for—a latent or unobserved shared disposition that orients information processing. Studies that use alternative predisposition constructs, then, are not testing alternative theories of “what” motivates politically motivated reasoning. They are simply employing alternative measures of whatever it is that does (Kahan, Peters et al. 2012).
The only reason there could be for preferring one scheme for operationalizing these predispositions over another is its explanatory, predictive, and prescriptive utility. One can try to explore this issue empirically, either by examining the psychometric properties of alternative latent-variable measures of motivating dispositions (Xue, Hine, Loi, Thorsteinsson, Phillips 2014) or simply by putting alternative ones to practical explanatory tests (Figure 4). But even these pragmatic criteria are unlikely to favor one predisposition measure across all contexts. The best test of whether a researcher is using the “right” construct is what she is able to do with it.
Bolsen, T., Druckman, J.N. & Cook, F.L. The influence of partisan motivated reasoning on public opinion. Polit Behav 36, 235-262 (2014).
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735.
Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (ed. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).
Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).
Kahan, D. M. (2013). Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making, 8, 407-424.
Lodge, M. & Taber, C.S. The rationalizing voter (Cambridge University Press, Cambridge ; New York, 2013).
Xue, W., Hine, D.W., Loi, N.M., Thorsteinsson, E.B. & Phillips, W.J. Cultural worldviews and environmental risk perceptions: A meta-analysis. Journal of Environmental Psychology 40, 249-258 (2014).
WSMD? JA! Do science-curious people just not *know* enough about science to be "good at" identity-protective cognition?
This is approximately the 4,386th episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.
So lots of curious commentators had questions about the data I previewed on the relationship between science curiosity, science comprehension, and political polarization. They posed really good questions that reflect opposing hypotheses about the dynamics that could have produced the intriguing patterns I showcased.
I don’t have the data (sadly, but also not sadly, since now I can figure out what to collect next time) that I’d really want to have to answer their questions, test their hypotheses. But I’ve got some stuff that’s relevant and might help to focus and inform the relevant conjectures.
I’ll start, though, by just briefly rehearsing what the cool observations were that triggered the reflective theorizing in the comment thread.
Here is the key graphic:
What it shows is that science comprehension (left panel for each pair) and science curiosity (right) have different impacts on the extent of partisan disagreement over contested societal risks.
Science comprehension (here measured with the "Ordinary Science Intelligence" assessment) magnifies polarization. This is not news; this sad feature of the class of societal risks that excite cultural division (that class is limited!) is something researchers have known about for a long time.
But science curiosity doesn’t have that effect. Obviously, the respondents who are most science-curious are not converging in a dramatic way. But the pattern observed here—that science curiosity basically moves diverse respondents in the same general direction in their assessment of disputed risks—suggests that individuals who are high in that particular disposition are processing information in a basically similar way.
That’s pretty radical. Because pretty much every manner of reasoning proficiency related to science comprehension does seem to be associated with greater polarization—so to find one that isn’t is startling, intriguing, encouraging & for sure something that cries out for explanation and further interrogation.
In the post, I speculated that science curiosity might be a cognitive antidote to politically motivated reasoning: in those who experience this appetite intensely, the anticipated pleasure of being surprised displaces the defensive style of information processing that people (especially those proficient in critical reasoning) employ to deflect assent to information that might challenge a belief integral to their identities as members of one or another cultural group.
But responding to my invitation, commentators helpfully offered some alternative explanations.
I think I can shed some light on a couple of those alternatives.
Not a dazzling amount of light but a flicker or two. Enough to make the outlines of this strange, intriguing thing slightly more definite than they were in the original post—but without making them nearly clear enough to extinguish the curiosity of anyone who might be impelled by the appetite for surprise to probe more deeply . . . .
Actually, there are two specific conjectures I want to consider:
1. @AndyWest: Is the impact of science curiosity in mitigating polarization confined to individuals who are low in science comprehension?
2. @AaronMatch: Are “science-curious” individuals more politically moderate than science-noncurious ones?
I’ll take up @AndyWest’s query today & return to @AaronMatch’s “tomorrow.”
* * *
So: @AndyWest suggests, in effect, that the patterns observed in the data might have nothing really to do with the effect of science curiosity on information processing but only with the effects of greater science comprehension in stimulating polarization about climate change.
Those who know more about a particular domain of contested science, such as that surrounding climate change, use that knowledge (opportunistically) to protect their identities more aggressively and completely than those who know less. That’s why increased science comprehension is associated with greater polarization.
Because science curiosity (as I indicated) is only modestly correlated with science comprehension, we wouldn’t see magnified polarization as science curiosity alone increases. Indeed, for sure we wouldn’t see it in my graphics, which illustrated the respective impact of science comprehension and science curiosity controlling for the other (i.e., setting the predictor value for the other at its mean in the sample).
But the reason we’d not be seeing magnified polarization wouldn’t be that science curiosity stifles identity-protective cognition. It would be that it simply lacks the power to enhance identity-protective reasoning associated with elements of critical reasoning that make one genuinely more proficient in making sense (or anti-sense, if that’s what protecting one’s identity requires) of scientific data.
This is for sure a very pertinent, appropriate follow-up response to the post!
I gestured toward it in my original post, actually, by saying that I had run some analyses that looked at the interaction of science comprehension and science curiosity. The aim of those analyses was to figure out if the effect of increasing science curiosity in arresting increased polarization is conditional on the level of subjects’ science comprehension. But I didn’t report those analyses.
Well, here they are:
What these loess (locally weighted regression) analyses suggest is that the impact of science curiosity is pretty much uniform at all levels of science comprehension as measured by the Ordinary Science Intelligence assessment.
There is obviously a big gap in “belief in human-caused climate change” among individuals who vary in science comprehension.
But whether someone is in the top 1/2 of or the bottom 1/2 of science comprehension-- indeed, whether someone is in the bottom decile or top decile of science comprehension-- greater science curiosity predicts a greater probability of agreeing that human beings are the principal cause of climate change, regardless of one's political outlooks.
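A minimal sketch of that loess comparison, using simulated data (the column names and effect sizes are illustrative, not the study's actual variables), might look like this:

```python
# Sketch: smooth P(belief in human-caused climate change) against science
# curiosity (SCS), separately for subjects in the bottom and top halves of
# science comprehension (OSI). Simulated data; names are illustrative.
import numpy as np
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)
n = 1500
df = pd.DataFrame({"SCS": rng.normal(size=n), "OSI": rng.normal(size=n)})
# Simulate: curiosity raises belief uniformly; comprehension shifts levels.
p = 1 / (1 + np.exp(-(0.3 * df["SCS"] + 0.5 * df["OSI"])))
df["belief"] = rng.binomial(1, p)

trends = {}
for high_osi, grp in df.groupby(df["OSI"] > df["OSI"].median()):
    sm = lowess(grp["belief"], grp["SCS"], frac=0.6)
    # sm[:, 0] is sorted SCS; sm[:, 1] is the smoothed P(belief).
    trends[high_osi] = sm[-1, 1] - sm[0, 1]  # rise over the SCS range
print(trends)
```

If science curiosity's effect were confined to the low-comprehension subjects, the smoothed curve would rise only in the bottom-half group; a uniform effect shows up as a positive trend in both.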
We can discipline this visual inference by modeling the data:
This logistic regression confirms that there is no meaningful interaction between science curiosity (SCS) and science comprehension (OSI_i). The coefficients for the cross-product interaction terms for science curiosity and science comprehension (OSIxSCS) and for science curiosity, science comprehension, and political outlooks (crxosixscs) are all trivially different from zero.
In other words, the impact of science curiosity in increasing the probability of belief in human-caused climate change (b = 0.31, z = 5.51) is pretty much uniform at every level of science comprehension regardless of political orientation.
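For readers who want to see the shape of such a model, here is a sketch with simulated data. The variable names (Left_right, OSI, SCS) and the simulated coefficients are illustrative stand-ins, not the study's actual estimates:

```python
# Sketch: logistic model of belief in human-caused climate change on
# political outlook (Left_right), science comprehension (OSI), science
# curiosity (SCS), and their cross-products. Simulated data throughout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "Left_right": rng.normal(size=n),
    "OSI": rng.normal(size=n),
    "SCS": rng.normal(size=n),
})
# Simulate the observed pattern: comprehension magnifies polarization
# (Left_right x OSI), while curiosity has a uniform positive effect with
# no curiosity interactions.
logit_p = (-0.5 * df["Left_right"]
           - 0.4 * df["Left_right"] * df["OSI"]
           + 0.3 * df["SCS"])
df["belief"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

m = smf.logit("belief ~ Left_right * OSI * SCS", data=df).fit(disp=0)
# A "no interaction" result: SCS positive, curiosity cross-products ~ 0.
print(m.params[["SCS", "OSI:SCS", "Left_right:OSI:SCS"]])
```

In a fit like this, cross-product coefficients near zero are what license the conclusion that curiosity's effect is uniform across comprehension and political outlook.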
Here’s a graphic representation of the regression output (one in which I’ve omitted the cross-product interaction terms, the inclusion of which would add noise but not change the inferential import of the analyses):
Again, science comprehension for sure magnifies polarization.
But at every level of science comprehension, science curiosity has the same impact (reflected in the slope of the plotted predicted probabilities): it promotes greater acceptance of human-caused climate change--among both "liberal Democrats" and "conservative Republicans."
So this is evidence, I think, that is inconsistent with @AndyWest’s surmise. It suggests that the power of science curiosity--alone among science-reasoning proficiencies--to constrain magnification of polarization is not a consequence of the dearth of high science-comprehending individuals among the segment of the population that is most science curious.
On the contrary, the polarization-constraining effect of science curiosity extends even to those at the highest level of science comprehension.
@AndyWest had suggested that an analysis like this be carried out among individuals highest in “OCSI”—the “Ordinary Climate Science Intelligence” assessment. This data set doesn’t have OCSI scores in it. But I do know that there is a pretty decent positive correlation between OSI and OCSI (particularly OSI and the new OCSI_2.0, to be unveiled soon!), so it seems pretty unlikely to me the results would be different if I had looked for an OCSI-SCS rather than an OSI-SCS interaction.
Still, I don’t think this “settles” anything really. We need more fine-grained data, as I’ve emphasized, throughout.
But this closer look at the data at hand does nothing to dispel the intriguing possibility that science curiosity might well be a disposition that negates identity-protective cognition.
More “tomorrow” on science curiosity and “political moderation.”
Incentives and politically motivated reasoning: we can learn something but only if we don't fall into the " 'external validity' trap"
From revision to "The Politically Motivated Reasoning Paradigm" paper. Been meaning to address the interesting new studies on how incentives affect this form of information processing. Here's my (provisional as always) take. It owes a lot to helpful exchanges w/ Gaurav Sood, who likely disagrees with everything I say; maybe I can entice/provoke him into doing a guest post! But in any case, his curiosity & disposition to acknowledge complexity equip him both to teach & learn from others regardless of how divergent his & their "priors."
6. Monetary incentives
Experiments that reflect the PMRP design are “no stake” studies: that is, subjects answer however they “feel” like answering; the cost of a “wrong” answer and the reward for a “correct” one are both zero. In an important development, several researchers have recently reported that offering monetary incentives can reduce or eliminate polarization in the answers that subjects of diverse political outlooks give to questions of partisan import (Khanna & Sood 2016; Prior, Sood & Khanna 2015; Bullock, Gerber, Hill & Huber 2015).
The quality of these studies is uneven. The strongest, Khanna & Sood (2016), uses the PMRP design. K&S show that offering incentives reduces the tendency of high numeracy subjects to supply politically biased answers in interpreting covariance data in a gun-control experiment, a result reported in Kahan et al. (2013) and described in Section 4.
PSK and BGHH, in contrast, examine subject responses to factual quiz questions (e.g., “. . . has the level of inflation [under President Bush] increased, stayed the same, or decreased?”; “how old is John McCain?” (Bullock et al. 2015, pp. 532-33)). Because this design does not involve information processing, it doesn’t show how incentives affect the signature feature of politically motivated reasoning: the opportunistic adjustment of the weight assigned to new evidence conditional on its political congeniality.
Both K&S and BGHH, moreover, use M Turk worker samples. Manifestly unsuited for the study of politically motivated reasoning generally (see Section 3.3), M Turk samples are even less appropriate for studies on the impact of incentives on this form of information processing. M Turk workers are distinguished from members of the general population by their willingness to perform various forms of internet labor for pennies per hour. They are also known to engage in deliberate misrepresentation of their identities and other characteristics to increase their on-line earnings (Chandler & Shapiro 2016). Thus, how readily they will alter their reported beliefs in anticipation of earning monetary rewards for guessing what researchers regard as “correct” answers furnishes an unreliable basis for inferring how members of the general public form beliefs outside the lab, with incentives or without them.
But assuming, as seems perfectly plausible, that studies of ordinary members of the public corroborate the compelling result reported in K&S, a genuinely interesting, and genuinely complex, question will be put: what inference should be drawn from the power of monetary incentives to counteract politically motivated reasoning?
BGHH assert that such a finding would call into doubt the external validity of politically motivated reasoning research. Attributing the polarized responses observed in “no stake” studies to the “expressive utility that [study respondents] gain from offering partisan-friendly survey responses,” BGHH conclude that the “apparent gulf in factual beliefs between members of different parties may be more illusory than real” (Bullock et al., pp. 520, 523).
One could argue, though, that BGHH have things exactly upside down. In the real world, ordinary members of the public don’t get monetary rewards for forming “correct” beliefs about politically contested factual issues. In their capacity as voters, consumers, or participants in public discussion, they don’t earn even the paltry expected-value equivalent of the lottery prizes that BGHH offered their M Turk worker subjects for getting the “right answer” to quiz questions. Right or wrong, an ordinary person’s beliefs are irrelevant in these real-world contexts, because any action she takes based on her beliefs will be too inconsequential to have any impact on policymaking.
The only material stake most ordinary people have in the content of their beliefs about policy-relevant facts is the contribution that holding them makes to the experience of being a particular sort of person. The deterrent effect of concealed-carry laws on violent crime, the contribution of human activity to global warming, the impact of minimum wage laws on unemployment—all of these are positions infused with social meanings. The beliefs a person forms about these “facts” reliably dispose her to act in ways that others will perceive to signify her identity-defining group commitments (Kahan in press_a). Failing to attend to information in a manner that generates such beliefs can have a very severe impact on her wellbeing—not because the beliefs she’d form otherwise would be factually wrong but because they would convey the wrong message about who she is and whose side she is on. The interest she has in cultivating beliefs that reliably summon an identity-expressive affective stance on such issues is what makes politically motivated reasoning rational.
No-stake PMRP designs seek to faithfully model this real-world behavior by furnishing subjects with cues that excite this affective orientation and related style of information processing. If one is trying to model the real-world behavior of ordinary people in their capacity as citizens, so-called “incentive compatible designs”—ones that offer monetary incentives for “correct” answers—are externally invalid because they create a reason to form “correct” beliefs that is alien to subjects’ experience in the real-world domains of interest.
On this account, expressive beliefs are what are “real” in the psychology of democratic citizens (Kahan in press_a). The answers they give in response to monetary incentives are what should be regarded as “artifactual,” “illusory” (Bullock et al., pp. 520, 523) if we are trying to draw reliable inferences about their behavior in the political world.
It would be a gross mistake, however, to conclude that studies that add monetary incentives to PMRP designs (e.g., Khanna & Sood 2016) furnish no insight into the dynamics of human decisionmaking. People are not merely democratic citizens, not only members of particular affinity groups, but also many other things, including economic actors who try to make money, professionals who exercise domain-specific expert judgments, and parents who care about the health of their children. The style of identity-expressive information processing that protects their standing as members of important affinity groups might well be completely inimical to their interests in these domains, where being wrong about consequential facts would frustrate their goals.
Understanding how individuals negotiate this tension in the opposing “stakes” they have in forming accurate beliefs and identity-expressive ones is itself a project of considerable importance for decision science. The theory of “cognitive dualism” posits that rational decisionmaking comprises a capacity to employ multiple, domain-specific styles of information processing suited to the domain-specific goals that individuals have in using information (Kahan 2015b). Thus, a doctor who is a devout Muslim might process information on evolution in an identity-expressive manner “at home”—where “disbelieving” in it enables him to be a competent member of his cultural group—but in a truth-seeking manner “at work”—where accepting evolutionary science enables him to be a competent oncologist (Hameed & Everhart 2013). Or a farmer who is a “conservative” might engage in an affective style of information processing that evinces “climate skepticism” when doing so certifies his commitment to a cultural group identified with “disbelief” in climate change, but then turn around and join the other members of that same cultural group in processing such information in a truth-seeking way that credits climate science insights essential to being a successful farmer (Rejesus et al. 2013).
If monetary incentives do meaningfully reverse identity-protective forms of information processing in studies that reflect the PMRP design, then a plausible inference would be that offering rewards for “correct answers” is a sufficient intervention to summon the truth-seeking information-processing style that (at least some) subjects use outside of domains that feature identity-expressive goals. In effect, the incentives transform subjects from identity-protectors to knowledge revealers (Kahan 2015a), and activate the corresponding shift in information-processing styles appropriate to those roles.
Whether this would be the best understanding of such results, and what the practical implications of such a conclusion would be, are also matters that merit further, sustained empirical inquiry. Such a program, however, is unlikely to advance knowledge much until scholars abandon the pretense that monetary incentives are the “gold standard” of experimental validity in decision science as opposed to simply another methodological device that can be used to test hypotheses about the interaction of diverse, domain-specific forms of information processing.
Chandler, J. & Shapiro, D. Conducting Clinical Research Using Crowdsourced Convenience Samples. Annual Review of Clinical Psychology (2016), advance on-line publication at http://www.annualreviews.org/doi/abs/10.1146/annurev-clinpsy-021815-093623.
Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).
Kahan, D.M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences (in press).
Khanna, Kabir & Sood, Gaurav. Motivated Learning or Motivated Responding? Using Incentives to Distinguish Between Two Processes (working), available at http://www.gsood.com/research/papers/partisanlearning.pdf.
Prior, M., Sood, G. & Khanna, K. You Cannot be Serious: The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions. Quarterly Journal of Political Science 10, 489-518 (2015).
Rejesus, R.M., Mutuc-Hensley, M., Mitchell, P.D., Coble, K.H. & Knight, T.O. US agricultural producer perceptions of climate change. Journal of agricultural and applied economics 45, 701-718 (2013).
Weekend update: Disentanglement Principle, A Lesson from SE Fla Climate Political Science Lecture (& slides)
A presentation I gave at a meeting of the Institute for Sustainable Communities, a major partner of the Southeast Florida Regional Climate Compact. Synthesizes research CCP (with generous support from the Skoll Global Threats Foundation) has done to support Compact science communication.
If the Compact members have learned from our research even 10^-2 of what they've taught us about what "climate change" means and what it takes to have the right conversation & banish the wrong conversation about it, then I'll feel we've done something pretty important. Even more important will be to help others learn the lessons of Southeast Florida Climate Political Science . . . .
Science curiosity and identity-protective cognition ... a glimpse at a possible (negative) relationship
So here is a curious phenomenon: unlike pretty much every other science-related reasoning disposition, science curiosity seems to avoid magnifying identity-protective cognition!
Let's start with a bunch of culturally contested societal risks, ones on which political polarization can be assessed with the ever-handy Industrial Strength Risk Perception Measure:
For each risk, the paired panels chart the risk-perception impact of greater science comprehension and greater science curiosity (in each case “controlling for” the influence of the other), respectively. They estimate those effects separately, moreover, for a "liberal Democrat" and for a "conservative Republican," designations determined by reference to the study subjects' scores on a composite political ideology and party-identification scale.
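For anyone curious about the mechanics behind panels like these, the estimation strategy can be sketched in a few lines of Python. Everything below is simulated and hypothetical (the coefficients and data are illustrative stand-ins, not the CCP/APPC sample): regress an ISRPM-style risk perception on science comprehension, political outlook, and their product, then generate predicted values for a "liberal Democrat" and a "conservative Republican" profile.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated standardized predictors (hypothetical, not the actual sample)
osi = rng.normal(size=n)       # science comprehension (OSI-like, z-scored)
conserv = rng.normal(size=n)   # right-leaning political outlook (z-scored)

# Data-generating process built to mimic the polarization pattern described
# in the post: the effect of comprehension depends on political outlook.
risk = 0.1 * osi - 0.4 * conserv - 0.3 * osi * conserv + rng.normal(size=n)

# OLS with an interaction term: risk ~ osi + conserv + osi:conserv
X = np.column_stack([np.ones(n), osi, conserv, osi * conserv])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)

def predicted_risk(osi_z, conserv_z):
    """Model-implied risk perception for a given profile."""
    return beta @ np.array([1.0, osi_z, conserv_z, osi_z * conserv_z])

# "Liberal Democrat" vs. "conservative Republican" profiles (+/- 1 SD)
for label, c in [("liberal Democrat", -1.0), ("conservative Republican", 1.0)]:
    lo, hi = predicted_risk(-1.0, c), predicted_risk(1.0, c)
    print(f"{label}: risk at low OSI {lo:+.2f}, at high OSI {hi:+.2f}")

# The gap between profiles widens as OSI rises: the signature of
# identity-protective cognition magnified by science comprehension.
gap_low = abs(predicted_risk(-1, -1) - predicted_risk(-1, 1))
gap_high = abs(predicted_risk(1, -1) - predicted_risk(1, 1))
print(f"polarization gap: low OSI {gap_low:.2f}, high OSI {gap_high:.2f}")
```

The interaction term is what allows the comprehension slope to differ by political outlook; without it the two profiles' lines would be constrained to run parallel.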
As science comprehension (measured with the Ordinary Science Intelligence assessment) increases, so too does the degree of polarization on politically contested risks involving climate change, gun possession, fracking, marijuana legalization, pornography, and the like.
That’s not a surprise. The warping effect of identity-protective cognition on cognitive reflection, numeracy, science comprehension and all other manner of critical reasoning proficiency has been exhaustively chronicled, and lamented, in this blog.
But that’s not what happens as science curiosity increases. On the contrary, in all cases, greater science curiosity has the same general risk-perception impact—in some cases enhancing concern, in some blunting it, and in others having no directional effect—for study respondents of politically diverse outlooks.
Science curiosity is being measured for these purposes with the CCP/Annenberg Public Policy Center “Science Curiosity Scale,” or SCS_1.0.
SCS_1.0 was developed for use in the CCP/APPC “Evidence-based Science Filmmaking Initiative.” Previous posts have discussed the development and properties of this measure, including its ability to predict engagement with science documentaries and other forms of science information among diverse groups.
So has its relationship to other, non-science-related activities, such as taking a peek at what goes on at gun shows and even cracking open a book on religion now & again.
But this feature of SCS_1.0—its apparent ability to defy the gravitational pull of identity-protective cognition on perceptions of disputed risks—is something I didn’t anticipate. . . .
Indeed, I really don’t want to give the impression that I “get” this, it makes “perfect sense,” etc. Or even that there’s necessarily a “there” there.
An observation like this is just corroboration of the fundamental law of the conservation of perplexity, which refers to the inevitable tendency of valid empirical research to generate one new profound mystery (at least one!) for every mystery that it helps to make less perplexing (anyone who thinks “mysteries” are ever solved by empirical inquiry has a boring conception of “mystery” or, more likely, a misconception of how empirical research works).
But here are some thoughts:
1. It does in fact make sense to think of curiosity as the cognitive negation of motivated reasoning. The latter disposition consists in the unconscious impulse to conform evidence to beliefs that serve some goal (like cultural identity protection) unrelated to figuring out the truth about some uncertain factual matter. Curiosity, in contrast, is an appetite not only to know the truth but to be surprised by it: it consists in a sense of anticipated pleasure in being shown that the world works in a manner that is astonishingly different from what one had thought, and in being able to marvel at the process that made it possible for one to see that.
When one is in that state, the sentries of identity-protection are necessarily standing down. The path is clear for truth to march in and enlighten . . . .
2. At the same time, these data are pretty baffling to me. No way did I expect to see this.
The affinity between identity-protective cognition and critical reasoning, I’m convinced, reflects the role the former plays in the successful negotiation of social interactions. Where positions on disputed issues of risk become entangled in social meanings that transform them into badges of membership in and loyalty to opposing cultural groups, it is perfectly rational, at the individual level, for people to adopt styles of information processing that conduce to formation of beliefs that express their tribal allegiances.
Indeed, not to attend to information in this manner would put normal people—ones whose personal beliefs about climate change or fracking or gun control don’t have any material impact on the risks they or anyone else face—in serious peril of ostracism and ridicule within their communities.
I’d essentially come to the bleak, depressing, spirit-enervating conclusion, then, that the only reasoning disposition likely to blunt the force of identity-protective cognition was a social disability in the nature of autism.
But now, for the 14 billionth time, I will have to rethink and reconsider.
Because clearly the appetite to seek out and consume information about the insights human beings have acquired through the use of science’s signature methods of disciplined observation and inference is no reasoning disability. And those who are most impelled to satisfy this appetite are clearly not using what they learn to forge even stronger links between their perceptions of how the world works and the views that express membership in their identity-defining affinity groups.
Or at least that’s one way to understand evidence like this. Pending more investigation.
3. All sorts of qualifications are in order.
a. For one thing, SCS_1.0 is a work in progress. Additional tests to refine and validate it are in the works.
b. For another, science comprehension and science curiosity are not wholly unrelated! Actually, they aren’t strongly related; in the data set from which these observations come, the correlation is about 0.3. But that's not zero!
I actually tested for “interactions” between science comprehension (as measured by OSI_2.0) and science curiosity (as measured by SCS_1.0), and between the two of them and political outlooks. The interactions were all pretty close to zero; they wouldn’t affect the basic picture I’ve shown you above (but I am happy to show more pictures—just tell me what you want to see).
Still I don’t think the effect of science curiosity on identity-protective cognition can be made sense of without closer, more fine-grained examination of how much it alters the trajectory of polarization at different levels of science comprehension.
c. Also, the impact of science curiosity is interesting only because it doesn’t magnify polarization. It doesn’t make it go away, as far as I can tell. That’s important—for the reasons stated. But a reasoning disposition that generated convergence among individuals of diverse cultural outlooks on culturally contested risks (as science comprehension does on culturally uncontested ones) would be much more remarkable—and important.
We should be looking for that. I’d say, though, that looking even harder at curiosity might help us detect whether there is such a reasoning quality—the “Ludwick factor” is the technical term among those who’ve speculated on its possible existence . . .—and how it might be disseminated and stimulated.
For surely that is a reasoning disposition that should be cultivated in the citizens of the Liberal Republic of Science.
But in the meantime, this unexpected, intriguing relationship can be contemplated by curious people with excitement and perplexity and with a desire to figure out even more about it.
So what do you think?
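For readers who want to see concretely what the interaction test described in point 3(b) above involves, here is a minimal sketch on simulated data (all coefficients hypothetical, not the actual study data): OSI and SCS are drawn with a ~0.3 correlation, the outcome interacts with political outlook through comprehension but not curiosity, and an OLS fit with both product terms recovers that pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Hypothetical data: OSI and SCS correlated ~0.3, as reported in the post
cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])
osi, scs = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
conserv = rng.normal(size=n)   # political outlook, z-scored

# Data-generating process mirroring the post's pattern: comprehension
# interacts with political outlook; curiosity does not.
risk = -0.4 * conserv - 0.3 * osi * conserv + 0.2 * scs + rng.normal(size=n)

# Regression with both interaction terms included
X = np.column_stack([np.ones(n), osi, scs, conserv,
                     osi * conserv, scs * conserv])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
b_osi_x, b_scs_x = beta[4], beta[5]

print(f"OSI x politics coefficient: {b_osi_x:+.3f}")   # substantial, negative
print(f"SCS x politics coefficient: {b_scs_x:+.3f}")   # near zero
```

An SCS-by-politics coefficient near zero, alongside a substantial OSI-by-politics one, is the statistical form of the claim that curiosity, unlike comprehension, doesn't magnify polarization.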
From something I'm working on. Anyone of the 14 billion regular readers of this blog could fill in the rest. But if you are one of 1.3 billion people who on any given day visit this site for the first time, there's more on the "'Two climate changes' thesis" here & here, among other places. . . .
America’s two “climate changes”
There are two climate changes in America: the one people “believe” or “disbelieve” in in order to express their cultural identities; and the one people ("believers" & "disbelievers" alike) acquire and use scientific knowledge about in order to make decisions of consequence, individual and collective. I will present various forms of empirical evidence—including standardized science literacy tests, lab experiments, and real-world field studies in Southeast Florida—to support the “two climate changes” thesis. I will also examine what this position implies about the forms of deliberative engagement necessary to rid the science communication environment of the toxic effects of the first climate change and to make it habitable for enlightened democratic engagement with the second.
Do science curious evolution believers and science curious nonbelievers both like to go to the science museum? How about to gun shows?
I've described highlights from the first study (a more complete report on which can be downloaded here) in some earlier posts. They include the development of a behaviorally validated "science curiosity" scale (one that itself involves performance and behavioral measures and not just self-reported interest ones), and the successful use of that scale to predict "engagement" --measured behaviorally, and not just with self-reported interest--in the cool Tangled Bank Studios documentary on evolution, Your Inner Fish.
Stay tuned for more reports about our findings in this ongoing project.
But for now, consider these interesting findings about the power of "SCS_1.0," the science curiosity scale we constructed, to predict one or another types of behavior.
The graphic shows, not surprisingly, that those who are more science curious are way more likely to do things like read science books and attend science museums.
Probably not that surprisingly, they might be slightly more likely to do other things, too, like go to an amusement park--or even a gun show--than science-uncurious people. But they really aren't much more likely to do those things than the average member of the population.
In addition to estimating the predicted mean probabilities for these activities conditional on science curiosity for the entire sample (a large nationally representative one), I've also estimated the predicted mean probabilities for individuals who say they "do" and "don't believe in" human evolution:
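The kind of estimate charted there can be sketched in Python on simulated data (all quantities hypothetical, not the ESFI sample): compute the mean probability of an activity, say visiting a science museum, within science-curiosity terciles, separately for evolution "believers" and "nonbelievers."

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000

# Hypothetical data: science curiosity score (z-scored) and a binary
# "believes in evolution" indicator, drawn independently of curiosity
scs = rng.normal(size=n)
believer = rng.random(n) < 0.6

# Museum-going probability rises with curiosity for both groups alike
p_museum = 1 / (1 + np.exp(-(-0.5 + 1.2 * scs)))
went = rng.random(n) < p_museum

# Estimated mean probability by curiosity tercile, within each group
terciles = np.quantile(scs, [1 / 3, 2 / 3])
bin_ix = np.digitize(scs, terciles)   # 0 = low, 1 = mid, 2 = high curiosity

results = {}
for label, mask in [("believers", believer), ("nonbelievers", ~believer)]:
    probs = [went[mask & (bin_ix == b)].mean() for b in range(3)]
    results[label] = probs
    print(label, [f"{p:.2f}" for p in probs])
```

If curiosity does the same work for both groups, the two rows climb in tandem, which is the pattern the study reports for science-related activities.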
One of the coolest things we found in ESFI Study No. 1 was that science curious individuals who "disbelieve in" evolution were just as engaged as science curious individuals who do believe in evolution. In addition, they were both substantially more engaged than their science-noncurious counterparts, most of whom yawned and turned the show off after a couple of minutes, no doubt hoping that the survey would resume its focus on Honey Boo Boo, "Inflate-gate," and other non-science related topics used to winnow out those less interested in science than in other interesting things.
Individuals who "disbelieve" in evolution but who were high in science curiosity also indicated that they found the information in the documentary clip valid and convincing as an account of the origins of human color vision.
Of course, that didn't "change their minds" on evolution. Their beliefs on that issue measure who they are—not what they know about science or what more they’d like to know about what human beings have discovered using science's signature methods of disciplined observation and inference. The experience of watching the cool Your Inner Fish clip satisfied their appetite to know what science knows but it didn't make them into different people!
Indeed, I think it likely succeeded in the former precisely because it didn't evince any interest in accomplishing the latter. It didn't put science curious people who have an identity associated with disbelief in evolution in the position of having to choose between being who they are and knowing what science knows.
Satisfying this criterion, which I've taken to calling the "disentanglement principle," is, I believe, a key element of successful science communication in pluralistic liberal society (Kahan 2015a, 2015b).
Anyway, check out what evolution believers & disbelievers do in their free time conditional on having the same level of science curiosity.
Many of the same things -- but not all!
I have ideas about what this means. But I'm out of time for today! So how about you tell me what you make of this?
Plata's Republic: Justice Scalia and the subversive normality of politically motivated reasoning . . . .
. . . Plata's Republic . . .
Civis: It is “fanciful,” you say, to think that three district court judges “relied solely on the credibility of the testifying expert witnesses” in finding that release of the prisoners would not harm the community?
Cognoscere Illiber: Yes, because “of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.”
Civis: “Of course” judges with “different policy views” would have formed different beliefs about the consequences if they had evaluated the same expert evidence? Why? Surely the judges, like all nonspecialists, would agree that these are matters outside their personal experience. Are you saying the judges would ignore the experts and decide on partisan grounds?
Cognoscere Illiber: No. “I am not saying that the District Judges rendered their factual findings in bad faith. I am saying that it is impossible for judges to make ‘factual findings’ without inserting their own policy judgments” on such matters. The “expert witnesses” here were of the sort trained to make “broad empirical predictions”—like whether “deficit spending will . . . lower the unemployment rate” or “the continued occupation of Iraq will decrease the risk of terrorism.”
Civis: But people normally assert that their policy positions on criminal justice, economic policy, and national security are based on empirical evidence. It almost sounds as if you are saying things are really the other way around—that what they understand the empirical evidence to show is “necessarily based in large part upon policy views.”
Cognoscere Illiber: Exactly what I am saying! Those sorts of “factual findings are policy judgments.” Thus, empirical evidence relating to the consequences of law should be directed to “legislators and executive officials”—not “the Third Branch”—since in a democracy it is the people’s “policy preferences,” not ours, that should be “dress[ed] [up] as factual findings.”
Civis: Ah. Thanks for telling me—I had been naively taking all the empirical arguments in politics at face value. Silly me! Now I see, too, that those naughty judges were just trying to exploit my gullibility about policy empiricism. Shame on them!
 Plata, 131 S. Ct. at 1954 (Scalia, J., dissenting).
 Id. at 1954-55.
 Id. at 1954.
 Id. at 1955.
* * *
Brown v. Plata was among the most consequential decisions of the 2010 Term—in multiple senses. In Plata, California attacked an order, issued by a three-judge federal district court, directing the state to release more than 40,000 inmates from its prisons. It was not disputed that California prisons had for over a decade been made to store double their intended capacity of 80,000 inmates. The stifling density of the population inside—“200 prisoners . . . liv[ing] in a gymnasium,” sleeping in shifts and “monitored by two or three guards”; “54 prisoners . . . shar[ing] a single toilet”; “50 sick inmates . . . held together in a 12- by 20-foot” cell; “suicidal inmates . . . held for prolonged periods in telephone-booth sized cages” ankle deep in their own wastes—was amply documented (with photographs, appended to the Court’s opinion, among other things). The awful effect on the prisoners’ mental and physical health was indisputable, too (“it is an uncontested fact that, on average, an inmate in one of California’s prisons needlessly dies every six to seven days”). These conditions, the district court concluded, violated the Eighth Amendment. The district court also saw that there was no prospect whatsoever that the state, having repeatedly rejected prison-expansion proposals and now in a budget crisis, would undertake the massive expenditures necessary to increase prison capacity and staffing. Accordingly, it ordered the only relief that, to it, seemed possible: the release of the number of inmates that the court deemed sufficient to bring the prisons into compliance with minimally acceptable constitutional standards.
The Supreme Court, in a five to four decision, affirmed. The major issue of contention between the majority and dissenting Justices was what consequence the ordered prisoner release would have on the public safety, a consideration to which the district court was obliged to give “substantial weight” by the Prison Litigation Reform Act of 1995. The district court devoted 10 days of the 14-day trial to receiving evidence on this issue, and concluded that use of careful screening protocols would permit the state to release the necessary number of inmates “in a manner that preserves public safety and the operation of the criminal justice system.”
The determinations underlying this finding, Justice Kennedy noted in his majority opinion, “are difficult and sensitive, but they are factual questions and should be treated as such.” The district court had “rel[ied] on relevant and informed expert testimony” by criminologists and prison officials, who based their opinion on “empirical evidence and extensive experience in the field of prison administration.” Indeed, some of that evidence, Justice Kennedy observed, had “indicated that reducing overcrowding in California’s prisons could even improve public safety” by abating prison conditions associated with recidivism. Like its other findings of fact, the district court’s determination that the State could fashion a reasonably safe release plan was not “clearly erroneous.”
The idea that the district court’s public safely determination was a finding of “fact” entitled to deferential review caused Justice Scalia to suffer an uncharacteristic loss of composure. Deference is due factfinders because they make “determination[s] of past or present facts” based on evidence such as live eyewitness testimony, the quality of which they are “in a better position to evaluate” than are appellate judges confined to a “cold record,” he explained. The public-safety finding of the three-judge district court, in contrast, consisted of “broad empirical predictions necessarily based in large part upon policy views.” “The idea that the three District Judges in this case relied solely on the credibility of the testifying expert witnesses is fanciful,” Scalia thundered.
Justice Scalia’s reaction to the majority’s reasoning in Plata is reminiscent of Wechsler’s to the Court’s in Brown. Like Scalia, Wechsler had questioned whether the finding in question—that segregated schools “retard the educational and mental development” of African American children—could bear the decisional weight that the Court was putting on it. But whereas Wechsler had only implied that the Court was hiding its moral-judgment light under an empirical basket—“I find it hard to think the judgment really turned upon the facts [of the case]”—Scalia was unwilling to bury his policymaking accusation in a rhetorical question. “Of course they [the members of the three-judge district court] were relying largely on their own beliefs about penology and recidivism” when they found that release was consistent with—indeed, might even enhance—public safety, Scalia intoned. “And of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.” “[I]t is impossible for judges to make ‘factual findings’ without inserting their own policy judgments, when the factual findings are policy judgments.”
Justice Scalia’s dissent is also akin to the reaction to “empirical factfinding” in the Supreme Court’s abortion jurisprudence. Justice Blackmun’s majority opinion in Roe v. Wade cited “medical data” supplied by “various amici” to demonstrate that “[m]odern medical techniques” had dissolved the state’s historic interest in protecting women’s health. “[T]he now-established medical fact . . . that until the end of the first trimester mortality in abortion may be less than mortality in normal childbirth” supported recognition of an unqualified right to abortion in that period. Ely, among others, challenged the Court’s empirics: “This [the medical safety of abortions relative to childbirth] is not in fact agreed to by all doctors—the data are of course severely limited—and the Court's view of the matter is plainly not the only one that is ‘rational’ under the usual standards.” In any case, “it has become commonplace for a drug or food additive to be universally regarded as harmless on one day and to be condemned as perilous on the next”—so how could “present consensus” among medical experts plausibly ground a durable constitutional right?
It can’t. “[T]ime has overtaken some of Roe’s factual assumptions,” the Court noted in Planned Parenthood of Southeastern Pennsylvania v. Casey. “[A]dvances in maternal health care allow for abortions safe to the mother later in pregnancy than was true in 1973, and advances in neonatal care have advanced viability to a point somewhat earlier.” Accordingly, culturally fueled enactments of and challenges to abortion laws continue—repeatedly confronting the Justices with new empirical questions to which their answers are denounced as motivated by “personal values.”
* * *
The only citizens who are likely to see the Court’s decision as more authoritative and legitimate when it resorts to empirical fact-finding in culturally charged cases are the ones whose cultural values are affirmed by the outcome. * * *
This factionalized environment incubates collective cynicism—about both the political neutrality of courts and about the motivations behind empirical arguments in policy discourse generally. Indeed, Justice Scalia’s extraordinary dissent in Plata synthesizes these two forms of skepticism.
It was “fanciful,” Scalia asserted, to think that the three district court judges “relied solely on the credibility of the testifying expert witnesses.” One might, at first glance, see him as merely rehearsing his standard diatribe against “judicial activism.” But this is actually a conclusion that Scalia deduces from premises—ones that don’t enter into his standard harangue—about the nature of empirical evidence and policymaking. The experts’ testimony, he explains, dealt with “broad empirical predictions”—ones akin to whether “deficit spending will . . . lower the unemployment rate,” or whether “the continued occupation of Iraq will decrease the risk of terrorism.” For Scalia, the beliefs one forms on the basis of that sort of evidence are “inevitably . . . based in large part upon policy views.” It follows that “of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.” “I am not saying,” Justice Scalia stresses, “that the District Judges rendered their factual findings in bad faith.” “I am saying that it is impossible for judges to make ‘factual findings’ without inserting their own policy judgments, when the factual findings are policy judgments.”
In effect, Scalia is telling us to wise up, not to be snookered by the Court. Sure, people claim that their “policy positions” on matters such as crime control, fiscal policy, and national security are based on empirical evidence. But we all know that things are in fact the other way around: what one makes of empirical evidence is “inevitably” and “necessarily based . . . upon policy views.” At one point, Scalia describes the district court judges as having “dress[ed]-up” their “policy judgments” as “factual findings.” But those judges weren’t, in his mind, doing anything different from what anyone “inevitably” does when making “broad empirical predictions”: those sorts of “factual findings are policy judgments.” Empirical evidence on the consequences of public policy should be directed to “legislators and executive officials” rather than “the Third Branch,” Scalia insists. The reason, though, isn’t that the former are better situated to draw reliable inferences from the best available data. On the contrary, it is that it is a conceit to think that reliable inferences can possibly be drawn from empirical evidence on policy consequences—and so “of course” it is the “policy preferences” of the majority, rather than those of unelected judges, that should control.
It is hard to say what is more extraordinary: the substance of Scalia’s position or the knowing tone with which he invites us to credit it. One might think it would be shocking to see a Justice of the Supreme Court so brazenly deny the intention (capacity even) of democratically accountable officials to make rational use of science to promote the common good. But Scalia could not expect his logic to persuade unless he anticipated that readers would readily concur (“of course”) that empirical arguments in policy debate are a kind of charade.
Scalia, of course, had good reason to expect such assent. His argument reflects the perspective of someone inside the cognitively illiberal state—who senses that motivated reasoning is shaping everyone else’s perceptions, and who accepts that it must also be shaping his, even if at any given moment he is unaware of its influence. We have all experienced this frame of mind. The critical question, though, is whether we really believe that what we are experiencing when we feel this way is inevitable and normal—a style of collective engagement with empirical evidence that should in fact be treated as normative, as Scalia asserts, for the performance of our institutions. I don’t think that we do . . . .
Will people who are culturally predisposed to reject human-caused climate change *believe* "97% consensus" social marketing campaign messages? Nope.
I’ve done a couple of posts recently on the latest CCP/APPC study on climate-science literacy.
The goal of the study was to contribute to development of a successor to “OCSI_1.0,” the “Ordinary Climate Science Intelligence” assessment (Kahan 2015). Like OCSI_1.0, OCSI_2.0 is intended to disentangle what ordinary members of the public “know” about climate science from their identity-expressive cultural predispositions, which is what items relating to “belief” in human-caused climate change measure.
In previous posts, I shared data, first, on the relationship between perceptions of scientific consensus, partisanship, and science comprehension; and second on the specific beliefs that members of the public, regardless of partisanship, hold about what climate scientists have established.
As pointed out in the last post, people with opposing cultural outlooks overwhelmingly agree about what “climate scientists think” on numerous specific propositions relating to the causes and consequences of human-caused climate change.
E.g., ordinary Americans—“liberal” and “conservative”—overwhelmingly agree that “climate scientists” have concluded that “human-caused global warming will result in flooding of many coastal regions.” True enough.
But they also agree, overwhelmingly, that climate scientists have concluded that “the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will increase the risk of skin cancer in human beings” and stifle “photosynthesis by plants.” Um, no.
These responses suggest that ordinary members of the public (again, regardless of their political orientation and regardless of whether they “believe” in climate change) get the basic gist of the weight of the evidence on human-caused global warming—viz., that our situation is dire—but have a pretty weak grasp of the details.
These items are patterned on science-literacy ones used to unconfound knowledge of evolutionary science from the identity-expressive answers people give to survey items on “belief” in human evolution. By attributing propositions to “climate scientists,” these questions don’t connote the sort of personal assent or agreement implied by “climate change belief items.”
Such questions thus avoid forcing respondents to choose between revealing what they “know” and expressing “who they are” as members of cultural groups whose identity is associated with pro- or con- attitudes toward assertions that human-caused climate change is putting society at risk.
The question “is there scientific consensus on climate change,” in contrast, doesn’t avoid forcing respondents to choose between revealing what they know and expressing who they are.
That’s because being perceived to hold beliefs at odds with the best available scientific evidence marks one out as an idiot. The accusation that one’s cultural group (defined in terms of political outlooks, religiosity, etc.) is “anti-science” is a familiar idiom in the discourse of contempt, and a profound insult.
Thus, for someone who holds a cultural identity expressed by climate skepticism, a survey item equivalent to “true or false—there’s expert scientific consensus that human beings are causing global warming” is tantamount to the statement “well, you and everyone you respect are genuine morons—isn’t that so?”
People with that identity predictably answer no, there isn’t scientific consensus on global warming—because that question, unlike more particular ones relating to what “climate scientists believe,” measures who they are, not what they know (or think they know) about science’s understanding of the impact of human activity on climate change.
Messaging "scientific consensus" actually reinforces the partisan branding of positions on climate change, and thus frustrates efforts to promote public engagement with the best available evidence on how climate change is threatening their well-being.
Or that’s how I understood the best available evidence before conducting this study.
But maybe I’m wrong. If I am, I’d want to know that; and I’d want others to know it, too, particularly insofar as I’ve made my findings in the past known and have reason to think that people making practical decisions—important ones—might well be relying on them.
So in addition to collecting data on what people “believe” about human-caused global warming and on what they perceive climate scientists to believe, we showed study subjects (members of a large, nationally representative sample) an example of the kind of materials featured in “97% consensus” social-marketing campaigns.
Specifically, we showed them this graphic, which was prepared for the AAAS by researchers who advised them that disseminating it would help to “increase acceptance of human caused climate change.”
We then simply asked those who had been shown the AAAS message “do you believe the statement '97% of climate scientists have concluded that human activity is causing global climate change' ”?
Overall, only 55% of the subjects said “yes.”
That would be a great showing for a candidate in the New Hampshire presidential primary. But my guess is that AAAS, the nation’s premier membership association for scientists, would not be very happy to learn that 45% of those who were told what the organization has to say about the weight of scientific opinion on one of the most consequential science issues of our day indicated that they thought AAAS wasn't giving them the straight facts.
What’s more, we know that the percentage of people who already believe in human-caused climate change is about 55%, and that the issue is one characterized by extreme political polarization.
So it's pretty obvious that if one is genuinely trying to gauge the potential effectiveness of this “messaging strategy,” one should assess what impact it will have on people whose political outlooks predispose them not to believe in human-caused climate change.
Here’s the answer:
Basically, the more conservative a person is, the less likely that individual is to believe the AAAS's magical "science communication" pie chart.
Unsurprisingly, this resistance to accepting the AAAS “message” is most intense among white male conservatives, the group in which denial of climate change is strongest (McCright & Dunlap 2012).
Or really just to make things simple, the only people inclined to believe the science communication being "socially marketed" in this way are those who are already inclined to believe (and almost certainly already do believe) in human-caused climate change.
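The subgroup tabulation underlying this kind of conclusion can be sketched in a few lines of Python. The respondent records, group labels, and field names below are hypothetical illustrations, not the study's actual data or variables:

```python
# Sketch: proportion of respondents accepting the "97% consensus" message,
# broken out by political outlook (hypothetical data for illustration).
from collections import defaultdict

respondents = [
    {"ideology": "liberal",      "believes_message": True},
    {"ideology": "liberal",      "believes_message": True},
    {"ideology": "moderate",     "believes_message": True},
    {"ideology": "moderate",     "believes_message": False},
    {"ideology": "conservative", "believes_message": False},
    {"ideology": "conservative", "believes_message": False},
]

def belief_rate_by_group(rows):
    """Proportion answering 'yes' to the consensus item, per ideology group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [yes count, total count]
    for r in rows:
        counts[r["ideology"]][0] += r["believes_message"]
        counts[r["ideology"]][1] += 1
    return {g: yes / total for g, (yes, total) in counts.items()}

print(belief_rate_by_group(respondents))
# → {'liberal': 1.0, 'moderate': 0.5, 'conservative': 0.0}
```

The point of conditioning on ideology, as the post argues, is that an overall acceptance rate (55% here, in the study) conflates people who already believed the message with those the campaign is actually trying to reach.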
Could this really be a surprise? By now, nearly a decade after the first $300 million "consensus" marketing campaign, those who reject climate change are surely very experienced at discounting the credibility of those who are "marketing" this "message."
Now, remember, these are the same respondents who, regardless of their political outlooks, overwhelmingly agree with propositions attributing to “climate scientists” all manner of dire prediction, true or false, about the impact of human-caused climate change.
There's a straightforward explanation for these opposing reactions.
People understand agreeing with fine-grained, particular test items to convey their familiarity with what climate scientists are saying.
They understand accepting “97% consensus messaging” as assenting to the charge that they and others who share their cultural identity are cretins, morons—socially incompetent actors worthy of ridicule.
Far from promoting acceptance of scientific consensus by persons with this identity, the contempt exuded by this form of "messaging" reinforces the resonances that make climate skepticism such a potent symbol of commitment to their group.
It’s patently ridiculous to think that “97% messaging” will change the minds of rather than antagonize these individuals, who make up the bulk of the climate-skeptical population.
Indeed, the probability that a conservative Republican who rejects human-caused climate change will believe the AAAS message is lower than the probability that he or she will already believe that there’s scientific consensus on climate change.
This “message” was one designed by social marketers who produced research that they characterize as showing that 97% consensus messaging “increased belief in climate change” in a U.S. general population sample.
Except that’s not what the researchers’ studies found. The "97% message" increased study subjects' estimates of the precise numerical percentage of climate scientists who subscribe to the consensus position. But the researchers did not find an increase in the proportion of study subjects who said they themselves "believe" human activity is causing climate change.
Empirical research is indeed essential to promoting constructive public engagement with scientific consensus on climate change.
But studies can do that only if researchers report all of their findings, and describe their results in a straightforward and non-misleading way.
When, in contrast, science communication researchers treat their own studies as a form of “messaging,” they only mislead and confuse people who need their help.
McCright, A.M. & Dunlap, R.E. Bringing ideology in: the conservative white male effect on worry about environmental problems in the USA. J Risk Res, 1-16 (2012).
C'mon down! Let's talk about culture, rationality & the tragedy of the #scicomm commons today at Mizzou
If you can't make it, this will probably give you a decent idea of what I'm thinking of saying.
"They already got the memo" part 2: More data on the *public consensus* on what "climate scientists think" about human-caused global warming
Yesterday I shared some data on the extent to which ordinary members of the public are politically polarized both on human-caused global warming and on the nature of scientific consensus relating to the same.
I said I was surprised b/c there was less division over whether “expert climate scientists” agree that human behavior is causing the earth’s temperature to rise.
Because Americans-- particularly those who display the greatest proficiency in science comprehension-- are less likely to disagree on whether there's scientific consensus than on whether human beings are causing global warming, it's not very compelling to think that confusion about the former is the "cause" of division over the latter.
But there is still a huge amount of polarization on whether there is scientific consensus on human-caused climate change.
Answers to these two questions -- are humans causing climate change? do scientists believe that? -- are still most plausibly viewed as being caused by a single, unobserved or latent disposition: namely, a general pro- or con- affective orientation toward "climate change" that reflects the social meaning positions on this issue have within a person's identity-defining affinity groups.
Or in other words, the questions "is human climate change happening" and "is there scientific consensus on human-caused climate change" both measure who a person is, politically speaking.
That's a different thing from what members of the public know about climate science. To measure that requires a valid climate-science comprehension instrument.
The study in which we collected these data was a follow up of an earlier CCP-APPC one that featured the “Ordinary Climate Science Intelligence” assessment, or OCSI_1.0.
The goal of OCSI_1.0 was to disentangle the measurement of “who people are”—the responses toward climate change that evince the affective stance toward climate change characteristic of their cultural group—from “what they know” about climate science.
The current study is part of the effort to develop OCSI_2.0, the aim of which is to discern differences across a larger portion of the range of knowledge levels within the general population.
Here is how 600 subjects (U.S. adults drawn from a nationally representative panel) responded to some of the OCSI_2.0 candidate items.
For me, these are the key points:
First, there’s barely any partisan disagreement over what climate scientists believe about the specific causes and consequences of human-caused climate change.
Sure, there’s some daylight between the response of the left-leaning and right-leaning respondents. But the differences are trivial compared to the ones in these same respondents’ beliefs about both the existence of climate change and the nature of scientific consensus.
There is “bipartisan” public consensus in perceptions of what climate scientists “know,” with minor differences only in the intensity with which respondents of opposing outlooks hold those particular impressions.
Second, ordinary members of the public, regardless of what they "believe" about human-caused climate change, know pitifully little about the basic causes and consequences of global warming.
Yes, a substantial majority of respondents, of diverse political views, know that climate scientists understand fossil-fuel CO2 emissions to be warming the planet, and that climate scientists expect rising temperatures to result in flooding in many regions.
But they also mistakenly believe that, “according to climate scientists, the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will increase the risk of leukemia” and “skin cancer in human beings,” and “reduce photosynthesis by plants.”
They think, incorrectly, that climate scientists have determined that “a warmer climate over the next few decades will increase water evaporation, which will lead to an overall decrease of global sea levels.”
“Republican” and “Democrat” alike also mistakenly attribute to “climate scientists” the proposition that “human-caused global warming has increased the number of tornadoes in recent decades,” a claim that Bill Nye “the science guy” believes but that actual climate scientists don’t, and in fact regularly criticize advocates for leaping to assert every time a tornado kills dozens of people in one of the plains states.
Third, the overwhelming majority of ordinary citizens, regardless of their political persuasions, agree that climate scientists have concluded that global warming is putting human beings in grave danger.
The candidate OCSI_2.0 items (only a portion of which are featured here) form two scales.
When one counts up the number of correct responses, OCSI_2.0 measures how much people genuinely know about the basic causes and consequences of human-caused global warming.
Alternatively, when one counts up the number of responses, correct or incorrect, that evince a perception of the risks that human-caused climate change poses, OCSI_2.0 measures how dreaded climate change is as a societal risk.
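The two scoring rules can be sketched as follows. The item names, answer keys, and "dread" codings here are hypothetical stand-ins (the actual OCSI item bank isn't reproduced in this post); the logic is simply that the same responses are tallied twice, once against correctness and once against risk perception:

```python
# Sketch of the two OCSI-style scoring rules described above.
# For each (hypothetical) item: the scientifically correct answer, and the
# answer that evinces a perception of climate change as risky ("dread"),
# regardless of whether that answer is correct.
ITEMS = {
    "co2_warms_planet":       {"correct": True,  "dread_answer": True},
    "coastal_flooding":       {"correct": True,  "dread_answer": True},
    "skin_cancer":            {"correct": False, "dread_answer": True},
    "reduced_photosynthesis": {"correct": False, "dread_answer": True},
    "sea_level_decrease":     {"correct": False, "dread_answer": False},
}

def score(responses):
    """Return (knowledge, dreadedness) for one respondent's True/False answers.

    knowledge   = number of items answered correctly
    dreadedness = number of risk-evincing answers, correct or not
    """
    knowledge = sum(responses[k] == v["correct"] for k, v in ITEMS.items())
    dread = sum(responses[k] == v["dread_answer"] for k, v in ITEMS.items())
    return knowledge, dread

# A typical respondent per the post: affirms every dire-sounding claim,
# true or false (including the incorrect sea-level-decrease item).
typical = {k: True for k in ITEMS}
print(score(typical))  # → (2, 4): low knowledge, high dreadedness
```

This mirrors the pattern the post describes: a respondent who affirms every dire claim scores high on dreadedness but low on knowledge, because several of the dire-sounding items are false.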
No matter what they “believe” about human-caused climate change, very few people do well on the first, knowledge-based scale.
And no matter what they “believe” about human-caused climate change, the vast majority of them score extremely high on the second, dreadedness scale.
None of this should come as a surprise. This is exactly the state of affairs revealed by OCSI_1.0.
Now in fact, one might think it’s perfectly fine that ordinary citizens score higher on the “climate change dreadedness” scale than on the “climate change science comprehension” one. Ordinary citizens need to know only the essential gist of what climate scientists are telling them: that global warming poses serious risks to things of value to them, including the health and prosperity of themselves and others. It’s those whom ordinary citizens charge with crafting effective solutions (ones consistent with the democratic aggregation of diverse citizens' values) who have to get all the details straight.
The problem though is that democratic political discourse over climate change (in most but not all places) doesn’t measure either what ordinary people know or what they feel about climate change.
It measures what the item on “belief in” climate change does: who they are, whose side they are on, in an ugly, pointless, cultural status competition being orchestrated by professional conflict entrepreneurs.
The “science communication problem” for climate change is how to steer the national discussion away from the myriad actors-- all of them--whose style of discourse creates these antagonistic social meanings.
“97% consensus” social marketing campaigns (studies with only partially and misleadingly reported results notwithstanding) aren’t telling ordinary Americans on either side of the “climate change debate” anything they haven't already heard & indeed accepted: that climate scientists believe human-caused global warming is putting them in a position of extreme peril.
All the "social marketing" of "scientific consensus" does is augment the toxic idioms of contempt that are poisoning our science communication environment.
The unmistakable social meaning of the material featuring this "message" (not to mention the cultural conflict bottom-feeders who make a living "debating" this issue on talk shows) is that "you and people who share your identity are morons." It's not "science communication"; it's a clownish bumper sticker that says, "fuck you."
It is precisely because of the assaultive, culturally partisan resonances that this "message" conveys that people respond to the question "is there scientific consensus on global warming?" by expressing who they are rather than what they know about climate change risks.
More on that “tomorrow.”
As their science comprehension increases, do members of the public (a) become more likely to recognize scientific consensus exists on human-caused climate change; (b) become more politically polarized on whether human-caused climate change is happening; or (c) both?!
The study is a follow up to an earlier CCP/APPC study, which investigated whether it is possible to disentangle what people know about climate science from who they are.
“Beliefs” about human-caused global warming are an expression of the latter, and are in fact wholly unconnected to the former. People who say they “don’t believe” in human-caused climate change are as likely (which is to say, extremely likely) to know that human-generated CO2 warms the earth’s atmosphere as are those who say they do “believe in” human-caused climate change.
They are also both as likely-- which is to say again, extremely likely--to harbor comically absurd misunderstandings of climate science: e.g., that human-generated CO2 emissions stifle photosynthesis in plants, and that human-caused global warming is expected to cause epidemics of skin cancer.
In other words, no matter what they say they “believe” about climate change, most Americans don’t really know anything about the rudiments of climate science. They just know -- pretty much every last one of them--that climate scientists believe we are screwed.
The small fraction of those who do know a lot—who can consistently identify what the best available evidence suggests about the causes and consequences of human-caused climate change—are also the most polarized in their professed “beliefs” about climate change.
The central goal of this study was to see what “belief in scientific consensus” measures—to see how it relates to both knowledge of climate science and cultural identity.
I’ll get to what we learned about that "tomorrow."
But today I want to show everybody something else that surprised the bejeebers out of me.
Usually when I & my collaborators do a study, we try to pit two plausible but mutually inconsistent hypotheses against each other. I might expect one to be more likely than the other, but I don’t expect anyone including myself to be really “surprised” by the study outcome, no matter what it is.
Many more things are plausible than are true, and in my view, extricating the latter from the sea of the former—lest we drown in "just so" stories—is the primary mission of empirical studies.
But still, now and then I get whapped in the face by something I really didn’t see coming!
This finding is like that.
But to set it up, here's a related finding that's interesting but not totally shocking.
It’s that the association between identity and perceptions of scientific consensus on climate change, while plenty strong, is not as strong as the association between identity and “beliefs” in human-caused climate change.
This means that “left-leaning” individuals—the ones predisposed to believe in human-caused climate change—are more likely to believe in human caused climate change than to believe there is scientific consensus, while the right-leaning ones—the ones who are predisposed to be skeptical—are more likely to believe that there is scientific consensus that humans are causing climate change than to actually “believe in” it themselves.
Interesting, but still not mind-blowing.
Here’s the truly shocking part:
First, as science comprehension goes up, people become more polarized on climate change.
Still not surprising; that’s old, old, old, old news.
But second, as science comprehension goes up, so does the perception that there is scientific consensus on climate change—no matter what people’s political outlooks are!
Accordingly, as relatively “right-leaning” individuals become progressively more proficient in making sense of scientific information (a facility reflected in their scores on the Ordinary Science Intelligence assessment, which puts a heavy emphasis on critical reasoning skills), they become simultaneously more likely to believe there is “scientific consensus” on human-caused climate change but less likely to “believe” in it themselves!
Whoa!!! What gives??
One thing that is clear from these data is that it’s ridiculous to claim that “unfamiliarity” with scientific consensus on climate change “causes” non-acceptance of human-caused global warming.
But that shouldn’t surprise anyone. The idea that public conflict over climate change persists because, even after years and years of “consensus messaging” (including a $300 million social-marketing campaign by Al Gore’s “Alliance for Climate Protection”), ordinary Americans still just “haven’t heard” yet that an overwhelming majority of climate scientists believe in AGW is patently absurd.
(Are you under the impression that there are studies showing that telling someone who doesn't believe in climate change that “97% of scientists accept AGW” will cause him or her to change positions? No study has ever found that, at least with a US general public sample. All the studies in question show -- once the mystifying cloud of meaningless path models & 0-100 "certainty level" measures has been dispelled-- is that immediately after being told that “97% of climate scientists believe in human-caused climate change,” study subjects will compliantly spit back a higher estimate of the percentage of climate scientists who accept AGW. You wouldn't know it from reading the published papers, but the experiments actually didn’t find that the “message” changed the proportion of subjects who said they “believe in" human caused climate change....)
These new data, though, show that acceptance of “scientific consensus” in fact has a weaker relationship to beliefs in climate change in right-leaning members of the public than it does in left-leaning ones.
That I just didn’t see coming.
I can come up w/ various “explanations,” but really, I don’t know what to make of this!
Actually, in any good study the ratio of “weird new puzzles created” to “existing puzzles (provisionally) solved” is always about 5:1.
That’s great, because it would be really boring to run out of things to puzzle over.
And it should go without saying that learning the truth and conveying it (all of it) accurately are the only way to enable free, reasoning people to use science to improve their lives.