Got back to New Haven CT Wed. for the first time since Jan. to give a lecture to cognitive science program undergraduates.
The lecture (slides here) was on the Science of Science Communication. I figured the best way to explain what it was was just to do it. So I decided to present data on three cool things:
1. MS2R (aka, "motivated system 2 reasoning").
Contrary to what many decision science expositors assume, identity-protective cognition is not attributable to overreliance on heuristic, "System 1" reasoning. On the contrary, studies using a variety of measures and both observational and experimental methods support the conclusion that the effortful, conscious reasoning associated with "System 2" processing magnifies the disposition to selectively credit and dismiss evidence in patterns that bring one's assessments of contested societal risks into alignment with those of others with whom one shares important group ties.
Why? Because it's rational to process information this way: the stake ordinary individuals have in forming beliefs that convincingly evince their group commitments is bigger than the stake they have in forming "correct" understandings of facts on risks that nothing they personally do--as consumers, voters, tireless advocates in blog post comment sections, etc.--will materially affect.
If you want to fix that--and you should; when everyone processes information this way, citizens in a diverse democratic society are less likely to converge on valid scientific evidence essential to their common welfare--then you have to eliminate the antagonistic social meanings that turn positions on disputed issues of fact into badges of group membership and loyalty.
2. The science communication measurement problem
There are several.
One is, What does belief in "human caused climate change" measure?
The answer is, Not what you know but who you are.
A second is, How can we measure what people know about climate change independently of who they are?
The final one is, How can we unconfound identity and knowledge from what politics measures when culturally diverse citizens address the issue of climate change?
The answer is ... you tell me, and I'll measure.
3. Identity-protective reasoning and professional judgment
Not according to an experimental study by the Cultural Cognition Project, which found that judges, although as culturally divided as members of the public on the risks posed by climate change, the dangers of legalizing marijuana, etc., nevertheless converged on the answers to statutory interpretation problems that generated intense motivated-reasoning effects among members of the public.
Lawyers also seemed largely immune to identity-protective reasoning in the experiment, while law students seemed to be affected to an intermediate degree.
The result was consistent with the hypothesis that professional judgment--habits of mind that enable and motivate recognition of considerations relevant to making expert determinations--largely displaces identity-protective cognition when specialists are making in-domain determinations.
Combined with other studies showing how readily members of the public will display identity-protective reasoning when assessing culturally contested facts, the study suggests that judges are likely more "neutral" than citizens perceive them to be.
But precisely because citizens lack the professional habits of mind that make the neutrality of such decisions apparent to them, the law will have a "neutrality communication problem" akin to the "science communication problem" that scientists have in communicating valid science to private citizens who lack the professional judgment to recognize the same.
I've uploaded the dataset, along with codebook, for the data featured in Kahan, D.M. "Ordinary Science Intelligence": A Science Comprehension Measure for Study of Risk and Science Communication, with Notes on Evolution and Climate Change. J. Risk Res. (in press). Enjoy!
Speak of the devil -- it's hit the newsstands! Get your copy now before they sell out!
Weekend update: modeling the impact of the "according to climate scientists prefix" on identity-expressive vs. science-knowledge revealing responses to climate science literacy items
Basically, the question is what to make of the respondents at the very highest levels of Ordinary Science Intelligence.
When the prefix "according to climate scientists" is appended to the items, those individuals are the most likely to get the "correct" response, regardless of their political outlooks. That's clear enough.
It's also bright & clear that when the prefix is removed, subjects at all levels of OSI are more disposed to select the identity-expressive answer, whether right or wrong.
What's more, those highest in OSI seem even more disposed to select the identity-expressive "wrong" answer than those of more modest ability. Insofar as they are the ones most capable of getting the right answer when the prefix is appended, they necessarily evince the strongest tendency to substitute the incorrect, identity-expressive response for the correct, science-knowledge-evincing one when the prefix is removed.
But are those who are at the very tippy top of the OSI hierarchy resisting the impulse (or the consciously perceived opportunity) to respond in an identity-protective manner--by selecting the incorrect but ideologically congenial answer--when the prefix is removed? Is that what the little upward curls mean at the far right end of the dashed line for right-leaning subjects in "flooding" and for left-leaning ones in "nuclear"?
Well, one way to try to sort this out is by modeling the data.
The locally weighted regression just tells us the mean probabilities of "correct" answers at tiny little increments of OSI. A logistic regression model can show us how the precision of the estimated means--the information we need to try to ferret out signal from noise--is affected by the number of observations, which necessarily gets smaller as one approaches the upper end of the Ordinary Science Intelligence scale.
This one plots the predicted probability of correctly answering the items with and without the prefix for subjects with the specified political orientations as their OSI scores increase:
This one illustrates, again in relation to OSI, how much more likely someone is to select the incorrect, identity-expressive response for the no-prefix version than he or she is to select the incorrect response for the prefix version:
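For concreteness, here is a minimal sketch of the kind of fully interacted logistic model such plots summarize. It uses simulated data and hypothetical variable names (`osi`, `right`, `prefix`, `correct`), not the actual CCP/APPC dataset or its codebook variables:

```python
# Sketch only: simulated data with hypothetical variable names, not
# the actual CCP/APPC dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "osi": rng.normal(size=n),       # standardized science-intelligence score
    "right": rng.integers(0, 2, n),  # 1 = right-leaning, 0 = left-leaning
    "prefix": rng.integers(0, 2, n), # 1 = "according to climate scientists" version
})
# Simulate the pattern described in the post: OSI helps on the prefix
# version; the no-prefix version elicits identity-expressive answers.
lp = (-0.5
      + 1.5 * df.osi * df.prefix
      - 1.0 * df.right * (1 - df.prefix)
      - 0.5 * df.osi * df.right * (1 - df.prefix))
df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-lp))).astype(int)

# Fully interacted logit: correctness as a function of OSI,
# political orientation, and item version, with all interactions.
m = smf.logit("correct ~ osi * C(right) * C(prefix)", data=df).fit(disp=0)

# Predicted probability of a correct answer across the OSI range for
# each orientation/version cell, the quantity the plots display.
grid = pd.DataFrame(
    [(o, r, p) for o in np.linspace(-2, 2, 9) for r in (0, 1) for p in (0, 1)],
    columns=["osi", "right", "prefix"])
grid["p_correct"] = m.predict(grid)
print(grid.head())
```

The standard errors from such a fit are what make the thinning of observations at the top of the OSI scale visible: the confidence band around the predicted probabilities widens where the data run out.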
The graphic shows us just how much the confounding of identity and knowledge in a survey item can distort measurement of how likely an individual is to know climate-science propositions that run contrary to his or her ideological predisposition on global warming.
I think the results are ... interesting.
What do you think?
To avoid discussion forking (the second leading cause of microcephaly in the Netherlands Antilles), I'm closing off comments here. Say your piece in the thread for "yesterday's" post.
Toggling the switch between cognitive engagement with "America's two climate changes"--not so hard in *the lab*
So I had a blast last night talking about “America’s 2 climate changes” at the 14th Annual “Climate Prediction Applications Workshop,” hosted by NOAA’s National Weather Service Climate Services Branch, in Burlington, Vermont (slides here).
It’s really great when after a 45-minute talk (delivered in a record-breaking 75 mins) a science-communication professional stands up & crystallizes your remarks in a 15-second summary that makes even you form a clearer view of what you are trying to say! Thanks, David Herring!
In sum, the “2 climate changes” thesis is that there are two ways in which people engage information about climate change in America: to express who they are as members of groups for whom opposing positions on the issue are badges of membership in one or another competing cultural group; and to make sense of scientific information that is relevant to doing things of practical importance—from being a successful farmer to protecting their communities from threats to vital natural resources to exploiting distinctive commercial opportunities—that are affected by how climate is changing as a result of the influence of humans on the environment.
I went through various sorts of evidence—including what Kentucky Farmer has to say about “believing in climate change” when he is in his living room versus when he is on his tractor.
Also the inspired leadership in Southeast Florida, which has managed to ban conversation of the “climate change” that puts the question “who are you, whose side are you on?” in order to enable conversation of the “climate change” which asks “what do we know, what should we do?”
But I also featured some experimental data that helped to show how one can elicit one or the other climate change in ordinary study respondents.
The data came from the study (mentioned a few times in previous entries) that CCP and the Annenberg Public Policy Center conducted to refine the Ordinary Climate Science Intelligence assessment (“OSI_1.0”).
OSI_1.0 used a trick from the study of public comprehension of evolutionary science to “unconfound” the measurement of “knowledge” and “identity.”
It’s well established that there is no correlation between the answers survey respondents give to questions about their belief in (acceptance of) human evolution and what they understand about science in general or evolutionary science in particular. No matter how much or little individuals understand about science’s account of the natural history of human beings, those who have a cultural identity that features religiosity answer “false” to the statement “human beings evolved from an earlier species of animals,” and those who have a cultural identity that doesn’t feature religiosity answer “true.”
But things change when one adds the prefix “according to the theory of evolution” to the standard true-false survey item:
At that point, religious individuals who manifest their identity-expressive disbelief in evolution by answering “false” can now reveal they are in fact familiar with science’s account of the natural history of human beings (even if they, like the vast majority of those who answer “true” with or without the prefix, couldn’t pass a high school biology exam that tested their comprehension of the modern synthesis).
What people say they “believe” about climate change (at least if they are members of the general public in the US) is likewise an expression of who they are, not what they know.
That is, responses to recognizable climate-change survey items—“is it happening,” “are humans causing it,” “are we all going to die,” “what’s the risk on a scale of 0-10,” etc.— are all simply indicators of a latent cultural disposition. The disposition is easily enough measured with right-left political orientation measures, but cultural worldviews are even better and no doubt plenty of other things (even religiosity) work too.
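The “indicators of a latent disposition” idea can be illustrated with a toy sketch. Everything here is simulated with made-up item names (`item0` through `item3`), purely to show the mechanics: standardize several correlated indicators and average them into a composite score.

```python
# Toy illustration of treating survey items as indicators of a single
# latent disposition; the data and item names are simulated, not real.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
disposition = rng.normal(size=n)  # unobserved cultural disposition
items = pd.DataFrame({
    f"item{i}": disposition + rng.normal(scale=0.8, size=n)
    for i in range(4)             # e.g., "is it happening," "risk 0-10," ...
})

# Standardize each indicator, then average into a composite scale.
z = (items - items.mean()) / items.std()
scale = z.mean(axis=1)

# The composite tracks the latent disposition better than any single
# noisy item does.
print(scale.corr(pd.Series(disposition)))
print(items["item0"].corr(pd.Series(disposition)))
```

The same logic is why combining right-left orientation items, worldview items, and the like into a scale measures the underlying disposition more reliably than any one survey question.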
There isn’t any general correlation—positive or negative—between how much people know either about science in general or about climate-science in particular and their “belief” in human-caused climate change.
But there is an interaction between their capacity for making sense of science and their cultural predispositions. The greater a person’s proficiency in one or another science-related reasoning capacity (cognitive reflection, numeracy, etc.) the stronger the relationship between their cultural identity (“who they are”) and what they say they “believe” etc. about human-caused climate change.
Why? Presumably because people can be expected to avail themselves of all their mental acuity to form beliefs that reliably convey their membership in and commitment to the communities they depend on most for psychic and material support.
But if one wants to “unconfound” identity-expressive from knowledge-evincing responses on climate change, one can use the same trick that one uses to accomplish this objective in measuring comprehension of evolutionary science. OSI_1.0 added the clause “climate scientists believe” to its battery of true-false items on the causes and consequences of human-caused climate change. And lo and behold, individuals of opposing political orientations—and hence opposing “beliefs” about human-caused climate change—turned out to have essentially equivalent understandings of what “climate science” knows.
In general, their understandings turned out to be abysmal: the vast majority of subjects—regardless of their political outlooks or beliefs on climate change—indicated that “climate scientists believe” that human CO2 emissions stifle photosynthesis, that global warming will cause skin cancer, etc.
Only individuals at the very highest levels of science comprehension (as measured by the Ordinary Science Intelligence assessment) consistently distinguished genuine from bogus assertions about the causes and consequences of climate change. Their responses were likewise free of the polarization--even though they are the people in whom there is the greatest political division on “belief in” human-caused climate change.
But in collecting data for OSI_2.0, we decided to measure exactly how much of a difference the identity-knowledge “scientists believe” unconfounding device makes in subjects’ responses.
The impact is huge!
Here are a couple of examples of just how much a difference it makes:
Subjects of opposing political outlooks—and hence opposing “beliefs” about human-caused climate change--don't disagree about whether “human-caused global warming will result in flooding of many coastal regions” or whether “nuclear power generation contributes to global warming” when those true-false statements are introduced with the prefix “according to climate scientists” (obviously, the "nuclear" item is a lot harder--that is, people on average, regardless of political outlook, are about as likely to get it wrong as right; "flooding" is a piece of cake).
But when the prefix is removed, subjects of opposing outlooks answer the questions in an (incorrect) manner that evinces their identity-expressive views.
That prefix is all it takes to toggle the switch between an “identity-expressive” and a “science-knowledge-evincing” orientation toward the items.
All it takes to show that for ordinary members of the public there are two climate changes: one on which their beliefs express “who they are” as members of opposing cultural groups; and another on which their beliefs reflect “what they know” as people who use their reason to acquire their (imperfect in many cases) comprehension of what science knows about the impact of human behavior on climate change.
Now what’s really cool about this pairing is the opposing identity-knowledge “valences” of the items. The one on flooding shows how the “according to climate scientists” prefix unconfounds climate-science knowledge from a mistaken identity-expressive “belief” characteristic of a climate-skeptical cultural style. The item on nuclear power, in contrast, unconfounds climate-science knowledge from a mistaken identity-expressive “belief” characteristic of a climate-concerned style.
I like this because it answers the objection—one that some people reasonably raised—that adding the “scientists believe” clause to OSI_1.0 items didn't truly elicit climate-science knowledge in right-leaning subjects. The right-leaning subjects, the argument went, were attributing to climate scientists views that right-leaning subjects themselves think are contrary to scientific evidence but that they think climate scientists espouse because climate scientists are so deceitful, misinformed, etc.
I can certainly see why people might offer this explanation.
But it seems odd to me to think that right-leaning subjects would in that case make the same mistakes about climate scientists' positions (e.g., that global warming will cause skin cancer, and stifle photosynthesis) that left-leaning ones would; and even more strange that only right-leaning subjects of low to modest science comprehension would impute to climate scientists these comically misguided overstatements of risk, insofar as high science-comprehending, right-leaning subjects are the most climate skeptical & thus presumably most distrustful of "climate scientists."
Well, these data are even harder to square with this alternative account of why OSI_1.0 avoided eliciting politically polarized responses.
One could still say "well, conservatives just think climate scientists are full of shit," of course, in response to the effect of removing the prefix for the “flooding” item.
But on the “nuclear power causes climate change” item, left-leaning subjects were the ones whose responses shifted strongly in the identity-expressive direction when the “according to climate scientists” prefix was removed. Surely we aren’t supposed to think that left-leaning, climate-concerned subjects find climate scientists untrustworthy, corrupt, etc., too!
The more plausible inference is that the “according to climate scientists” prefix does exactly what it is supposed to: unconfound climate-science knowledge and cultural identity, for everyone.
Thus, if one is culturally predisposed to give climate-skeptical answers to express identity, the prefix stifles incorrect "climate science comprehension" responses that evince climate skepticism—e.g., the denial that climate change will cause flooding.
If one is culturally predisposed to give climate-concerned responses, in contrast, then the prefix stifles what would be the identity-expressive inclination to express incorrect beliefs about the contribution of human activities to climate change—e.g., that nuclear power is warming the planet.
The prefix turns everyone from who he or she is when processing information for identity protection into the person he or she is when induced to reveal whatever "science knowledge" he or she has acquired.
This inference is reinforced by considering how these responses interact with science comprehension.
As can be seen, for the "prefix" versions of the items, individuals of both left- and right-leaning orientations are progressively more likely to give correct "climate science comprehension" answers as their OSI scores increase. This makes a big difference on the “nuclear power” item, because it’s a lot harder than the “flooding” one.
Nevertheless, when the “prefix” is removed, those who are high in science comprehension (right-leaning or left-) are the most likely to get the wrong answer when the wrong answer is identity-expressive!
That’s exactly what one would expect if the prefix were functioning to suppress an identity-expressive response, since those high in OSI are the most likely to form identity-expressive beliefs as a result of motivated reasoning.
Suppressing such a response, of course, is what the “according to scientists” clause is supposed to do as an identity/science-knowledge unconfounding device.
This result is exactly the opposite of what one would expect to see, though, under the alternative, “just measuring conservative distrust of/disagreement with climate scientists” explanation of the effect of the prefix: the subjects whom such an explanation implies ought to be most likely to attribute an absurdly mistaken "climate concerned" position to climate scientists—the right-leaning subjects highest in science comprehension—were in fact the least likely to do so.
But it was definitely very informative to look more closely at this issue.
Indeed, how readily one can modify the nature of the information processing that subjects are engaging in—how easily one can switch off identity-expression and turn on knowledge-revealing—is pretty damn amazing.
Of course, this was done in the lab. The million dollar question is how to do it in the political world so that we can rid our society once and for all of illiberal, degrading, welfare-annihilating consequences of the first climate change. . . .
Some modeling of these data here.
"America's two climate changes ..." & how science communicators should/shouldn't address them ... today in Burlington, VT
Beth Garrett, President of Cornell University, died last week.
Being President of Cornell, a great university with a passionate sense of curiosity as boundless as hers, was the latest in the string of amazing things that she did in her professional life.
I met Beth when I started my clerkship for Justice Thurgood Marshall. She was ending hers, and for a couple of weeks of overlap she helped me to try to make sense of what the job would entail.
For sure she imparted some useful "how to's."
But the most important thing she conveyed was her attitude: her happy determination to figure out whatever novel, complex thing had to be understood to do the job right; her unself-conscious confidence that she could; and her excitement over the opportunity to do so.
The lesson continued when we were "baby professors" starting out at the University of Chicago Law School. Those same virtues -- the resolve to figure out whatever it was she didn't already know but needed to in order to make sense of something that perplexed her; the same confidence that she could learn whatever she had to to do that; and the same pleasure at the prospect of undertaking such a task -- characterized her style as a scholar.
These same attributes contributed, of course, to her success in mastering the new challenges she took on thereafter in her career as a university administrator, first as Provost at the University of Southern California and then as President of Cornell.
But those opportunities also came her way because of all the other excellent qualities of character she possessed. Among these was her incisive apprehension of how scholarly communities could become the very best versions of themselves, and her capacity to inspire their members to reciprocate the efforts she tirelessly (but always happily, cheerfully!) made to help them realize that aspiration.
Every person who was fortunate enough to have had some connection to Beth must now endure a disorienting sense of sadness and shock, bewilderment and resentment, at her premature death.
But after the grief retires to its proper place in the registry of their emotional-life experiences, every one of those persons will enjoy for the rest of their lives the benefit of being able to summon the inspiring and joyful example of Beth Garrett and to use their memories of her to help guide and motivate them to be the best versions of themselves.
Weekend update: Another lesson from SE Fla Climate Political Science, this one on "banning 'climate change' " from political discourse
From something I'm working on ...
The most important, and most surprising, insight we have gleaned from studying climate science communication in Southeast Florida is that there is not one “climate change” in that region but two.
The first is the “climate change” in relation to which ordinary citizens “take positions” to define and express their identities as members of opposing cultural groups (ones that largely mirror national ones but that have certain distinctive local qualities) who harbor deep-seated disagreements about the nature of the best way to live. The other “climate change” is the one that everyone in Southeast Florida, regardless of their cultural outlook, has to live with--the one that they all understand and accept poses major challenges, the surmounting of which will depend on effective collective action (Kahan 2015a).
Each “climate change” has its own conversation.
For the first, the question under discussion is “who are you, whose side are you on?” For the second, it is “what do we know, what do we do?”
In Southeast Florida, at least, the only “climate change” discussion that has been “banned” from political discourse is the first one. Silencing this polarizing style of engagement is exactly what has made it possible for the region’s politically diverse citizens to engage in the second, unifying discussion of climate change aimed at exploiting what science knows about how to protect their common interests.
This development in the region’s political culture (one that is by no means complete or irreversible) didn’t occur by accident. It was accomplished through inspired, persistent, informed leadership . . . .
"Monetary preference falsification": a thought experiment to test the validity of adding monetary incentives to politically motivated reasoning experiments
From something or other; basically an amplification of a point from Kahan (in press)
1. Monetary preference falsification
Imagine I am solicited and agree to participate in an experiment by researchers associated with the “Moon Walk Hoax Society,” which is dedicated to “exposing the massive fraud perpetrated by the U.S. government, in complicity with the United Nations, in disseminating the misimpression that human beings visited the Moon in 1969 or at any point thereafter.” These researchers present me with a "study" containing what I’m sure are bogus empirical data suggesting that a rocket the size of Apollo 11 could not have contained a sufficient amount of fuel to propel a spacecraft to the surface of the moon.
After I read the study, I am instructed that I will be asked questions about the inferences supported by the evidence I just examined and will be offered a monetary reward (one that I would actually find meaningful; I am not an M Turk worker, so it would have to be more than $0.10, but as a poor university professor, $1.50 might suffice) for “correct answers.” The questions all amount to whether the evidence presented supports the conclusion that the 1969 Moon-landing never happened.
Because I strongly suspect that the researchers believe that that is the “correct” answer, and because they’ve offered to pay me if I claim to agree, I indicate that the evidence—particularly the calculations that show a rocket loaded with as much fuel as would fit on the Apollo 11 could never have made it to the Moon—is very persuasive proof that the 1969 Moon landing for sure didn't really occur.
If a large majority of the other experiment subjects respond the way I do, can we infer from the experiment that all the "unincentivized" responses that pollsters have collected on the belief that humans visited the Moon in 1969 are survey “artifacts,” & that the appearance of widespread public acceptance of this “fact” is “illusory” (Bullock, Gerber, Hill & Huber 2015)?
As any card-carrying member of the “Chicago School of Behavioral Economics, Incentive-Compatible Design Division” will tell you, the answer is, "Hell no, you can't!"
Under these circumstances, we should anticipate that a great many subjects who didn’t find the presented evidence convincing will have said they did in order to earn money by supplying the response they anticipated the experimenters would pay them for.
Imagine further that the researchers offered the subjects the opportunity, after they completed the portion of the experiment for which they were offered incentives for “correct” answers, to indicate whether they found the evidence “credible.” Told that at this point there would be no “reward” for a “correct” answer or penalty for an incorrect one, the vast majority of the very subjects who said they thought the evidence proved that the moon landing was faked now reveal that they thought the study was a sham (Khanna & Sood 2016).
Obviously, it would be much more plausible to treat that "nonincentivized" answer as the one that finally revealed what all the respondents truly believed.
By their own logic, researchers who argue that monetary incentives can be used to test the validity of experiments on politically motivated reasoning invite exactly this response to their studies. These researchers might not have expectations as transparent or silly as those of the investigators who designed the "Moon walk hoax" public opinion study. But they are furnishing their subjects with exactly the same incentive: to make their best guess about what the experimenter will deem to be a "correct" response--not to reveal their own "true beliefs" about politically contested facts.
Studies as interesting as Khanna and Sood (2016) can substantially enrich scholarly inquiry. But seeing how requires looking past the patently unpersuasive claim that "incentive compatible methods" are suited for testing the external validity of politically motivated reasoning experiments (Bullock, Gerber, Hill & Huber 2015).
Kahan, D.M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences, (in press).
Khanna, Kabir & Sood, Gaurav. Motivated Learning or Motivated Responding? Using Incentives to Distinguish Between Two Processes (2016), available at http://www.gsood.com/research/papers/partisanlearning.pdf.
WSMD? JA! Are science-curious people just *too politically moderate* to polarize as they get better at comprehending science?
This is approximately the 9,616th episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.
Observed in data from the CCP/Annenberg Public Policy Center Science of Science Filmmaking Initiative, the property of science curiosity that has aroused so much curiosity among this site’s 14 billion regular subscribers (plus countless others) was its defiance of the “second law” of the science of science communication: motivated system 2 reasoning—also known by its catchy acronym, MS2R!
MS2R refers to the tendency of identity-protective reasoning—and as a result, cultural polarization—to grow in intensity in lock step with proficiency in the reasoning dispositions necessary to understand science. It is a pattern that has shown up time and again in the study of how people assess evidence relating to societally contested risks.
But as I showcased in the original post and reviewed "yesterday," science curiosity (measured with “SCS_1.0”) seems to break the mold: rather than amplify opposing states of belief, science curiosity exerts a uniform directional influence on perceptions of human-caused climate change and other putative risk sources in all people, regardless of their political orientations or level of science comprehension.
An intriguing, and appealing, surmise is that the appetite to learn new and surprising facts neutralizes the defensive information-processing style that identity-protective cognition comprises.
But this is really just a conjecture, one that is in desperate need of further study.
Such study, moreover, will be abetted, not thwarted, by the articulation of plausible alternative hypotheses. The best empirical studies are designed so that no matter what result they generate we’ll have more reason than we did before to credit one hypothesis relative to one or more rival ones.
In this spirit, I solicited commentators to suggest some plausible alternative explanations for the observed quality of science curiosity.
I talked about one of those "yesterday": the possibility that science curiosity might exert an apparent moderating effect only because in fact those high in science curiosity aren’t uniformly proficient enough in science comprehension to bend evidence in the direction necessary to fit positions congenial to their identities.
As I explained, I don’t think that’s true: again, the evidence in the existing dataset, which was assembled in Study 1 of the CCP/APPC “science of science filmmaking initiative,” seems to show that science curiosity moderates science comprehension’s magnification of political polarization even in those subjects who score highest in an assessment (the Ordinary Science Intelligence scale) of that particular reasoning proficiency.
But that’s just a provisional assessment, of course.
Today I take up another explanation, viz., that “science-curious” individuals might be more politically moderate than science-incurious ones.
Based on how science curiosity affected views on climate change, @AaronMatch raised the possibility that “scientifically-curious conservatives” might be “more moderate than their conservative peers.”
This would indeed be an explanation at odds with the conjecture that science curiosity stifles or counteracts identity-protective cognition.
If people who are high in science curiosity happen to adopt more moderate political stances than less curious people of comparable self-reported political orientations, then obviously increased science curiosity will not drive citizens of opposing self-reported political orientations apart. But that would be not because curiosity affects how they process information; it would be because curiosity is simply an indicator of being less intensely partisan than one might otherwise appear.
Do the data fit this surmise?
Arguably, @Aaron’s view reflects an overly “climate change centric” view of the data. Neither highly science-curious conservatives nor highly science-curious liberals seem “more moderate” than their less curious counterparts on the risks of handgun possession or unlawful entry of immigrants into the US, for example. In addition, if “moderation” for conservatives is defined as “tending toward the liberal point of view,” then higher science comprehension predicts that more strongly than higher science curiosity on the risks of legalizing marijuana and of pornography. . . .
But to really do justice to the “science-curious folks are more moderate” hypothesis, I think we’d have to see how science curiosity relates to various policy positions on which partisans tend to disagree. Then we could see if science-curious individuals do indeed adopt less extreme stances on those issues than do individuals who have the same score on “Left_right,” the scale that combines self-reported liberal-conservative ideology and political-party identification, but lower scores on SCS.
There weren’t any policy-position items in our “science of science documentary filmmaking” Study No. 1 . . . .
But of course we did collect cultural worldview data!
These can be used to do something pretty close to what I just described. The six-point “agree-disagree” CW items reflect values of fairly obvious political significance (e.g., “The government interferes far too much in our everyday lives”; “Our society would be better off if the distribution of wealth was more equal”). The “science curiosity = political moderation” thesis, then, should predict that relatively science curious individuals will be more “middling” in their cultural outlooks than individuals who are less science curious.
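To make the scale construction concrete, here is a minimal sketch of how six-point agree-disagree items of this kind might be combined into a single worldview score. The item names, responses, and reverse-coding scheme are all invented for illustration; this is not the actual CCP instrument.

```python
# Hypothetical sketch -- the item names, wording, and reverse-coding scheme
# are invented for illustration; this is not the actual CCP instrument.

def score_scale(responses, reverse_keyed, n_points=6):
    """Average a respondent's six-point agree-disagree items into one score.

    responses: dict mapping item name -> response (1..n_points)
    reverse_keyed: items worded in the opposite direction of the scale's pole
    """
    total = 0.0
    for item, r in responses.items():
        if item in reverse_keyed:
            r = (n_points + 1) - r  # flip so "high" always means the same pole
        total += r
    return total / len(responses)

# Illustrative respondent: disagrees with an egalitarian-worded item
# (reverse-keyed) and agrees with a hierarchical-worded one.
resp = {"equal_wealth": 2, "defer_authority": 5}
print(score_scale(resp, reverse_keyed={"equal_wealth"}))  # (7-2 + 5)/2 = 5.0
```

In practice such scores would then be standardized and their reliability checked, but the reverse-code-and-average step is the heart of it.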
That doesn’t seem to be true, though.
These Figures plot, separately for subjects above and below the mean on SCS (the science curiosity scale), the relationship between the study subjects’ scores on the cultural worldview scales and their scores on “Left_right,” the composite measure formed by combining their responses to a five-point liberal-conservative ideology item and a seven-point party-identification item.
If relatively science-curious subjects were more politically “moderate” than relatively incurious subjects with equivalent self-reported left-right political orientations, then we’d expect the slope for the solid lines to be steeper than the dotted ones in these Figures. They aren’t. The slopes are basically the same.
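The slope comparison behind these Figures can be sketched in a few lines: fit the worldview-versus-Left_right slope separately within the high- and low-curiosity subgroups and compare. The data below are fabricated purely to mimic the "slopes are basically the same" pattern; they are not the study data.

```python
# Minimal sketch of the subgroup slope comparison. Data are made up.

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Fake data: both subgroups share roughly the same worldview/Left_right
# gradient, as in the Figures.
left_right  = [-2.0, -1.0, 0.0, 1.0, 2.0]
cw_low_scs  = [-1.0, -0.5, 0.0, 0.5, 1.0]   # below-mean science curiosity
cw_high_scs = [-0.9, -0.6, 0.1, 0.4, 1.0]   # above-mean science curiosity

slope_low = ols_slope(left_right, cw_low_scs)
slope_high = ols_slope(left_right, cw_high_scs)
print(round(slope_low, 2), round(slope_high, 2))  # 0.5 0.48 -- nearly identical
```

If the "curious = moderate" thesis were right, the high-SCS slope would be visibly flatter than the low-SCS one; near-equal slopes cut against it.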
Here are Figures that plot the probability that a subject with any particular Left_right score will hold the cultural worldviews of an “egalitarian communitarian,” an “egalitarian individualist,” a “hierarchical communitarian,” or a “hierarchical individualist” – first for the sample overall, and then for subjects identified by their relative science curiosity.
The only noticeable difference between relatively curious and incurious subjects is how likely politically moderate ones are to be either “egalitarian individualists” or “hierarchical communitarians.”
I’m not sure what to make of this except to say that it isn’t what you’d expect to see if science-curious subjects were more politically moderate than science-incurious ones conditional on their political orientations. If that were so, then the differences in the probabilities of holding one or another combination of cultural outlooks would be concentrated at one or both extremes of the Left_right political orientation scale, not in the middle.
To make this a bit more concrete, remember that the “cultural types” most polarized on climate change are egalitarian communitarians and hierarchical individualists.
Thus, in order for the “science curious => politically moderate” thesis to explain the observed effect of science curiosity in relation to partisan views on human-caused global warming, science-curious subjects located at the extremes of the Left_right measure would have to be less likely than science-incurious ones to be members of those cultural communities.
So I think based on the data on hand that it’s unlikely the impact of science curiosity in defying the law of MS2R is attributable to a correlation between that disposition and political moderation.
But as I said, the data on hand aren’t nearly as suited for testing that hypothesis as lots of other kinds would be. So for sure I’d keep this possibility in mind in designing future studies.
BTW, for purposes of highlighting science curiosity’s defiance of MS2R, I’ve been using Left_right as the latent-disposition measure that drives identity-protective cognition. But one can see the same thing if one uses cultural worldviews for that purpose.
Take a look:
Actually, these cultural worldview data make me want to say something—along the lines of something I said before once (or twice or five thousand times), but quite a while ago; before all but maybe 3 or 4 billion of the regular readers of this blog were even born!—about the relationship between left-right measures and the cultural cognition worldview scales.
And now that I think of it, it’s related to what I said the other day about alternative measures of the dispositions that drive identity-protective cognition. . . .
But for sure, this is more than enough already for one blog post! I’ll have to come back to this “tomorrow.”
3.2 Operationalizing identity
Scholars have used diverse frameworks to measure the predispositions that inform politically motivated reasoning. Left-right political outlooks are the most common (e.g., Lodge & Taber 2013; Kahan 2013). “Cultural worldviews” are used in other studies (e.g., Bolsen, Druckman & Cook 2014; Druckman & Bolsen 2011; Kahan, Braman, Cohen, Gastil & Slovic 2010) that investigate “cultural cognition,” a theoretical operationalization of motivated reasoning directed at explaining conflict over societal risks (Kahan 2012).
The question whether politically motivated reasoning is “really” driven by “ideology” or “culture” or some other abstract basis of affinity is ill-posed. One might take the view that myriad commitments—including not only political and cultural outlooks but religiosity, race, gender, region of residence, among other things—figure in politically motivated reasoning on “certain occasions” or to “some extent.” But much better would be to recognize that none of these is the “true” source of the predispositions that inform politically motivated reasoning. Measures of “left-right” ideology, cultural worldviews, and the like are simply indicators of—imperfect, crude proxies for—a latent or unobserved shared disposition that orients information processing. Studies that use alternative predisposition constructs, then, are not testing alternative theories of “what” motivates politically motivated reasoning. They are simply employing alternative measures of whatever it is that does (Kahan, Peters et al. 2012).
The only reason there could be for preferring one scheme for operationalizing these predispositions over another is its explanatory, predictive, and prescriptive utility. One can try to explore this issue empirically, either by examining the psychometric properties of alternative latent-variable measures of motivating dispositions (Xue, Hine, Loi, Thorsteinsson, Phillips 2014) or simply by putting alternative ones to practical explanatory tests (Figure 4). But even these pragmatic criteria are unlikely to favor one predisposition measure across all contexts. The best test of whether a researcher is using the “right” construct is what she is able to do with it.
Bolsen, T., Druckman, J.N. & Cook, F.L. The influence of partisan motivated reasoning on public opinion. Polit Behav 36, 235-262 (2014).
Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).
Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (ed. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).
Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).
Kahan, D.M. Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making 8, 407-424 (2013).
Lodge, M. & Taber, C.S. The rationalizing voter (Cambridge University Press, Cambridge ; New York, 2013).
Xue, W., Hine, D.W., Loi, N.M., Thorsteinsson, E.B. & Phillips, W.J. Cultural worldviews and environmental risk perceptions: A meta-analysis. Journal of Environmental Psychology 40, 249-258 (2014).
WSMD? JA! Do science-curious people just not *know* enough about science to be "good at" identity-protective cognition?
This is approximately the 4,386th episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.
So lots of curious commentators had questions about the data I previewed on the relationship between science curiosity, science comprehension, and political polarization. They posed really good questions that reflect opposing hypotheses about the dynamics that could have produced the intriguing patterns I showcased.
I don’t have the data (sadly, but also not sadly, since now I can figure out what to collect next time) that I’d really want to have to answer their questions, test their hypotheses. But I’ve got some stuff that’s relevant and might help to focus and inform the relevant conjectures.
I’ll start, though, by just briefly rehearsing what the cool observations were that triggered the reflective theorizing in the comment thread.
Here is the key graphic:
What it shows is that science comprehension (left panel for each pair) and science curiosity (right) have different impacts on the extent of partisan disagreement over contested societal risks.
Science comprehension (here measured with the "Ordinary Science Intelligence" assessment) magnifies polarization. This is not news; this sad feature of the class of societal risks that excite cultural division (that class is limited!) is something researchers have known about for a long time.
But science curiosity doesn’t have that effect. Obviously, the respondents who are most science-curious are not converging in a dramatic way. But the pattern observed here—that science curiosity basically moves diverse respondents in the same general direction in regard to their assessment of disputed risks—suggests that individuals who are high in that particular disposition are basically processing information in a similar way.
That’s pretty radical. Because pretty much all manner of reasoning proficiency related to science comprehension does seem to be associated with greater polarization—so to find one that isn’t is startling, intriguing, encouraging & for sure something that cries out for explanation and further interrogation.
In the post, I speculated that science curiosity might be a cognitive antidote to politically motivated reasoning: in those who experience this appetite intensely, the anticipated pleasure of being surprised displaces the defensive style of information processing that people (especially those proficient in critical reasoning) employ to deflect assent to information that might challenge a belief integral to their identities as members of one or another cultural group.
But responding to my invitation, commentators helpfully offered some alternative explanations.
I think I can shed some light on a couple of those alternatives.
Not a dazzling amount of light but a flicker or two. Enough to make the outlines of this strange, intriguing thing slightly more definite than they were in the original post—but without making them nearly clear enough to extinguish the curiosity of anyone who might be impelled by the appetite for surprise to probe more deeply . . . .
Actually, there are two specific conjectures I want to consider:
1. @AndyWest: Is the impact of science curiosity in mitigating polarization confined to individuals who are low in science comprehension?
2. @AaronMatch: Are “science-curious” individuals more politically moderate than science-noncurious ones?
I’ll take up @AndyWest’s query today & return to @AaronMatch’s “tomorrow.”
* * *
So: @AndyWest suggests, in effect, that the patterns observed in the data might have nothing really to do with the effect of science curiosity on information processing but only with the effects of greater science comprehension in stimulating polarization about climate change.
Those who know more about a particular domain of contested science, such as that surrounding climate change, use that knowledge (opportunistically) to protect their identities more aggressively and completely than those who know less. That’s why increased science comprehension is associated with greater polarization.
Because science curiosity (as I indicated) is only modestly correlated with science comprehension, we wouldn’t see magnified polarization as science curiosity alone increases. Indeed, for sure we wouldn’t see it in my graphics, which illustrated the respective impact of science comprehension and science curiosity controlling for the other (i.e., setting the predictor value for the other at its mean in the sample).
But the reason we’d not be seeing magnified polarization wouldn’t be that science curiosity stifles identity-protective cognition. It would be that it simply lacks the power to enhance identity-protective reasoning associated with elements of critical reasoning that make one genuinely more proficient in making sense (or anti-sense, if that’s what protecting one’s identity requires) of scientific data.
This is for sure a very pertinent, appropriate follow-up response to the post!
I gestured toward it in my original post, actually, by saying that I had run some analyses that looked at the interaction of science comprehension and science curiosity. The aim of those analyses was to figure out if the effect of increasing science curiosity in arresting increased polarization is conditional on the level of subjects’ science comprehension. But I didn’t report those analyses.
Well, here they are:
What these loess (locally weighted regression) analyses suggest is that the impact of science curiosity is pretty much uniform at all levels of science comprehension as measured by the Ordinary Science Intelligence assessment.
There is obviously a big gap in “belief in human-caused climate change” among individuals who vary in science comprehension.
But whether someone is in the top half or the bottom half of science comprehension--indeed, whether someone is in the bottom decile or the top decile--greater science curiosity predicts a greater probability of agreeing that human beings are the principal cause of climate change, regardless of one's political outlooks.
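For readers curious about the smoothing itself, here is a bare-bones locally weighted regression of the loess variety, assuming tricube weights and a local linear fit. It is a sketch of the general technique, not the exact estimator behind the figures.

```python
# Sketch of loess-style smoothing: at each evaluation point, fit a weighted
# linear regression using tricube weights that decay with distance.

def loess_point(x0, xs, ys, bandwidth=1.0):
    """Return the locally weighted linear fit evaluated at x0."""
    w = []
    for xi in xs:
        d = abs(xi - x0) / bandwidth
        w.append((1 - d ** 3) ** 3 if d < 1 else 0.0)  # tricube kernel
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    my = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, ys))
    var = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
    b = cov / var if var > 0 else 0.0
    return my + b * (x0 - mx)

# Sanity check: on perfectly linear data the smoother recovers the line,
# e.g. a probability of belief rising steadily with curiosity.
xs = [i / 10 for i in range(11)]
ys = [0.2 + 0.5 * x for x in xs]
print(round(loess_point(0.5, xs, ys, bandwidth=0.4), 3))  # 0.45
```

The appeal of this kind of smoother for exploratory work is exactly what the post uses it for: it imposes no functional form, so any flattening or steepening of the curiosity-belief relationship at different comprehension levels would show up on its own.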
We can discipline this visual inference by modeling the data:
This logistic regression confirms that there is no meaningful interaction between science curiosity (SCS) and science comprehension (OSI_i). The coefficients for the cross-product interaction terms for science curiosity and science comprehension (OSIxSCS) and for science curiosity, science comprehension, and political outlooks (crxosixscs) are all trivially different from zero.
In other words, the impact of science curiosity in increasing the probability of belief in human-caused climate change (b = 0.31, z = 5.51) is pretty much uniform at every level of science comprehension regardless of political orientation.
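The logic of the interaction test can be illustrated with a toy version of the model. All coefficients below are invented except b_scs = 0.31, the reported curiosity effect; the point is simply that when the cross-product coefficients are approximately zero, a one-unit increase in SCS shifts the log-odds of belief by the same amount at every level of OSI, for either political type.

```python
# Toy logit with the post's cross-product terms. Coefficients other than
# b_scs = 0.31 are invented for illustration.
import math

def p_belief(scs, osi, conrep,
             b0=-0.5, b_scs=0.31, b_osi=0.2, b_cr=-0.8,
             b_osixscs=0.0, b_crxosixscs=0.0):
    """Predicted probability of belief in human-caused climate change."""
    logit = (b0 + b_scs * scs + b_osi * osi + b_cr * conrep
             + b_osixscs * osi * scs + b_crxosixscs * conrep * osi * scs)
    return 1 / (1 + math.exp(-logit))

def scs_logodds_effect(osi, conrep):
    """Change in log-odds of belief for a one-unit increase in SCS."""
    p1 = p_belief(1.0, osi, conrep)
    p0 = p_belief(0.0, osi, conrep)
    return math.log(p1 / (1 - p1)) - math.log(p0 / (1 - p0))

# With zero interactions, the curiosity effect is identical at low and high
# OSI, for a liberal Democrat (conrep = -1) and a conservative Republican (+1).
print(round(scs_logodds_effect(-1.0, 1.0), 2),
      round(scs_logodds_effect(1.0, 1.0), 2))  # -> 0.31 0.31
```

Nonzero values for the interaction coefficients would make the second number diverge from the first, which is exactly what the reported model rules out.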
Here’s a graphic representation of the regression output (one in which I’ve omitted the cross-product interaction terms, the inclusion of which would add noise but not change the inferential import of the analyses):
Again, science comprehension for sure magnifies polarization.
But at every level of science comprehension, science curiosity has the same impact (reflected in the slope of the plotted predicted probabilities): it promotes greater acceptance of human-caused climate change--among both "liberal Democrats" and "conservative Republicans."
So this is evidence, I think, that is inconsistent with @AndyWest’s surmise. It suggests the power of science curiosity--alone among science-reasoning proficiencies--to constrain magnification of polarization is not a consequence of the dearth of high science-comprehending individuals among the segment of the population that is most science curious.
On the contrary, the polarization-constraining effect of science curiosity extends even to those at the highest level of science comprehension.
@AndyWest had suggested that an analysis like this be carried out among individuals highest in “OCSI”—the “Ordinary Climate Science Intelligence” assessment. This data set doesn’t have OCSI scores in it. But I do know that there is a pretty decent positive correlation between OSI and OCSI (particularly OSI and the new OCSI_2.0, to be unveiled soon!), so it seems pretty unlikely to me that the results would be different if I had looked for an OCSI-SCS rather than an OSI-SCS interaction.
Still, I don’t think this “settles” anything really. We need more fine-grained data, as I’ve emphasized, throughout.
But this closer look at the data at hand does nothing to dispel the intriguing possibility that science curiosity might well be a disposition that negates identity-protective cognition.
More “tomorrow” on science curiosity and “political moderation.”
Incentives and politically motivated reasoning: we can learn something but only if we don't fall into the "'external validity' trap"
From revision to "The Politically Motivated Reasoning Paradigm" paper. Been meaning to address the interesting new studies on how incentives affect this form of information processing. Here's my (provisional as always) take. It owes a lot to helpful exchanges w/ Gaurav Sood, who likely disagrees with everything I say; maybe I can entice/provoke him into doing a guest post! But in any case, his curiosity & disposition to acknowledge complexity equip him both to teach & learn from others regardless of how divergent his & their "priors."
6. Monetary incentives
Experiments that reflect the PMRP design are “no stake” studies: that is, subjects answer however they “feel” like answering; the cost of a “wrong” answer and the reward for a “correct” one are both zero. In an important development, several researchers have recently reported that offering monetary incentives can reduce or eliminate polarization in the answers that subjects of diverse political outlooks give to questions of partisan import (Khanna & Sood 2016; Prior, Sood & Khanna 2015; Bullock, Gerber, Hill & Huber 2015).
The quality of these studies is uneven. The strongest, Khanna & Sood (2016), uses the PMRP design. K&S show that offering incentives reduces the tendency of high numeracy subjects to supply politically biased answers in interpreting covariance data in a gun-control experiment, a result reported in Kahan et al. (2013) and described in Section 4.
PSG and BGHH, in contrast, examine subject responses to factual quiz questions (e.g., “. . . has the level of inflation [under President Bush] increased, stayed the same, or decreased?”; “how old is John McCain?” (Bullock et al. 2015, pp. 532-33)). Because this design does not involve the evaluation of new evidence, it doesn’t show how incentives affect the signature feature of politically motivated reasoning: the opportunistic adjustment of the weight assigned to new evidence conditional on its political congeniality.
Both K&S and BGHH, moreover, use M Turk worker samples. Manifestly unsuited for the study of politically motivated reasoning generally (see Section 3.3), M Turk samples are even less appropriate for studies on the impact of incentives on this form of information processing. M Turk workers are distinguished from members of the general population by their willingness to perform various forms of internet labor for pennies per hour. They are also known to engage in deliberate misrepresentation of their identities and other characteristics to increase their on-line earnings (Chandler & Shapiro 2016). Thus, how readily they will alter their reported beliefs in anticipation of earning monetary rewards for guessing what researchers regard as “correct” answers furnishes an unreliable basis for inferring how members of the general public form beliefs outside the lab, with incentives or without them.
But assuming, as seems perfectly plausible, that studies of ordinary members of the public corroborate the compelling result reported in K&S, a genuinely interesting, and genuinely complex, question will be put: what inference should be drawn from the power of monetary incentives to counteract politically motivated reasoning?
BGHH assert that such a finding would call into doubt the external validity of politically motivated reasoning research. Attributing the polarized responses observed in “no stake” studies to the “expressive utility that [study respondents] gain from offering partisan-friendly survey responses,” BGHH conclude that the “apparent gulf in factual beliefs between members of different parties may be more illusory than real” (Bullock et al., pp. 520, 523).
One could argue, though, that BGHH have things exactly upside down. In the real world, ordinary members of the public don’t get monetary rewards for forming “correct” beliefs about politically contested factual issues. In their capacities as voters, consumers, or participants in public discussion, they don’t earn even the paltry expected-value equivalent of the lottery prizes that BGHH offered their M Turk worker subjects for getting the “right answer” to quiz questions. Right or wrong, an ordinary person’s beliefs are irrelevant in these real-world contexts, because any action she takes based on her beliefs will be too inconsequential to have any impact on policymaking.
The only material stake most ordinary people have in the content of their beliefs about policy-relevant facts is the contribution that holding them makes to the experience of being a particular sort of person. The deterrent effect of concealed-carry laws on violent crime, the contribution of human activity to global warming, the impact of minimum wage laws on unemployment—all of these are positions infused with social meanings. The beliefs a person forms about these “facts” reliably dispose her to act in ways that others will perceive to signify her identity-defining group commitments (Kahan in press_a). Failing to attend to information in a manner that generates such beliefs can have a very severe impact on her wellbeing—not because the beliefs she’d form otherwise would be factually wrong but because they would convey the wrong message about who she is and whose side she is on. The interest she has in cultivating beliefs that reliably summon an identity-expressive affective stance on such issues is what makes politically motivated reasoning rational.
No-stake PMRP designs seek to faithfully model this real-world behavior by furnishing subjects with cues that excite this affective orientation and related style of information processing. If one is trying to model the real-world behavior of ordinary people in their capacity as citizens, so-called “incentive compatible designs”—ones that offer monetary “incentives” for “correct” answers—are externally invalid because they create a reason to form “correct” beliefs that is alien to subjects’ experience in the real-world domains of interest.
On this account, expressive beliefs are what are “real” in the psychology of democratic citizens (Kahan in press_a). The answers they give in response to monetary incentives are what should be regarded as “artifactual,” “illusory” (Bullock et al., pp. 520, 523) if we are trying to draw reliable inferences about their behavior in the political world.
It would be a gross mistake, however, to conclude that studies that add monetary incentives to PMRP designs (e.g., Khanna & Sood 2016) furnish no insight into the dynamics of human decisionmaking. People are not merely democratic citizens, not only members of particular affinity groups, but also many other things, including economic actors who try to make money, professionals who exercise domain-specific expert judgments, and parents who care about the health of their children. The style of identity-expressive information processing that protects their standing as members of important affinity groups might well be completely inimical to their interests in these domains, where being wrong about consequential facts would frustrate their goals.
Understanding how individuals negotiate this tension in the opposing “stakes” they have in forming accurate beliefs and identity-expressive ones is itself a project of considerable importance for decision science. The theory of “cognitive dualism” posits that rational decisionmaking comprises a capacity to employ multiple, domain-specific styles of information processing suited to the domain-specific goals that individuals have in using information (Kahan 2015b). Thus, a doctor who is a devout Muslim might process information on evolution in an identity-expressive manner “at home”—where “disbelieving” in it enables him to be a competent member of his cultural group—but in a truth-seeking manner “at work”—where accepting evolutionary science enables him to be a competent oncologist (Everhart & Hameed 2013). Or a farmer who is a “conservative” might engage in an affective style of information processing that evinces “climate skepticism” when doing so certifies his commitment to a cultural group identified with “disbelief” in climate change, but then turn around and join the other members of that same cultural group in processing such information in a truth-seeking way that credits climate science insights essential to being a successful farmer (Rejesus et al. 2013).
If monetary incentives do meaningfully reverse identity-protective forms of information processing in studies that reflect the PMRP design, then a plausible inference would be that offering rewards for “correct answers” is a sufficient intervention to summon the truth-seeking information-processing style that (at least some) subjects use outside of domains that feature identity-expressive goals. In effect, the incentives transform subjects from identity-protectors to knowledge revealers (Kahan 2015a), and activate the corresponding shift in information-processing styles appropriate to those roles.
Whether this would be the best understanding of such results, and what the practical implications of such a conclusion would be, are also matters that merit further, sustained empirical inquiry. Such a program, however, is unlikely to advance knowledge much until scholars abandon the pretense that monetary incentives are the “gold standard” of experimental validity in decision science as opposed to simply another methodological device that can be used to test hypotheses about the interaction of diverse, domain-specific forms of information processing.
Chandler, J. & Shapiro, D. Conducting Clinical Research Using Crowdsourced Convenience Samples. Annual Review of Clinical Psychology (2016), advance on-line publication at http://www.annualreviews.org/doi/abs/10.1146/annurev-clinpsy-021815-093623.
Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).
Kahan, D.M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences (in press).
Khanna, Kabir & Sood, Gaurav. Motivated Learning or Motivated Responding? Using Incentives to Distinguish Between Two Processes (working), available at http://www.gsood.com/research/papers/partisanlearning.pdf.
Prior, M., Sood, G. & Khanna, K. You Cannot be Serious: The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions. Quarterly Journal of Political Science 10, 489-518 (2015).
Rejesus, R.M., Mutuc-Hensley, M., Mitchell, P.D., Coble, K.H. & Knight, T.O. US agricultural producer perceptions of climate change. Journal of Agricultural and Applied Economics 45, 701-718 (2013).
Weekend update: Disentanglement Principle, A Lesson from SE Fla Climate Political Science Lecture (& slides)
A presentation I gave at a meeting of the Institute for Sustainable Communities, a major partner of the Southeast Florida Regional Climate Compact. Synthesizes research CCP (with generous support from the Skoll Global Threats Foundation) has done to support Compact science communication.
If the Compact members have learned from our research even 10^-2 of what they've taught us about what "climate change" means and what it takes to have the right conversation & banish the wrong conversation about it, then I'll feel we've done something pretty important. Even more important will be to help others learn the lessons of Southeast Florida Climate Political Science . . . .
Science curiosity and identity-protective cognition ... a glimpse at a possible (negative) relationship
So here is a curious phenomenon: unlike pretty much every other science-related reasoning disposition, science curiosity seems to avoid magnifying identity-protective cognition!
Let's start with a bunch of culturally contested societal risks, ones on which political polarization can be assessed with the ever-handy Industrial Strength Risk Perception Measure:
For each risk, the paired panels chart the risk-perception impact of greater science comprehension and greater science curiosity (in each case “controlling for” the influence of the other), respectively. They estimate those effects separately, moreover, for a "liberal Democrat" and for a "conservative Republican," designations determined by reference to the study subjects' scores on a composite political ideology and party-identification scale.
As science comprehension (measured with the Ordinary Science Intelligence assessment) increases, so too does the degree of polarization on politically contested risks involving climate change, gun possession, fracking, marijuana legalization, pornography, and the like.
That’s not a surprise. The warping effect of identity-protective cognition on cognitive reflection, numeracy, science comprehension and all other manner of critical reasoning proficiency has been exhaustively chronicled, and lamented, in this blog.
But that’s not what happens as science curiosity increases. On the contrary, in all cases, greater science curiosity has the same general risk-perception impact—in some cases enhancing concern, in some blunting it, and in others having no directional effect—for study respondents of politically diverse outlooks.
Science curiosity is being measured for these purposes with the CCP/Annenberg Public Policy Center “Science Curiosity Scale,” or SCS_1.0.
SCS_1.0 was developed for use in the CCP/APPC “Evidence-based Science Filmmaking Initiative.” Previous posts have discussed the development and properties of this measure, including its ability to predict engagement with science documentaries and other forms of science information among diverse groups.
So has its relationship to other, non-science-related activities, such as taking a peek at what goes on at gun shows and even cracking open a book on religion now & again.
But this feature of SCS_1.0—its apparent ability to defy the gravitational pull of identity-protective cognition on perceptions of disputed risks—is something I didn’t anticipate. . . .
Indeed, I really don’t want to give the impression that I “get” this, it makes “perfect sense,” etc. Or even that there’s necessarily a “there” there.
An observation like this is just corroboration of the fundamental law of the conservation of perplexity, which refers to the inevitable tendency of valid empirical research to generate one new profound mystery (at least one!) for every mystery that it helps to make less perplexing (anyone who thinks “mysteries” are ever solved by empirical inquiry has a boring conception of “mystery” or, more likely, a misconception of how empirical research works).
But here are some thoughts:
1. It does in fact make sense to think of curiosity as the cognitive negation of motivated reasoning. The latter disposition consists in the unconscious impulse to conform evidence to beliefs that serve some goal (like cultural identity protection) unrelated to figuring out the truth about some uncertain factual matter. Curiosity, in contrast, is an appetite not only to know the truth but to be surprised by it: it consists in a sense of anticipated pleasure in being shown that the world works in a manner that is astonishingly different from what one had thought, and in being able to marvel at the process that made it possible for one to see that.
When one is in that state, the sentries of identity-protective cognition are necessarily standing down. The path is clear for truth to march in and enlighten . . . .
2. At the same time, these data are pretty baffling to me. No way did I expect to see this.
The affinity between identity-protective cognition and critical reasoning, I’m convinced, reflects the role the former plays in the successful negotiation of social interactions. Where positions on disputed issues of risk become entangled in social meanings that transform them into badges of membership in and loyalty to opposing cultural groups, it is perfectly rational, at the individual level, for people to adopt styles of information processing that conduce to formation of beliefs that express their tribal allegiances.
Indeed, not to attend to information in this manner would put normal people—ones whose personal beliefs about climate change or fracking or gun control don’t have any material impact on the risks they or anyone else face—in serious peril of ostracism and ridicule within their communities.
I’d essentially come to the bleak, depressing, spirit-enervating conclusion, then, that the only reasoning disposition likely to blunt the force of identity-protective cognition was a social disability in the nature of autism.
But now, for the 14 billionth time, I will have to rethink and reconsider.
Because clearly the appetite to seek out and consume information about the insights human beings have acquired through the use of science’s signature methods of disciplined observation and inference is no reasoning disability. And those who are most impelled to satisfy this appetite are clearly not using what they learn to forge even stronger links between their perceptions of how the world works and the views that express membership in their identity-defining affinity groups.
Or at least that’s one way to understand evidence like this. Pending more investigation.
3. All sorts of qualifications are in order.
a. For one thing, SCS_1.0 is a work in progress. Additional tests to refine and validate it are in the works.
b. For another, science comprehension and science curiosity are not wholly unrelated! Actually, they aren’t strongly related; in the data set from which these observations come, the correlation is about 0.3. But that's not zero!
I actually tested for “interactions” between science comprehension (as measured by OSI_2.0) and science curiosity (as measured by SCS_1.0), and between the two of them and political outlooks. The interactions were all pretty close to zero; they wouldn’t affect the basic picture I’ve shown you above (but I am happy to show more pictures—just tell me what you want to see).
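The interaction tests described here can be sketched in code. The following is purely illustrative: it fits a model with all two- and three-way interactions among science comprehension, science curiosity, and political outlook on *simulated* data (the actual survey data and variable names are not reproduced here), built so that comprehension magnifies polarization while curiosity does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
osi = rng.normal(size=n)   # science comprehension (OSI_2.0), standardized
scs = rng.normal(size=n)   # science curiosity (SCS_1.0), standardized
con = rng.normal(size=n)   # political outlook composite (higher = more conservative)

# Simulated risk perception: polarization grows with comprehension (the osi*con
# term), while curiosity shifts everyone in the same direction (no scs*con term).
risk = -0.5 * con - 0.3 * osi * con + 0.2 * scs + rng.normal(size=n)

# Design matrix with main effects and all interactions, fit by ordinary least squares.
X = np.column_stack([np.ones(n), osi, scs, con,
                     osi * scs, osi * con, scs * con, osi * scs * con])
names = ["const", "osi", "scs", "con", "osi:scs", "osi:con", "scs:con", "osi:scs:con"]
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
for name, b in zip(names, beta):
    print(f"{name:12s} {b:+.2f}")
```

In a fit like this, a sizable negative `osi:con` coefficient is the signature of comprehension-magnified polarization, while a near-zero `scs:con` coefficient corresponds to curiosity having the same directional effect across the political spectrum, which is the pattern the post describes.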
Still I don’t think the effect of science curiosity on identity-protective cognition can be made sense of without closer, more fine-grained examination of how much it alters the trajectory of polarization at different levels of science comprehension.
c. Also, the impact of science curiosity is interesting only because it doesn’t magnify polarization. It doesn’t make it go away, as far as I can tell. That’s important—for the reasons stated. But a reasoning disposition that generated convergence among individuals of diverse cultural outlooks on culturally contested risks (as science comprehension does on culturally uncontested ones) would be much more remarkable—and important.
We should be looking for that. I’d say, though, that looking even harder at curiosity might help us detect if there is such a reasoning quality—the “Ludwick factor” is the term used by those who’ve speculated on its possible existence . . .—and how it might be disseminated and stimulated.
For surely, that is a reasoning disposition that should be cultivated in the citizens of the Liberal Republic of Science.
But in the meantime, this unexpected, intriguing relationship can be contemplated by curious people with excitement and perplexity and with a desire to figure out even more about it.
So what do you think?
From something I'm working on. Any one of the 14 billion regular readers of this blog could fill in the rest. But if you are one of the 1.3 billion people who visit this site for the first time on any given day, there's more on the "'Two climate changes' thesis" here & here, among other places. . . .
America’s two “climate changes”
There are two climate changes in America: the one people “believe” or “disbelieve” in in order to express their cultural identities; and the one people ("believers" & "disbelievers" alike) acquire and use scientific knowledge about in order to make decisions of consequence, individual and collective. I will present various forms of empirical evidence—including standardized science literacy tests, lab experiments, and real-world field studies in Southeast Florida—to support the “two climate changes” thesis. I will also examine what this position implies about the forms of deliberative engagement necessary to rid the science communication environment of the toxic effects of the first climate change and to make it habitable for enlightened democratic engagement with the second.
Do science curious evolution believers and science curious nonbelievers both like to go to the science museum? How about to gun shows?
I've described highlights from the first study (a more complete report on which can be downloaded here) in some earlier posts. They include the development of a behaviorally validated "science curiosity" scale (one that itself involves performance and behavioral measures and not just self-reported-interest ones), and the successful use of that scale to predict "engagement"--measured behaviorally, and not just with self-reported interest--in the cool Tangled Bank Studios documentary on evolution, Your Inner Fish.
Stay tuned for more reports about our findings in this ongoing project.
But for now, consider these interesting findings about the power of "SCS_1.0," the science curiosity scale we constructed, to predict one or another types of behavior.
The graphic shows, not surprisingly, that those who are more science curious are way more likely to do things like read science books and attend science museums.
Probably not that surprisingly, they might be slightly more likely to do other things, too, like go to an amusement park--or even a gun show--than science-uncurious people. But they really aren't much more likely to do those things than the average member of the population.
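A "predicted mean probability conditional on science curiosity" of the kind described here can be sketched as follows. The coefficients below are hypothetical, invented for illustration (they are not the study's estimates); the point is only to show how logistic-regression output translates into activity probabilities at different curiosity levels:

```python
import numpy as np

def logistic(z):
    """Inverse-logit link: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical (intercept, SCS slope) pairs for each activity, with science
# curiosity measured in standard-deviation units. NOT the study's estimates.
activities = {
    "read science book":    (-0.5, 1.2),   # strongly curiosity-dependent
    "visit science museum": (-1.0, 1.0),
    "go to amusement park": ( 0.0, 0.2),   # only weakly curiosity-dependent
    "attend gun show":      (-2.0, 0.15),
}

for scs in (-1.0, 0.0, 1.0):   # low, average, high science curiosity
    probs = {name: logistic(b0 + b1 * scs) for name, (b0, b1) in activities.items()}
    print(f"SCS = {scs:+.0f}:", {name: round(p, 2) for name, p in probs.items()})
```

Under coefficients like these, the predicted probability of reading a science book climbs steeply with curiosity, while the gun-show and amusement-park probabilities barely move, mirroring the qualitative pattern in the graphic.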
In addition to estimating the predicted mean probabilities for these activities conditional on science curiosity for the entire sample (a large nationally representative one), I've also estimated the predicted mean probabilities for individuals who say they "do" and "don't believe in" human evolution:
One of the coolest things we found in ESFI Study No. 1 was that science curious individuals who "disbelieve in" evolution were just as engaged as science curious individuals who do believe in evolution. In addition, they were both substantially more engaged than their science-noncurious counterparts, most of whom yawned and turned the show off after a couple of minutes, no doubt hoping that the survey would resume its focus on Honey Boo Boo, "Inflate-gate," and other non-science related topics used to winnow out those less interested in science than in other interesting things.
Individuals who "disbelieve" in evolution but who were high in science curiosity also indicated that they found the information in the documentary clip valid and convincing as an account of the origins of human color vision.
Of course, that didn't "change their minds" on evolution. Their beliefs on that issue measure who they are—not what they know about science or what more they’d like to know about what human beings have discovered using science's signature methods of disciplined observation and inference. The experience of watching the cool Your Inner Fish clip satisfied their appetite to know what science knows but it didn't make them into different people!
Indeed, I think it likely succeeded in the former precisely because it didn't evince any interest in accomplishing the latter. It didn't put science curious people who have an identity associated with disbelief in evolution in the position of having to choose between being who they are and knowing what science knows.
Satisfying this criterion, which I've taken to calling the "disentanglement principle," is, I believe, a key element of successful science communication in pluralistic liberal society (Kahan 2015a, 2015b).
Anyway, check out what evolution believers & disbelievers do in their free time conditional on having the same level of science curiosity.
Many of the same things -- but not all!
I have ideas about what this means. But I'm out of time for today! So how about you tell me what you make of this?
Plata's Republic: Justice Scalia and the subversive normality of politically motivated reasoning . . . .
. . . Plata's Republic . . .
Civis: It is “fanciful,” you say, to think that three district court judges “relied solely on the credibility of the testifying expert witnesses” in finding that release of the prisoners would not harm the community?
Cognoscere Illiber: Yes, because “of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.”
Civis: “Of course” judges with “different policy views” would have formed different beliefs about the consequences if they had evaluated the same expert evidence? Why? Surely the judges, like all nonspecialists, would agree that these are matters outside their personal experience. Are you saying the judges would ignore the experts and decide on partisan grounds?
Cognoscere Illiber: No. “I am not saying that the District Judges rendered their factual findings in bad faith. I am saying that it is impossible for judges to make ‘factual findings’ without inserting their own policy judgments” on such matters. The “expert witnesses” here were of the sort trained to make “broad empirical predictions”—like whether “deficit spending will . . . lower the unemployment rate” or “the continued occupation of Iraq will decrease the risk of terrorism.”
Civis: But people normally assert that their policy positions on criminal justice, economic policy, and national security are based on empirical evidence. It almost sounds as if you are saying things are really the other way around—that what they understand the empirical evidence to show is “necessarily based in large part upon policy views.”
Cognoscere Illiber: Exactly what I am saying! Those sorts of “factual findings are policy judgments.” Thus, empirical evidence relating to the consequences of law should be directed to “legislators and executive officials”—not “the Third Branch”—since in a democracy it is the people’s “policy preferences,” not ours, that should be “dress[ed] [up] as factual findings.”
Civis: Ah. Thanks for telling me—I had been naively taking all the empirical arguments in politics at face value. Silly me! Now I see, too, that those naughty judges were just trying to exploit my gullibility about policy empiricism. Shame on them!
 Plata, 131 S. Ct. at 1954 (Scalia, J., dissenting).
 Id. at 1954-55.
 Id. at 1954.
 Id. at 1955.
* * *
Brown v. Plata was among the most consequential decisions of the 2010 Term—in multiple senses. In Plata, California attacked an order, issued by a three-judge federal district court, directing the state to release more than 40,000 inmates from its prisons. It was not disputed that California prisons had for over a decade been made to store double their intended capacity of 80,000 inmates. The stifling density of the population inside—“200 prisoners . . . liv[ing] in a gymnasium,” sleeping in shifts and “monitored by two or three guards”; “54 prisoners . . . shar[ing] a single toilet”; “50 sick inmates . . . held together in a 12- by 20-foot” cell; “suicidal inmates . . . held for prolonged periods in telephone-booth sized cages” ankle deep in their own wastes—was amply documented (with photographs, appended to the Court’s opinion, among other things). The awful effect on the prisoners’ mental and physical health was indisputable, too (“it is an uncontested fact that, on average, an inmate in one of California’s prisons needlessly dies every six to seven days”). These conditions, the district court concluded, violated the Eighth Amendment. The district court also saw that there was no prospect whatsoever that the state, having repeatedly rejected prison-expansion proposals and now in a budget crisis, would undertake the massive expenditures necessary to increase prison capacity and staffing. Accordingly, it ordered the only relief that, to it, seemed possible: the release of the number of inmates that the court deemed sufficient to bring the prisons into compliance with minimally acceptable constitutional standards.
The Supreme Court, in a five to four decision, affirmed. The major issue of contention between the majority and dissenting Justices was what consequence the ordered prisoner release would have on the public safety, a consideration to which the district court was obliged to give “substantial weight” by the Prison Litigation Reform Act of 1995. The district court devoted 10 days of the 14-day trial to receiving evidence on this issue, and concluded that use of careful screening protocols would permit the state to release the necessary number of inmates “in a manner that preserves public safety and the operation of the criminal justice system.”
The determinations underlying this finding, Justice Kennedy noted in his majority opinion, “are difficult and sensitive, but they are factual questions and should be treated as such.” The district court had “rel[ied] on relevant and informed expert testimony” by criminologists and prison officials, who based their opinion on “empirical evidence and extensive experience in the field of prison administration.” Indeed, some of that evidence, Justice Kennedy observed, had “indicated that reducing overcrowding in California’s prisons could even improve public safety” by abating prison conditions associated with recidivism. Like its other findings of fact, the district court’s determination that the State could fashion a reasonably safe release plan was not “clearly erroneous.”
The idea that the district court’s public-safety determination was a finding of “fact” entitled to deferential review caused Justice Scalia to suffer an uncharacteristic loss of composure. Deference is due factfinders because they make “determination[s] of past or present facts” based on evidence such as live eyewitness testimony, the quality of which they are “in a better position to evaluate” than are appellate judges confined to a “cold record,” he explained. The public-safety finding of the three-judge district court, in contrast, consisted of “broad empirical predictions necessarily based in large part upon policy views.” “The idea that the three District Judges in this case relied solely on the credibility of the testifying expert witnesses is fanciful,” Scalia thundered.
Justice Scalia’s reaction to the majority’s reasoning in Plata is reminiscent of Wechsler’s to the Court’s in Brown. Like Scalia, Wechsler had questioned whether the finding in question—that segregated schools “retard the educational and mental development” of African American children—could bear the decisional weight that the Court was putting on it. But whereas Wechsler had only implied that the Court was hiding its moral-judgment light under an empirical basket—“I find it hard to think the judgment really turned upon the facts [of the case]”—Scalia was unwilling to bury his policymaking accusation in a rhetorical question. “Of course they [the members of the three-judge district court] were relying largely on their own beliefs about penology and recidivism” when they found that release was consistent with—indeed, might even enhance—public safety, Scalia intoned. “And of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.” “[I]t is impossible for judges to make ‘factual findings’ without inserting their own policy judgments, when the factual findings are policy judgments.”
Justice Scalia’s dissent is also akin to the reaction to “empirical factfinding” in the Supreme Court’s abortion jurisprudence. Justice Blackmun’s majority opinion in Roe v. Wade cited “medical data” supplied by “various amici” to demonstrate that “[m]odern medical techniques” had dissolved the state’s historic interest in protecting women’s health. “[T]he now-established medical fact . . . that until the end of the first trimester mortality in abortion may be less than mortality in normal childbirth” supported recognition of an unqualified right to abortion in that period. Ely, among others, challenged the Court’s empirics: “This [the medical safety of abortions relative to childbirth] is not in fact agreed to by all doctors—the data are of course severely limited—and the Court's view of the matter is plainly not the only one that is ‘rational’ under the usual standards.” In any case, “it has become commonplace for a drug or food additive to be universally regarded as harmless on one day and to be condemned as perilous on the next”—so how could “present consensus” among medical experts plausibly ground a durable constitutional right?
It can’t. “[T]ime has overtaken some of Roe’s factual assumptions,” the Court noted in Planned Parenthood of Southeastern Pennsylvania v. Casey. “[A[dvances in maternal health care allow for abortions safe to the mother later in pregnancy than was true in 1973, and advances in neonatal care have advanced viability to a point somewhat earlier.” Accordingly, culturally fueled enactments of and challenges to abortion laws continue—repeatedly confronting the Justices with new empirical questions to which their answers are denounced as motivated by “personal values.” * * *
The only citizens who are likely to see the Court’s decision as more authoritative and legitimate when it resorts to empirical fact-finding in culturally charged cases are the ones whose cultural values are affirmed by the outcome. * * *
This factionalized environment incubates collective cynicism—about both the political neutrality of courts and about the motivations behind empirical arguments in policy discourse generally. Indeed, Justice Scalia’s extraordinary dissent in Plata synthesizes these two forms of skepticism.
It was “fanciful,” Scalia asserted, to think that the three district court judges “relied solely on the credibility of the testifying expert witnesses.” One might, at first glance, see him as merely rehearsing his standard diatribe against “judicial activism.” But this is actually a conclusion that Scalia deduces from premises—ones that don’t enter into his standard harangue—about the nature of empirical evidence and policymaking. The experts’ testimony, he explains, dealt with “broad empirical predictions”—ones akin to whether “deficit spending will . . . lower the unemployment rate,” or whether “the continued occupation of Iraq will decrease the risk of terrorism.” For Scalia, the beliefs one forms on the basis of that sort of evidence are “inevitably . . . based in large part upon policy views.” It follows that “of course different district judges, of different policy views, would have ‘found’ that rehabilitation would not work and that releasing prisoners would increase the crime rate.” “I am not saying,” Justice Scalia stresses, “that the District Judges rendered their factual findings in bad faith.” “I am saying that it is impossible for judges to make ‘factual findings’ without inserting their own policy judgments, when the factual findings are policy judgments.”
In effect, Scalia is telling us to wise up, not to be snookered by the Court. Sure, people claim that their “policy positions” on matters such as crime control, fiscal policy, and national security are based on empirical evidence. But we all know that things are in fact the other way around: what one makes of empirical evidence is “inevitably” and “necessarily based . . . upon policy views.” At one point, Scalia describes the district court judges as having “dress[ed]-up” their “policy judgments” as “factual findings.” But those judges weren’t, in his mind, doing anything different from what anyone “inevitably” does when making “broad empirical predictions”: those sorts of “factual findings are policy judgments.” Empirical evidence on the consequences of public policy should be directed to “legislators and executive officials” rather than “the Third Branch,” Scalia insists. The reason, though, isn’t that the former are better situated to draw reliable inferences from the best available data. On the contrary, it is that it is a conceit to think that reliable inferences can possibly be drawn from empirical evidence on policy consequences—and so “of course” it is the “policy preferences” of the majority, rather than those of unelected judges, that should control.
It is hard to say what is more extraordinary: the substance of Scalia’s position or the knowing tone with which he invites us to credit it. One might think it would be shocking to see a Justice of the Supreme Court so brazenly deny the intention (capacity even) of democratically accountable officials to make rational use of science to promote the common good. But Scalia could not expect his logic to persuade unless he anticipated that readers would readily concur (“of course”) that empirical arguments in policy debate are a kind of charade.
Scalia, of course, had good reason to expect such assent. His argument reflects the perspective of someone inside the cognitively illiberal state—who senses that motivated reasoning is shaping everyone else’s perceptions, and who accepts that it must also be shaping his, even if at any given moment he is unaware of its influence. We have all experienced this frame of mind. The critical question, though, is whether we really believe that what we are experiencing when we feel this way is inevitable and normal—a style of collective engagement with empirical evidence that should in fact be treated as normative, as Scalia asserts, for the performance of our institutions. I don’t think that we do . . . .